<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: alexbeloi</title><link>https://news.ycombinator.com/user?id=alexbeloi</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 05 May 2026 16:41:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=alexbeloi" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by alexbeloi in "23andMe confirms hackers stole ancestry data on 6.9M users"]]></title><description><![CDATA[
<p>Fun fact: they can sometimes narrow crime scene DNA down to a single person by having enough partial matches from their (potentially distant) relatives. I can't remember which DNA database was used, but some cases were solved this way; IIRC it raised a bunch of legal questions about whether you can search a database that way.<p>I think this is the article that talked about it (apologies for the paywall): <a href="https://www.nytimes.com/2021/12/27/magazine/dna-test-crime-identification-genome.html" rel="nofollow noreferrer">https://www.nytimes.com/2021/12/27/magazine/dna-test-crime-i...</a></p>
]]></description><pubDate>Tue, 05 Dec 2023 09:30:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=38528643</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=38528643</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38528643</guid></item><item><title><![CDATA[New comment by alexbeloi in "Artificial intelligence is permeating business at last"]]></title><description><![CDATA[
<p>Seems like a steep price, but I can see this becoming a marketable skill at some point. It feels similar to knowing how to google well: we've all internalized some Google/search concepts like 'unique words' -> 'narrower results' and 'full sentences' -> 'phrase matching'. There will probably be similar nuances to writing good generative-art prompts.</p>
]]></description><pubDate>Thu, 08 Dec 2022 09:08:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=33905881</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=33905881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33905881</guid></item><item><title><![CDATA[New comment by alexbeloi in "Apple is becoming an ad company despite privacy claims"]]></title><description><![CDATA[
<p>They aren't mutually exclusive, but their incentives are opposed; usually the incentives that bring in the money win.</p>
]]></description><pubDate>Thu, 24 Nov 2022 22:11:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=33736819</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=33736819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33736819</guid></item><item><title><![CDATA[New comment by alexbeloi in "Algorithms by Jeff Erickson"]]></title><description><![CDATA[
<p>> What's the best way to prepare for DP in interviews?<p>Do 100 of these problems: <a href="https://leetcode.com/tag/dynamic-programming/" rel="nofollow">https://leetcode.com/tag/dynamic-programming/</a></p>
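For a taste of what those problems look like, here is a minimal sketch of one classic warm-up from that tag (counting the ways to climb n stairs taking 1 or 2 steps at a time, solved bottom-up; the code is illustrative, not from the comment above):

```python
# Classic warm-up DP: number of distinct ways to climb n stairs
# taking 1 or 2 steps at a time (bottom-up tabulation).

def climb_stairs(n: int) -> int:
    if n <= 1:
        return 1
    ways = [0] * (n + 1)
    ways[0] = ways[1] = 1  # base cases: one way to stand still or take one step
    for i in range(2, n + 1):
        # Recurrence: the last move was either a 1-step or a 2-step.
        ways[i] = ways[i - 1] + ways[i - 2]
    return ways[n]

print(climb_stairs(5))  # 8
```

The whole game in these problems is spotting the recurrence and the base cases; the table (or memo) is mechanical once you have those.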
]]></description><pubDate>Tue, 09 Feb 2021 21:31:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=26082623</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=26082623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26082623</guid></item><item><title><![CDATA[New comment by alexbeloi in "Algorithms by Jeff Erickson"]]></title><description><![CDATA[
<p>It doesn't punish anyone. If you want solutions, then the book is not for you, that's all. The author is not obligated to accommodate every audience, or even a majority audience.</p>
]]></description><pubDate>Tue, 09 Feb 2021 21:13:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=26082442</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=26082442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26082442</guid></item><item><title><![CDATA[New comment by alexbeloi in "Reinforcement Learning at Facebook"]]></title><description><![CDATA[
<p>I was oversimplifying, but I stand by my words.<p>It does optimize for profit, just with extra steps. For most FB ads products (the ones you see in feed), advertisers pay based on conversions (views, clicks, likes, joins, purchases, etc.), so revenue is directly tied to conversions. Then there are extra steps that weigh in the fact that revenue != profit, plus advertiser retention, repetitiveness, long-term user value, etc.</p>
]]></description><pubDate>Tue, 02 Feb 2021 16:40:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=26003149</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=26003149</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26003149</guid></item><item><title><![CDATA[New comment by alexbeloi in "Reinforcement Learning at Facebook"]]></title><description><![CDATA[
<p>Ads optimizes for profit, all other content is broadly optimized for <i>meaningful social interaction</i> and against <i>problematic content</i>.<p><a href="https://www.facebook.com/business/news/news-feed-fyi-bringing-people-closer-together" rel="nofollow">https://www.facebook.com/business/news/news-feed-fyi-bringin...</a><p><a href="https://about.fb.com/news/2019/04/remove-reduce-inform-new-steps/" rel="nofollow">https://about.fb.com/news/2019/04/remove-reduce-inform-new-s...</a></p>
]]></description><pubDate>Mon, 01 Feb 2021 19:14:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=25992351</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=25992351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25992351</guid></item><item><title><![CDATA[New comment by alexbeloi in "Practical Deep Learning for Coders 2019"]]></title><description><![CDATA[
<p>Machine learning is a catch-all term for optimizing statistical models with data.<p>The simplest example, one you are very likely already familiar with, is fitting a 'best fit line' to some xy scatter plot. You start by assuming (model choice) that the relationship between `x` and `y` is linear, e.g. `y = m*x + b`; then you use data (xy points) to figure out the most likely values for `m` and `b`. You can then make predictions for new `x_new` values by plugging them into the fitted line to get `y_new`.<p>Machine learning often manifests as a two-step process: first feature extraction, then fitting features to a desired output. Deep learning combines these into an end-to-end process to eliminate the 'human in the loop' problems that come from feature extraction.<p>Example: you want to predict who should win a chess game in a given board state.<p>* Feature extraction (what information you think matters): what pieces does white have, what pieces does black have, is white in check, is black in check, how many valid squares can the white king move to, how many valid squares can the black king move to, etc.<p>* Fitting: make an assumption about the relationship between features and outcome (model choice), then fit the model using data (features, outcome).<p>The Deep Blue engine that played Kasparov used around 8,000 features (I'm not sure if that's the feature vector size or the number of features). As you can imagine, feature extraction is highly dependent on expert knowledge of the problem and will often fail to cover unknown situations/cases.<p>Deep learning models aim to avoid the limitations of expert knowledge by taking raw data (e.g. the occupancy of each square on a chess board) and extracting features implicitly rather than relying on explicit human formulas. This has also opened up new possibilities in areas where expert knowledge has made little progress in the past (e.g. there is not much an expert can say about which pixel features might indicate that an image contains a dog or cat).</p>
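The 'best fit line' step above can be sketched in a few lines of Python; this is a minimal illustration using the closed-form least-squares formulas, with made-up points that lie exactly on y = 2x + 1:

```python
# A minimal sketch of fitting a 'best fit line' y = m*x + b by least squares,
# using the closed-form formulas (illustrative data, no libraries needed).

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of (x, y) divided by the variance of x.
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x  # intercept so the line passes through the means
    return m, b

# Points lying exactly on y = 2x + 1 (the model choice is "linear").
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
m, b = fit_line(xs, ys)

# Predict y for a new x by plugging it into the fitted line.
x_new = 5.0
print(m, b, m * x_new + b)  # recovers m=2, b=1, prediction 11
```

Everything else in ML is, loosely, this picture with fancier models and higher-dimensional data.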
]]></description><pubDate>Fri, 25 Jan 2019 19:57:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=19001870</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=19001870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=19001870</guid></item><item><title><![CDATA[New comment by alexbeloi in "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II"]]></title><description><![CDATA[
<p>You'll likely be happy to hear that this has been (is being) addressed.<p>I watched the live broadcast of this announcement, where they did a recap of all 10 previous matches (against TLO and Mana) and talked about this concern. During today's announcement they presented a new model that could not see the whole map and had to use camera movement to focus properly. The DeepMind team said it took somewhat longer to train, but they were able to achieve the same level of performance according to their metrics and play-testing against the previous version.<p>However...<p>They did a live match pitting the latest version (with camera movement) against LiquidMana (his 6th match against AlphaStar), and Mana won! Mana was able to repeatedly do hit-and-run immortal drop harassment in AlphaStar's base, forcing it to bring troops back to defend, causing it to fall behind in production and supply over time and ultimately lose a major battle.</p>
]]></description><pubDate>Fri, 25 Jan 2019 00:51:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=18994544</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=18994544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=18994544</guid></item><item><title><![CDATA[New comment by alexbeloi in "Toyota Investing $500M in Uber in Driverless Car Pact"]]></title><description><![CDATA[
<p>Movies are filmed at 24fps, so the reasoning is that humans are confident they aren't missing any significant information between frames; it should be possible to build a 'mental model' of a road scene at the same fps to human skill level.<p>In the future we'll likely have super-human spatial and temporal resolution; right now more improvement has come from the highest possible spatial resolution with the minimal plausible temporal resolution.</p>
]]></description><pubDate>Tue, 28 Aug 2018 03:24:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=17856364</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17856364</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17856364</guid></item><item><title><![CDATA[New comment by alexbeloi in "Toyota Investing $500M in Uber in Driverless Car Pact"]]></title><description><![CDATA[
<p>If by "do that" you mean mimic what a real driver would do for a specific set of sensor inputs, that is precisely what ML tries to do.<p>To understand the difficulty, it's important to consider that the sensor input is very large. Don't think of it as twenty range finders around the car, but rather as a 360-degree, medium-resolution color + depth image (about 0.5 million data points arriving at 30 fps).<p>It's difficult because you will never encounter the same set of sensor inputs twice, so you can't treat it like a search-space problem. Once you've accepted that, you're in AI/ML territory: you might try to reason about what the closest known set of sensor inputs and its action would be (classical AI, expert system), but that is impractically difficult in a 0.5-million-dimensional search space; or you can train an ML model to 'reason' about the sensor space and make a decision about the appropriate action.<p>Approaches using a small number of sensors can do automatic braking and smarter cruise control, but haven't been shown to succeed at navigating and making strategic decisions. The current belief is that more can be done with denser sensors and more data, and that seems to be the case. There are people working on reducing the sensor density requirement, but the main focus right now is building a successful and safe self-driving car, regardless of sensor and compute costs.</p>
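To put those numbers in perspective, a rough back-of-the-envelope data-rate calculation (the channel and byte counts here are my assumptions, purely for illustration):

```python
# Rough data-rate estimate for the sensor input described above.
# Given in the comment: ~0.5 million data points per frame at 30 fps.
# My assumptions for illustration: 4 channels (RGB + depth), 1 byte each.
points_per_frame = 500_000
fps = 30
channels = 4
bytes_per_channel = 1

points_per_second = points_per_frame * fps
bytes_per_second = points_per_second * channels * bytes_per_channel

print(points_per_second)       # 15,000,000 points per second
print(bytes_per_second / 1e6)  # 60.0 MB per second
```

Even at these conservative assumptions, the model has to digest tens of megabytes of raw input every second, which is why nearest-neighbor-style lookups over past situations are a non-starter.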
]]></description><pubDate>Tue, 28 Aug 2018 01:14:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=17855890</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17855890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17855890</guid></item><item><title><![CDATA[New comment by alexbeloi in "Machine Learning Guides"]]></title><description><![CDATA[
<p>The issue I've heard about from a few people in hiring is that there is a surplus of junior data scientists from these camps and a shortage of senior data scientists to manage them. The problems are not dissimilar to tech hiring in general, but companies need a lot more SWEs than data scientists.</p>
]]></description><pubDate>Mon, 23 Jul 2018 21:33:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=17596227</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17596227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17596227</guid></item><item><title><![CDATA[New comment by alexbeloi in "How Netflix became a billion-dollar titan"]]></title><description><![CDATA[
<p>It seems to me that Netflix used to have better recommendations. It's unclear why/how/when it went bad; my suspicion is that they changed their metric and it's having bad side effects, or that their old models didn't scale as their user base grew quickly.<p>I've built deep learning recommendation systems in production for clients with millions of users, and it's sometimes surprisingly difficult to beat the "most trending <products>" baseline if all you care about is views/purchases. That baseline will work in the short term to meet business goals, but it hurts the user experience over time and will inevitably increase churn.</p>
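For context, the 'most trending' baseline mentioned above amounts to counting recent interactions per item and recommending the same top few items to everyone; a minimal sketch with a hypothetical event log:

```python
from collections import Counter

# A minimal sketch of a "most trending" recommendation baseline:
# count recent views/purchases per item and recommend the top-k
# items to every user, ignoring personalization entirely.
# The event log below is hypothetical.
events = [
    ("user1", "itemA"), ("user2", "itemA"), ("user3", "itemB"),
    ("user1", "itemC"), ("user4", "itemA"), ("user2", "itemB"),
]

def trending(events, k=2):
    counts = Counter(item for _, item in events)
    return [item for item, _ in counts.most_common(k)]

print(trending(events))  # ['itemA', 'itemB'] — itemA has 3 events, itemB has 2
```

Because popular items keep getting shown and therefore keep getting clicked, this baseline is self-reinforcing on short-term metrics, which is exactly why it is hard to beat on views/purchases while still being bad for long-term user experience.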
]]></description><pubDate>Thu, 05 Jul 2018 17:25:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=17465078</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17465078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17465078</guid></item><item><title><![CDATA[New comment by alexbeloi in "Conservation of Intent: why A/B tests aren’t as effective as they look"]]></title><description><![CDATA[
<p>That is actually one of the biggest contributing factors to the replication crisis in science; lots of scientists have been making this error for decades. It's very much not obvious.</p>
]]></description><pubDate>Tue, 03 Jul 2018 17:01:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=17451834</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17451834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17451834</guid></item><item><title><![CDATA[New comment by alexbeloi in "With more students boasting flashy GPAs, academic honors lose their luster"]]></title><description><![CDATA[
<p>The valedictorian is an 'executor', and likely very good at crushing tasks to get the job done. The self-starter is creative and malleable, and likely very good at complex problems with lots of unknowns.<p>Those people aren't interchangeable, and it's preferable to have both if you want things to run smoothly. It sounds like you're speaking from a position of upper management, where you don't have the time to manage an executor-type person and would prefer more independent people directly under you.</p>
]]></description><pubDate>Mon, 02 Jul 2018 17:23:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=17443611</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17443611</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17443611</guid></item><item><title><![CDATA[New comment by alexbeloi in "With more students boasting flashy GPAs, academic honors lose their luster"]]></title><description><![CDATA[
<p>IQ scores are also increasing[0].<p>Are we actually smarter than our grandparents? Or has the education system trained us to be better test takers (one of the suggested contributing factors)?<p>[0] <a href="https://en.wikipedia.org/wiki/Flynn_effect" rel="nofollow">https://en.wikipedia.org/wiki/Flynn_effect</a></p>
]]></description><pubDate>Mon, 02 Jul 2018 17:22:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=17443597</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17443597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17443597</guid></item><item><title><![CDATA[New comment by alexbeloi in "How a Diablo expansion led to behind-the-scenes trouble"]]></title><description><![CDATA[
<p>The relevant metrics would be daily/monthly active users (DAU/MAU) and churn rate. But Blizzard doesn't report those numbers unless they're good (e.g. Hearthstone/Overwatch hitting record MAU last year), and it has never released any such numbers for Diablo 3.<p>This article[0] claims Blizzard's overall MAU was close to flat year-over-year from Q4 2016 to Q4 2017. Knowing that Overwatch and Hearthstone are hitting record-high MAU while overall MAU is flat means the other games (D3, SC2, HotS) are losing players.<p>It's a success in the way Matrix Revolutions was a success: massively profitable[1] yet a disappointment to fans (see the Diablo 3 fan ratings[2]).<p>[0] <a href="https://venturebeat.com/2018/02/08/blizzards-monthly-active-users-for-q4-2017-drop-just-a-bit-from-2016s-number/" rel="nofollow">https://venturebeat.com/2018/02/08/blizzards-monthly-active-...</a><p>[1] <a href="https://www.the-numbers.com/movie/Matrix-Revolutions-The#tab=summary" rel="nofollow">https://www.the-numbers.com/movie/Matrix-Revolutions-The#tab...</a><p>[2] <a href="http://www.metacritic.com/game/pc/diablo-iii" rel="nofollow">http://www.metacritic.com/game/pc/diablo-iii</a></p>
]]></description><pubDate>Mon, 02 Jul 2018 00:18:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=17438509</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17438509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17438509</guid></item><item><title><![CDATA[New comment by alexbeloi in "Darpa invests $100M in a silicon compiler"]]></title><description><![CDATA[
<p>There's a great blog post here (<a href="https://wp.josh.com/2017/10/23/adventures-in-autorouting/" rel="nofollow">https://wp.josh.com/2017/10/23/adventures-in-autorouting/</a>) about some different auto-routing software.</p>
]]></description><pubDate>Fri, 29 Jun 2018 21:51:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=17428127</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17428127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17428127</guid></item><item><title><![CDATA[New comment by alexbeloi in "Mathematician-M.D. claims to have solved the 110-year-old Lindelöf hypothesis"]]></title><description><![CDATA[
<p>Not ELI5, but a comparison to the Riemann Hypothesis (RH).<p>RH says that all the non-trivial zeros of the Riemann zeta function lie on the line (1/2) + iy in the complex plane, i.e. there are no zeros off that line in the critical strip.<p>The Lindelof hypothesis (in one equivalent form) says that for any fixed real part greater than 1/2, the number of zeros with at least that real part and with imaginary part between y and y+1 is much smaller (little-o) than log(y) as y grows.<p>So it can be thought of as a weaker version of RH (RH implies it), but it is still very, very difficult. The fact that Lindelof has been an open problem for over a hundred years (and is a non-trivial weakening of RH) speaks to how difficult RH is as well.<p>Like RH, Lindelof implies things about primes, and also (like RH) has lots of implications for interesting prime-like (irreducible) objects in other settings.</p>
]]></description><pubDate>Thu, 28 Jun 2018 02:49:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=17413692</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17413692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17413692</guid></item><item><title><![CDATA[New comment by alexbeloi in "The Surface Book 2 is everything the MacBook Pro should be"]]></title><description><![CDATA[
<p>> Not completely failing to find files and apps based on how I typed them (“A” brings up “abc” but typing “ab” makes “abc” go away!?).<p>It's 'basic' stuff like this, and bizarre-feeling UI pauses at blank windows, that make it unbearable to work with. It's the opposite of snappy and constantly interrupts a productive workflow, making me wonder 'why is it doing this?' rather than thinking about my work. It's like having an otherwise perfect phone that incessantly buzzes every 1-5 minutes (at random): you would throw it against the wall in less than a day.</p>
]]></description><pubDate>Wed, 27 Jun 2018 19:28:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=17411038</link><dc:creator>alexbeloi</dc:creator><comments>https://news.ycombinator.com/item?id=17411038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17411038</guid></item></channel></rss>