<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dkislyuk</title><link>https://news.ycombinator.com/user?id=dkislyuk</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 12:28:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dkislyuk" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dkislyuk in "Softmax, can you derive the Jacobian? And should you care?"]]></title><description><![CDATA[
<p>I agree that "it has nice derivatives" is a great empirical reason to use a specific function in ML, but it doesn't prove that it's the best function to use. And even if a derivative term looks more complex, that doesn't necessarily mean it is more expensive to compute, so that can't be the only criterion for selecting a function.<p>Luckily, there are more axiomatic reasons why softmax is the preferred way to map inputs to a probability distribution.</p>
]]></description><pubDate>Fri, 01 May 2026 18:50:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47978545</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=47978545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47978545</guid></item><item><title><![CDATA[New comment by dkislyuk in "Softmax, can you derive the Jacobian? And should you care?"]]></title><description><![CDATA[
<p>(meant to say, scale-invariance of probability ratios, or shift-invariance of the inputs)</p>
]]></description><pubDate>Fri, 01 May 2026 17:04:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47977195</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=47977195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47977195</guid></item><item><title><![CDATA[New comment by dkislyuk in "Softmax, can you derive the Jacobian? And should you care?"]]></title><description><![CDATA[
<p>Something that really helped me grasp the foundational relevance of the softmax is to justify from first principles why e^x shows up in the numerator of the preferred mapping function (1). The stated problem of mapping raw inputs/scores/logits to a probability distribution can be solved by a bunch of arbitrary functions, and the usual justification given for a softmax is "it has nice derivatives", which is empirically useful but not satisfying.<p>The sketch of the justification is something like this. We first need a function that maps from (-inf, inf) to a unique positive value, and then we need to normalize the resulting values. Setting aside the normalizing step, we imagine an f(x) that needs to fit the following properties:<p>1. It should be strictly positive, so that we can normalize it into a (0, 1) probability.<p>2. It should preserve the relative ordering of the logits so they can be interpreted as scores. Thus f(x) should be monotonically increasing.<p>3. It should be continuous and differentiable everywhere, since we are interested in learning through this function via backpropagation.<p>4. It should be shift-invariant with respect to the input, as we don't want the model to have to learn some preferred logit-space where there is a stronger learning signal. For example, applying softmax to the values `(-1, 1, 3, 5)` yields the same result as applying it to `(9, 11, 13, 15)`. This property can also be restated as a "scale invariance of probability ratios": the ratio between f(x+c) and f(x) for a given c is a constant, independent of x. One useful interpretation of this property is that the learning domain or "gradient-learning surface" is stable, and high-magnitude initializations won't impede the learning process.<p>Taken at face value, these properties uniquely define e^x (up to the choice of base, which amounts to rescaling the logits, i.e. a temperature). The last property is actually pretty debatable, because in the context of machine learning we do have a "preferred logit-space", namely close to zero, for numerical stability. But there are other ways to enforce this in a post-hoc manner (e.g. weight initialization, normalization layers, etc.)<p>Another property that uniquely justifies e^x, and thus softmax, is IIA (independence of irrelevant alternatives), which states that the odds for two classes, p_i / p_j, <i>only</i> depend on the logits/inputs for i and j; an irrelevant class k has no impact. For example, for Softmax([5, 7, 1]) and Softmax([5, 7, 10]), the odds for the first two values (p_i/p_j) are the same in both distributions, regardless of the third value.<p>Finally, if the "desired properties" approach is not satisfying, a more theoretical route for justifying the form of the softmax uses the framework of maximum entropy (E. T. Jaynes published this in 1957 to justify the Boltzmann distribution).<p>TL;DR, softmax is not the only function that maps unnormalized values to a probability distribution, but it can be justified through axiomatic properties.<p>(1) one could say that the exponential shows up from the Boltzmann distribution, but then the same question applies.</p>
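<p>A minimal numerical sketch of the shift-invariance and IIA properties (Python/numpy; the softmax helper below is just the standard max-subtracted formulation, written out for illustration):</p>
<pre><code>import numpy as np

def softmax(z):
    # subtracting the max is a numerical-stability trick; by shift invariance
    # it does not change the result
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

# shift invariance: adding a constant to every logit leaves the distribution unchanged
assert np.allclose(softmax([-1, 1, 3, 5]), softmax([9, 11, 13, 15]))

# IIA: the odds p_i / p_j ignore every other logit
p = softmax([5, 7, 1])
q = softmax([5, 7, 10])
assert np.isclose(p[0] / p[1], q[0] / q[1])  # both equal exp(5 - 7)
</code></pre>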
]]></description><pubDate>Fri, 01 May 2026 13:29:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47974602</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=47974602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47974602</guid></item><item><title><![CDATA[New comment by dkislyuk in "Softmax, can you derive the Jacobian? And should you care?"]]></title><description><![CDATA[
<p>Softmax is defined over an arbitrary vector of raw real numbers. Stating that those inputs are "logits" is applying post-hoc semantics to what the model is learning. One of the key properties of a softmax is shift invariance of the inputs (e.g. softmax([-1, 1, 3, 5]) == softmax([9, 11, 13, 15])), so it is easiest to just think of it as operating on a vector of unnormalized raw scores, which is the more colloquial definition of logit.<p>(also, log(p) is not the formal definition of a logit)</p>
]]></description><pubDate>Fri, 01 May 2026 12:47:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47974174</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=47974174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47974174</guid></item><item><title><![CDATA[New comment by dkislyuk in "Single bone in Spain offers first direct evidence of Hannibal's war elephants"]]></title><description><![CDATA[
<p>The Paul Cooper production is great. The Rest Is History also just finished a long series (spread out over three seasons, starting with episode 421) on the Punic Wars, similarly well done.</p>
]]></description><pubDate>Wed, 11 Feb 2026 22:13:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46981951</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=46981951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46981951</guid></item><item><title><![CDATA[Writing Textbooks for Oneself]]></title><description><![CDATA[
<p>Article URL: <a href="https://dkislyuk.com/writing-textbooks">https://dkislyuk.com/writing-textbooks</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46879108">https://news.ycombinator.com/item?id=46879108</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 03 Feb 2026 23:43:08 +0000</pubDate><link>https://dkislyuk.com/writing-textbooks</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=46879108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46879108</guid></item><item><title><![CDATA[New comment by dkislyuk in "Important machine learning equations"]]></title><description><![CDATA[
<p>Presenting information theory as a series of independent equations like this does a disservice to the learning process. Cross-entropy and KL-divergence are directly derived from information entropy, where InformationEntropy(P) represents the baseline number of bits needed to encode events from the true distribution P, CrossEntropy(P, Q) represents the (average) number of bits needed to encode P with a suboptimal distribution Q, and KL-divergence (better referred to as relative entropy) is the difference between these two values (how many more bits are needed to encode P with Q, i.e. quantifying the inefficiency):<p>relative_entropy(p, q) = cross_entropy(p, q) - entropy(p)<p>Information theory is some of the most accessible and approachable math for ML practitioners, and it shows up everywhere. In my experience, it's worthwhile to dig into the foundations as opposed to just memorizing the formulas.<p>("bits" assumes base-2 logarithms here)</p>
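<p>A quick sketch of that identity in code (Python/numpy, base-2 logs; the two distributions are made up purely for illustration):</p>
<pre><code>import numpy as np

def entropy(p):
    return -np.sum(p * np.log2(p))          # bits to encode P with an optimal code

def cross_entropy(p, q):
    return -np.sum(p * np.log2(q))          # bits to encode P using a code built for Q

def relative_entropy(p, q):                 # KL(P || Q), the inefficiency
    return np.sum(p * np.log2(p / q))

p = np.array([0.5, 0.25, 0.25])             # "true" distribution
q = np.array([0.4, 0.4, 0.2])               # model distribution

assert np.isclose(relative_entropy(p, q), cross_entropy(p, q) - entropy(p))
</code></pre>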
]]></description><pubDate>Thu, 28 Aug 2025 15:16:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45053271</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=45053271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45053271</guid></item><item><title><![CDATA[New comment by dkislyuk in "Bill Atkinson has died"]]></title><description><![CDATA[
<p>From Walter Isaacson's _Steve Jobs_:<p>> One of Bill Atkinson’s amazing feats (which we are so accustomed to nowadays that we rarely marvel at it) was to allow the windows on a screen to overlap so that the “top” one clipped into the ones “below” it. Atkinson made it possible to move these windows around, just like shuffling papers on a desk, with those below becoming visible or hidden as you moved the top ones. Of course, on a computer screen there are no layers of pixels underneath the pixels that you see, so there are no windows actually lurking underneath the ones that appear to be on top. To create the illusion of overlapping windows requires complex coding that involves what are called “regions.” Atkinson pushed himself to make this trick work because he thought he had seen this capability during his visit to Xerox PARC. In fact the folks at PARC had never accomplished it, and they later told him they were amazed that he had done so. “I got a feeling for the empowering aspect of naïveté”, Atkinson said. “Because I didn’t know it couldn’t be done, I was enabled to do it.” He was working so hard that one morning, in a daze, he drove his Corvette into a parked truck and nearly killed himself. Jobs immediately drove to the hospital to see him. “We were pretty worried about you”, he said when Atkinson regained consciousness. Atkinson gave him a pained smile and replied, “Don’t worry, I still remember regions.”</p>
]]></description><pubDate>Sat, 07 Jun 2025 17:34:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44211144</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=44211144</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44211144</guid></item><item><title><![CDATA[New comment by dkislyuk in "Ask HN: Who is hiring? (May 2025)"]]></title><description><![CDATA[
<p>Pinterest | Hybrid @ {San Francisco, New York, or Seattle} | Full-time + internships<p>Pinterest’s Advanced Technologies Group (ATG) is an ML applied research organization within the company, focusing on large-scale foundation models (e.g. multimodal encoders, graph representation models, content embeddings, generative models, computer vision signals, etc.) that are deployed throughout the company. ATG is composed primarily of ML engineers and researchers, backed by a strong infrastructure team and a small product prototyping + design team for deploying new AI/ML features in Pinterest. The organization is highly collaborative, research-driven, and delivers deep impact. The team is hiring for several engineering positions:<p>- iOS engineer for generative AI products: we are looking for senior or staff iOS engineers who have a track record of fast prototyping work in the AI space — no deep machine learning domain expertise is required, but the ideal candidate would be comfortable interfacing with ATG’s ML teams daily. An engineer in this role would be building entirely new features for Pinterest leveraging emerging technologies across LLMs, visual models, recommendation systems, and more.<p>- Computer vision domain specialist: we are looking for researchers or applied engineers with industry experience in the computer vision / visual-language modeling field (e.g. multimodal representation learning, visual diffusion models, visual encoders/decoders, etc.). We encourage the team to regularly publish, and the team works in a highly collaborative, research-driven environment, with full access to the Pinterest image-board-style graph for large-scale pre-training.<p>Please reach out to me directly (dkislyuk@pinterest.com) if you’re interested in either of these roles.<p>Additionally, the team is currently hiring for fall 2025 ML research internships for Master’s / PhD students, with opportunities to publish or to work on frontier models in the visual understanding and multimodal representation learning space: <a href="https://grnh.se/dad7c60e1us" rel="nofollow">https://grnh.se/dad7c60e1us</a></p>
]]></description><pubDate>Fri, 02 May 2025 01:59:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43865402</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=43865402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43865402</guid></item><item><title><![CDATA[New comment by dkislyuk in "What Is Entropy?"]]></title><description><![CDATA[
<p>This is a great characterization of self-information. I would add that the `log` term doesn't just conveniently appear to satisfy the additivity axiom; that axiom is the exact historical reason the log was invented in the first place. As in, the log function was specifically defined as the family of functions satisfying f(xy) = f(x) + f(y).<p>So, self-information is uniquely defined by (1) assuming that information is a function transform of probability, (2) that no information is transmitted for an event that certainly happens (i.e. f(1) = 0), and (3) that independent information is additive. h(x) = -log p(x) is, up to the choice of base, the only family of functions that satisfies all of these properties.</p>
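<p>A tiny numerical check of those properties (Python, base-2 logs; the coin-flip probabilities are just an example):</p>
<pre><code>import numpy as np

def self_information(p):
    return -np.log2(p)                     # h(x) = -log p(x), in bits

# (2) a certain event carries no information
assert self_information(1.0) == 0.0

# (3) independent events: information adds while probabilities multiply
p_x, p_y = 0.5, 0.5                        # e.g. two fair coin flips
assert np.isclose(self_information(p_x * p_y),
                  self_information(p_x) + self_information(p_y))  # 2 bits = 1 + 1
</code></pre>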
]]></description><pubDate>Tue, 15 Apr 2025 12:22:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43691716</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=43691716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43691716</guid></item><item><title><![CDATA[New comment by dkislyuk in "Has the decline of knowledge work begun?"]]></title><description><![CDATA[
<p>I think commodification is directly tied to a perceived drop in quality. For example, if the barriers to making a video game keep going down, there will be far more attempts, and per Sturgeon's law, the majority will be of low quality. And we have a recency bias where we over-index on the last few releases we've seen and only remember the good stuff from a generation or two ago. But amid the multitude of low-effort, AI-generated video games out there, we still get gems like Factorio and Valheim.</p>
]]></description><pubDate>Thu, 27 Mar 2025 13:24:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43493367</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=43493367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43493367</guid></item><item><title><![CDATA[New comment by dkislyuk in "The Lost Art of Logarithms"]]></title><description><![CDATA[
<p>Presumably the book from this thread by Charles Petzold will be a great canonical resource, but what originally got me curious was a quote by Howard Eves that I came across:<p>> One of the anomalies in the history of mathematics is the fact that logarithms were discovered before exponents were in use.<p>One can treat the discovery of logarithms as the search for a computational tool to turn multiplication (which was difficult in the 17th century) into addition. There were earlier approaches for simplifying multiplication dating back to antiquity (quarter square multiplication, prosthaphaeresis), and A Brief History of Logarithms by R. C. Pierce covers this, framing it as establishing correspondences between geometric and arithmetic sequences. Playing around with functions that could possibly fit the functional equation f(ab) = f(a) + f(b) is a good, if manual, way to convince oneself that such functions do exist and that this is the defining characteristic of the logarithm (and not just a convenient property). For example, log probability is central to information theory and thus many ML topics, and the fundamental reason is that Claude Shannon wanted a transformation on top of probability (self-information) that would turn the probability of multiple independent events into an addition; the aforementioned "f" is the transformation that fits this additive property (and a few others), hence log() everywhere.<p>Interestingly, the logarithm “algorithm” was considered quite groundbreaking at the time; Johannes Kepler, a primary beneficiary of the breakthrough, dedicated one of his books to Napier. R. C. Pierce wrote:<p>> Indeed, it has been postulated that logarithms literally lengthened the life spans of astronomers, who had formerly been sorely bent and often broken early by the masses of calculations their art required.</p>
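<p>For concreteness, the multiplication-into-addition trick in a few lines of Python (math.log10 standing in for a log table; the input values are made up purely for illustration):</p>
<pre><code>import math

# Multiply two large values using only addition (plus two "table lookups"):
a, b = 57_293.0, 81_442.0
log_a, log_b = math.log10(a), math.log10(b)    # look up the logs
product_via_addition = 10 ** (log_a + log_b)   # add, then take the anti-log

assert math.isclose(product_via_addition, a * b, rel_tol=1e-9)
</code></pre>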
]]></description><pubDate>Fri, 14 Mar 2025 02:22:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=43359031</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=43359031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43359031</guid></item><item><title><![CDATA[New comment by dkislyuk in "The Lost Art of Logarithms"]]></title><description><![CDATA[
<p>Yes, but such a property was not available to Napier, and from a teaching perspective, it requires understanding exponentials and their characterizations first. Starting from the original problem of how to simplify large multiplications seems like a more grounded way to introduce the concept.</p>
]]></description><pubDate>Thu, 13 Mar 2025 21:56:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=43357625</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=43357625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43357625</guid></item><item><title><![CDATA[New comment by dkislyuk in "The Lost Art of Logarithms"]]></title><description><![CDATA[
<p>I've found that looking at the original motivation for logarithms is more elucidating than the way the topic is presented in grade school. Thinking through the functional form that solves the multiplication problem Napier was facing (how to simplify multiplying the large numbers arising in astronomical observations), f(ab) = f(a) + f(b), and why that leads to a unique family of functions, gives me a much better sense of why logarithms show up everywhere. This is in contrast to teaching them as the inverse of the exponential function, which was not how the concept was discussed until Euler. In fact, I think learning about mathematics in this way is more fun: what original problem was the author trying to solve, and what tools were available to them at the time?</p>
]]></description><pubDate>Thu, 13 Mar 2025 21:26:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=43357371</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=43357371</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43357371</guid></item><item><title><![CDATA[New comment by dkislyuk in "Ask HN: Who is hiring? (December 2024)"]]></title><description><![CDATA[
<p>Pinterest | San Francisco, New York, or hybrid/remote (US-only) | ML Engineer / Applied Research Scientist | Full-time<p>Pinterest’s Advanced Technologies Group (ATG) is hiring for an engineering position on our visual modeling team developing Pinterest Canvas. Canvas is a foundation text-to-image model developed internally to power various visualization, inpainting, and outpainting products. In this role, you’ll get to work with Pinterest’s rich visual-text dataset to build large-scale generative models that are continuously shipped to production. The core Canvas pod is a small group (~6 engineers) inside of ATG, which focuses on a broad variety of AI/ML initiatives, such as core computer vision, multimodal representation learning, heterogeneous graph neural networks, recommender systems, etc.<p>New grads are welcome to apply (preferably with a master’s or PhD). Candidates should have diffusion modeling experience (e.g. diffusion transformers, LoRA fine-tuning, complex {text, image} conditioning, style transfer, etc.) and some form of industry experience. Engineers within ATG have a lot of leeway in terms of product contribution, so both ML engineers and research scientists are welcome to apply. We encourage the team to regularly publish, and the role can be either in person (SF, NY) or hybrid, with in person preferred.<p>Please reach out to me directly (dkislyuk@pinterest.com) if you’re interested.</p>
]]></description><pubDate>Tue, 03 Dec 2024 04:56:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=42303175</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=42303175</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42303175</guid></item><item><title><![CDATA[New comment by dkislyuk in "Ask HN: Who is hiring? (June 2024)"]]></title><description><![CDATA[
<p>Pinterest Advanced Technologies Group | Staff Engineer, iOS and applied ML | US remote or hybrid in SF/NY | Full-time<p>We’re looking for strong engineers to help us build consumer AI products within Pinterest’s Advanced Technologies Group (ATG), our in-house ML research division. You’d be working with a full-stack team of ML researchers and product engineers on projects that bring LLMs, diffusion models, and other core models in the generative multimodal ML and computer vision space to life inside the Pinterest product. Projects include assistants, new ways to search, restyling of boards / pins / rooms, and many other new applications. Your work will directly impact how millions of users experience Pinterest.<p>Tracks:<p>*iOS engineer*: You’ll craft beautiful and intuitive user experiences for our new AI products. Strong command of iOS and UI/UX craftsmanship required. Bonus points if you’re an opinionated product thinker with a 0-to-1 mentality or have experience working with ML models. Please apply here: <a href="https://www.pinterestcareers.com/jobs/5426324/staff-ios-software-engineer-advanced-technologies-group/?gh_jid=5426324" rel="nofollow">https://www.pinterestcareers.com/jobs/5426324/staff-ios-soft...</a><p>*Applied ML*: If you think you’d be a better fit as an applied ML or research engineer with an interest in directly translating research into user-facing products, feel free to contact me directly (@dkislyuk everywhere).<p>The ML and product engineering teams on ATG work directly together, along with design. The team consists of long-tenured employees who care deeply about both the quality of the Pinterest experience and taking full advantage of the new capabilities that have emerged in the ML space over the last two years. ATG more broadly has spent the past decade+ bringing various ML technologies into the Pinterest ecosystem, and values publishing our work, building long-term infrastructure, and maintaining a collaborative and remote-friendly culture (though we do expect everyone to join company onsites a few times a year).</p>
]]></description><pubDate>Mon, 03 Jun 2024 20:47:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=40567368</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=40567368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40567368</guid></item><item><title><![CDATA[New comment by dkislyuk in "Vision Transformers Are Overrated"]]></title><description><![CDATA[
<p>Yes, exactly. ViTs need O(100M)-O(1B) images to overcome the lack of spatial priors. In that regime and beyond, they begin to generalize better than ConvNets.<p>Unfortunately, ImageNet hasn't been a useful benchmark for a while now, since pre-training is so important for production visual foundation models.</p>
]]></description><pubDate>Tue, 02 Apr 2024 02:01:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=39901604</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=39901604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39901604</guid></item><item><title><![CDATA[New comment by dkislyuk in "Apollo astronaut Frank Borman has died"]]></title><description><![CDATA[
<p>Rocket Men by Robert Kurson tells the Apollo 8 story in a captivating manner. Some of the passages are quite dramatic, but that's justified given the litany of firsts accomplished by the mission.</p>
]]></description><pubDate>Fri, 10 Nov 2023 00:02:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=38213131</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=38213131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38213131</guid></item><item><title><![CDATA[New comment by dkislyuk in "Nvidia and Foxconn to build 'AI factories'"]]></title><description><![CDATA[
<p>In the current world, deep learning with homogeneous computation graphs, tuned with backprop, has won the Hardware Lottery [1]. This is unfortunate for research outside of that area, but just looking at the momentum of development, it seems like a sure bet to keep investing in GPU-based training and inference for the next decade. There's just too much lock-in already to this paradigm.<p>If a new algorithm with a novel approach appears (analog compute, heterogeneous computation graphs from genetic algorithms, quantum, much more...), there will be a whole generation of R&D + tool + framework building, which gives the major players enough time to adapt.<p>[1] <a href="https://arxiv.org/abs/2009.06489" rel="nofollow noreferrer">https://arxiv.org/abs/2009.06489</a></p>
]]></description><pubDate>Thu, 19 Oct 2023 16:27:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=37944994</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=37944994</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37944994</guid></item><item><title><![CDATA[New comment by dkislyuk in "Nvidia on the Mountaintop"]]></title><description><![CDATA[
<p>Moravec's paradox is the usual counterargument given to this line of reasoning. We've had far less progress in embodied robotics, where a robot has to interact with the real world in any kind of generalized, tactile way, than in visual, audio, and language processing tasks. The history of AI is littered with predictions that [a reasoning or computation AI breakthrough] will lead to a humanoid robot, and the predictions always end up in the same place: real-world data collection and integration is harder than we thought.<p>Maybe this time it's different, and maybe it's not, but that's why most recent robotics predictions fail to convince the ML industry broadly.</p>
]]></description><pubDate>Mon, 28 Aug 2023 18:46:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=37298704</link><dc:creator>dkislyuk</dc:creator><comments>https://news.ycombinator.com/item?id=37298704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37298704</guid></item></channel></rss>