<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: p1esk</title><link>https://news.ycombinator.com/user?id=p1esk</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 02:23:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=p1esk" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by p1esk in "Why doesn't HN have a mobile app?"]]></title><description><![CDATA[
<p>What’s wrong with reading HN in a browser? What problem are you trying to solve?</p>
]]></description><pubDate>Sun, 05 Apr 2026 15:31:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47650453</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47650453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47650453</guid></item><item><title><![CDATA[New comment by p1esk in "Finnish sauna heat exposure induces stronger immune cell than cytokine responses"]]></title><description><![CDATA[
<p>Just to clarify - it’s a temporary effect that lasts for 3-6 months.</p>
]]></description><pubDate>Sun, 05 Apr 2026 15:02:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47650151</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47650151</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47650151</guid></item><item><title><![CDATA[New comment by p1esk in "Introduction to Computer Music [pdf]"]]></title><description><![CDATA[
<p>Oh ok, makes sense then</p>
]]></description><pubDate>Sun, 05 Apr 2026 03:10:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47645791</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47645791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47645791</guid></item><item><title><![CDATA[New comment by p1esk in "Introduction to Computer Music (2009) [pdf]"]]></title><description><![CDATA[
<p>Wow, this book was published in 2025, and it has zero mention of AI-generated music. Not saying it's a bad thing (judging from the table of contents, it covers a lot of important fundamentals), but ignoring the elephant in the room is... weird.</p>
]]></description><pubDate>Sun, 05 Apr 2026 03:03:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47645766</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47645766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47645766</guid></item><item><title><![CDATA[New comment by p1esk in "Embarrassingly simple self-distillation improves code generation"]]></title><description><![CDATA[
<p>It’s so ironic that Apple still publishes AI research and OpenAI does not.</p>
]]></description><pubDate>Sat, 04 Apr 2026 15:32:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47639913</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47639913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47639913</guid></item><item><title><![CDATA[New comment by p1esk in "Embarrassingly simple self-distillation improves code generation"]]></title><description><![CDATA[
<p>LLMs are orders of magnitude simpler than brains, and we literally designed them from scratch. Also, we have full control over their operation and we can trace every signal.<p>Are you surprised we understand them better than brains?</p>
]]></description><pubDate>Sat, 04 Apr 2026 15:21:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47639798</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47639798</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47639798</guid></item><item><title><![CDATA[New comment by p1esk in "The OpenAI graveyard: All the deals and products that haven't happened"]]></title><description><![CDATA[
<p>I’ve been a paid subscriber for all three players since day 1. CC (Opus) has been a clear winner for agentic coding starting about 6 months ago. GPT5.4 reduced the gap somewhat, but it’s still there.</p>
]]></description><pubDate>Wed, 01 Apr 2026 19:02:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47605064</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47605064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47605064</guid></item><item><title><![CDATA[New comment by p1esk in "Hamilton-Jacobi-Bellman Equation: Reinforcement Learning and Diffusion Models"]]></title><description><![CDATA[
<p>Yes, this is a problem - the most challenging samples might not even be present in your training data, which means your model will not perform well if real-world data contains lots of challenging samples.<p>This can be partially solved if we make some assumptions about your labeller:<p>1. They still picked enough challenging samples.<p>2. Their preferences are still based on features you care about.<p>3. They labelled the challenging samples correctly.<p>Probably some other assumptions should hold too, for the distribution of labels, etc. But what we can do in this situation is first try to model the labeller's preferences by training a binary classifier: how likely are they to choose a given sample for labelling from the real-world distribution? Once that classifier is trained, we can use its confidence to assign a sample weight when preparing our training dataset (less likely samples get more weight). This forces our main classifier to pay more attention to the challenging samples during training (a rough sketch is at the end of this comment).<p>This could help somewhat if all the assumptions hold, but in practice I would not expect much improvement, and the solution above can easily make things worse - this problem really needs to be solved by better labelling.<p>How did you solve it?</p>
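<p>Here's that rough sketch of the reweighting idea, as a toy Python example. The synthetic data and every name in it are made up for illustration - it's a sketch of the idea, not a reference implementation:<p><pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins (all hypothetical): features of the samples the
# labeller actually labelled, their labels, and a random draw
# from the unlabelled real-world pool.
X_labelled = rng.normal(size=(500, 10))
y_labels = (X_labelled[:, 0] > 0).astype(int)
X_pool = rng.normal(scale=1.5, size=(2000, 10))

# Step 1: model the labeller's selection preference with a
# binary classifier: picked-for-labelling (1) vs random pool (0).
X_sel = np.vstack([X_labelled, X_pool])
y_sel = np.concatenate([np.ones(len(X_labelled)), np.zeros(len(X_pool))])
selector = LogisticRegression(max_iter=1000).fit(X_sel, y_sel)

# Step 2: weight each labelled sample inversely to how likely
# the labeller was to pick it, so rarely-picked (hard) samples
# count more. Clip to keep the weights from exploding.
p_pick = selector.predict_proba(X_labelled)[:, 1]
weights = np.clip(1.0 / np.maximum(p_pick, 1e-3), 1.0, 20.0)

# Step 3: train the main classifier with those sample weights.
main = LogisticRegression(max_iter=1000)
main.fit(X_labelled, y_labels, sample_weight=weights)</code></pre><p>If the selection model is well calibrated, this roughly approximates importance weighting toward the real-world distribution; if it isn't, the clipping at least bounds the damage.</p>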
]]></description><pubDate>Tue, 31 Mar 2026 17:58:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47591153</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47591153</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47591153</guid></item><item><title><![CDATA[New comment by p1esk in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>I don’t see why it’s not ok to do that to an AI model. Or are you asking why they don’t want you to do it?</p>
]]></description><pubDate>Tue, 31 Mar 2026 14:38:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47588024</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47588024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47588024</guid></item><item><title><![CDATA[New comment by p1esk in "Hamilton-Jacobi-Bellman Equation: Reinforcement Learning and Diffusion Models"]]></title><description><![CDATA[
<p>Maybe I don’t understand this data labeling issue - are you talking about an imbalanced classification dataset? Are hard classes under-represented, or are their labels missing completely?</p>
]]></description><pubDate>Mon, 30 Mar 2026 22:14:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580381</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47580381</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580381</guid></item><item><title><![CDATA[New comment by p1esk in "Hamilton-Jacobi-Bellman Equation: Reinforcement Learning and Diffusion Models"]]></title><description><![CDATA[
<p>Literally none of the examples you provided requires much in the way of math fundamentals - just basic ML engineering knowledge. Are you saying that understanding things like numerical overflow or exploding gradients requires a sophisticated math background?</p>
]]></description><pubDate>Mon, 30 Mar 2026 18:50:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578184</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47578184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578184</guid></item><item><title><![CDATA[New comment by p1esk in "72% of the dollar's purchasing power was destroyed in just four episodes"]]></title><description><![CDATA[
<p>How about other currencies?</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:42:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575726</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47575726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575726</guid></item><item><title><![CDATA[New comment by p1esk in "Hamilton-Jacobi-Bellman Equation: Reinforcement Learning and Diffusion Models"]]></title><description><![CDATA[
<p>Debugging ML models (a large part of my job) requires very little math. Engineering experience and mindset are a lot more relevant for debugging. Complicated math is typically needed when you want to invent new loss functions, or new methods for regularization, normalization, or model compression.</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:29:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575569</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47575569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575569</guid></item><item><title><![CDATA[New comment by p1esk in "1929: Inside the Greatest Crash in Wall Street History"]]></title><description><![CDATA[
<p>I’m assuming the US dollar will have more value abroad than here.</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:20:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575453</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47575453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575453</guid></item><item><title><![CDATA[New comment by p1esk in "1929: Inside the Greatest Crash in Wall Street History"]]></title><description><![CDATA[
<p><i>Sorkin is more interested in how the crisis felt than why it happened.</i><p>Thank you - this is exactly what I want to learn more about. Placed an order.</p>
]]></description><pubDate>Sun, 29 Mar 2026 07:05:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47560997</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47560997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47560997</guid></item><item><title><![CDATA[New comment by p1esk in "1929: Inside the Greatest Crash in Wall Street History"]]></title><description><![CDATA[
<p>In this scenario I would take that 200k (or however much there would be) and move to some low-COL country.</p>
]]></description><pubDate>Sun, 29 Mar 2026 06:55:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47560957</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47560957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47560957</guid></item><item><title><![CDATA[New comment by p1esk in "Tinybox – A powerful computer for deep learning"]]></title><description><![CDATA[
<p>They wouldn’t build anything - they would order from Dell or Supermicro.</p>
]]></description><pubDate>Sun, 22 Mar 2026 16:33:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47479239</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47479239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47479239</guid></item><item><title><![CDATA[New comment by p1esk in "Tinybox – A powerful computer for deep learning"]]></title><description><![CDATA[
<p>I’m not a gamer, but I know that joke.</p>
]]></description><pubDate>Sun, 22 Mar 2026 16:24:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47479120</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47479120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47479120</guid></item><item><title><![CDATA[New comment by p1esk in "Learning athletic humanoid tennis skills from imperfect human motion data"]]></title><description><![CDATA[
<p>I think the coffee test for robots will be similar to the Turing Test for LLMs, which was quietly achieved and forgotten somewhere between gpt-3.5 and gpt-4. The real tests are tasks like cooking or plumbing - I expect those to arrive in 2-3 years.</p>
]]></description><pubDate>Mon, 16 Mar 2026 02:52:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47394677</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47394677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47394677</guid></item><item><title><![CDATA[New comment by p1esk in "Learning athletic humanoid tennis skills from imperfect human motion data"]]></title><description><![CDATA[
<p>I’ve been thinking about this too, and I share your optimism: <a href="https://news.ycombinator.com/item?id=47213310">https://news.ycombinator.com/item?id=47213310</a></p>
]]></description><pubDate>Mon, 16 Mar 2026 02:45:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47394627</link><dc:creator>p1esk</dc:creator><comments>https://news.ycombinator.com/item?id=47394627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47394627</guid></item></channel></rss>