<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: merizian</title><link>https://news.ycombinator.com/user?id=merizian</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 00:07:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=merizian" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by merizian in "Lawmakers want to ban VPNs"]]></title><description><![CDATA[
<p>I disagree that legislation can't help. Fundamentally, there's an education disconnect and unnecessary friction in setting up parental controls. Governments can better educate parents about the risks and give them better tools to filter/monitor the content their children watch (e.g. at the device level). Being a parent is hard, and it's possible to make this part easier, imo.<p>E.g. consider child-resistant packaging and labeling laws for medication, which dramatically reduced child mortality from accidental ingestion.</p>
]]></description><pubDate>Sat, 15 Nov 2025 11:40:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45936745</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=45936745</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45936745</guid></item><item><title><![CDATA[New comment by merizian in "I don't think AGI is right around the corner"]]></title><description><![CDATA[
<p>> The reason why I in particular am so interested in continual learning has pretty much zero to do with humans. Sensors and mechanical systems change their properties over time through wear and tear.<p>To be clear, this isn’t what Dwarkesh was pointing at, and I think you are using the term “continual learning” differently to him. And he is primarily interested in it <i>because</i> humans do it.<p>The article introduces a story about how humans learn, and calls it continual learning:<p>> How do you teach a kid to play a saxophone? You have her try to blow into one, listen to how it sounds, and adjust. Now imagine teaching saxophone this way instead: A student takes one attempt. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student … This just wouldn’t work … Yes, there’s RL fine tuning. But it’s just not a deliberate, adaptive process the way human learning is.<p>The point I’m making is just that this is bad form: “AIs can’t do X, but humans can. Humans do task X because they have Y, but AIs don’t have Y, so AIs will find X hard.” Consider I replace X with “common sense reasoning” and Y with “embodied experience”. That would have seemed reasonable in 2020, but ultimately would have been a bad bet.<p>I don’t disagree with anything else in your response. I also buy into bitter lesson (and generally: easier to measure => easier to optimize). I think it’s just different uses of the same terms. And I don’t necessarily think what you’re referring to as continual learning won’t work.</p>
]]></description><pubDate>Mon, 07 Jul 2025 16:56:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44492286</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=44492286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44492286</guid></item><item><title><![CDATA[New comment by merizian in "I don't think AGI is right around the corner"]]></title><description><![CDATA[
<p>The problem with the argument is that it assumes future AIs will solve problems the way humans do; in this case, that continual learning is a big missing component.<p>In practice, continual learning has not been an important driver of progress in deep learning so far. Instead, large diverse datasets and scale have worked best. A good argument for continual learning being necessary needs to directly address why the massive cross-task learning paradigm will stop working, and ideally make concrete bets on which skills will be hard for AIs to achieve. Anthropomorphisms generally lack predictive power.<p>I think the real crux may be how much acceleration you can achieve once very competent programming AIs are spinning the RL flywheel. The author mentions uncertainty about this, which is fair, and I share it. But it leaves the rest of the piece feeling too overconfident.</p>
]]></description><pubDate>Sun, 06 Jul 2025 23:02:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44484927</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=44484927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44484927</guid></item><item><title><![CDATA[New comment by merizian in "AI Coding assistants provide little value because a programmer's job is to think"]]></title><description><![CDATA[
<p>I prefer a more nuanced take. If I can’t reliably delegate a task, then it’s usually not worth delegating: the time to review the code needs to be less than the time it would take to write it myself. This is true for people and for AI (rough sketch of the arithmetic below).<p>And there are now many tasks I can confidently delegate to AI, and that set of tasks is growing.<p>So I agree with the author for most of the programming tasks I can think of, but disagree for some.</p>
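<p>A back-of-envelope version of that rule (all numbers are purely illustrative guesses, not measurements):<pre><code>
# Delegate only when the expected cost of delegating (reviewing the
# result, plus redoing it yourself when it's unusable) beats just
# writing it yourself. All inputs here are hypothetical.
def worth_delegating(write_mins: float, review_mins: float, p_unusable: float) -> bool:
    expected_delegation_cost = review_mins + p_unusable * write_mins
    return expected_delegation_cost < write_mins

print(worth_delegating(write_mins=60, review_mins=10, p_unusable=0.2))  # True: 22 < 60
print(worth_delegating(write_mins=15, review_mins=10, p_unusable=0.5))  # False: 17.5 > 15
</code></pre></p>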
]]></description><pubDate>Sun, 27 Apr 2025 22:57:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=43815834</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=43815834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43815834</guid></item><item><title><![CDATA[New comment by merizian in "GPT-5 is behind schedule"]]></title><description><![CDATA[
<p>Because of muP [0] and scaling laws, you can test ideas empirically on smaller models, with some confidence that the results will transfer to the larger model.<p>[0] <a href="https://arxiv.org/abs/2203.03466" rel="nofollow">https://arxiv.org/abs/2203.03466</a></p>
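<p>A simplified sketch of the muP idea in PyTorch (this condenses the paper's scaling rules and is not the official mup package; base_width and the exact per-layer rules here are illustrative):<pre><code>
# muP-style width scaling (simplified): init hidden weights with variance
# 1/fan_in, and damp the readout and the hidden/readout Adam learning
# rates as width grows, so hyperparameters tuned at base_width transfer.
import torch
import torch.nn as nn

def make_mlp(width: int, base_width: int = 256, base_lr: float = 1e-3):
    mult = width / base_width
    net = nn.Sequential(
        nn.Linear(784, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 10),
    )
    for layer in net:
        if isinstance(layer, nn.Linear):
            nn.init.normal_(layer.weight, std=layer.in_features ** -0.5)
            nn.init.zeros_(layer.bias)
    with torch.no_grad():
        net[4].weight.mul_(1.0 / mult)  # shrink the readout as width grows
    opt = torch.optim.Adam([
        {"params": net[0].parameters(), "lr": base_lr},         # input layer
        {"params": net[2].parameters(), "lr": base_lr / mult},  # hidden layer
        {"params": net[4].parameters(), "lr": base_lr / mult},  # readout
    ])
    return net, opt

# Sweep base_lr on the narrow model, then reuse the winner at scale.
small_net, small_opt = make_mlp(width=256)
large_net, large_opt = make_mlp(width=4096)
</code></pre></p>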
]]></description><pubDate>Mon, 23 Dec 2024 06:10:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=42492221</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=42492221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42492221</guid></item><item><title><![CDATA[New comment by merizian in "We built a self-healing system to survive a concurrency bug at Netflix"]]></title><description><![CDATA[
<p>This reminds me of LLM pretraining, where there are so many points at which the program can fail that you need clever solutions to keep uptime high. And you can't just fix the bugs: GPUs often simply fail. (In graphics, a pixel flipping to the wrong color for one frame is fine; in deep learning, the same kind of error can cause numerical instability, which is why ECC catches it.) You also typically have a fixed-size cluster whose utilization you want to maximize.<p>So improving uptime involves holding out a set of spare GPUs to swap in for failed ones while they reboot. The whole run can also just randomly deadlock, which you might address by watching the logs and restarting after a certain period of inactivity (sketch below). And you have to be clever about how you save/load checkpoints, since that can become a huge bottleneck.<p>After many layers of self healing, we managed to take a vacation for a few days without any calls :)</p>
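<p>A minimal sketch of that log-inactivity watchdog (the paths, threshold, and launch command are hypothetical placeholders, not our actual setup):<pre><code>
# Relaunch training when the process dies, or when the log goes quiet
# long enough to suggest a deadlock. All names here are illustrative.
import os
import subprocess
import time

LOG_PATH = "train.log"                    # hypothetical log the trainer appends to
STALE_SECONDS = 15 * 60                   # assumed deadlock threshold: 15 minutes
CMD = ["python", "train.py", "--resume"]  # hypothetical resume-from-checkpoint entrypoint

def launch() -> subprocess.Popen:
    log = open(LOG_PATH, "a")
    return subprocess.Popen(CMD, stdout=log, stderr=subprocess.STDOUT)

proc = launch()
while True:
    time.sleep(60)
    if proc.poll() is not None:           # crashed: restart from the last checkpoint
        proc = launch()
        continue
    if time.time() - os.path.getmtime(LOG_PATH) > STALE_SECONDS:
        proc.kill()                       # likely deadlocked: kill and relaunch
        proc.wait()
        proc = launch()
</code></pre></p>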
]]></description><pubDate>Wed, 13 Nov 2024 11:52:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=42125246</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=42125246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42125246</guid></item><item><title><![CDATA[New comment by merizian in "Ask HN: What is the most expensive off-the-shelf software you have seen?"]]></title><description><![CDATA[
<p>You can buy a commercial license for OpenPose for $25K/year <a href="https://cmu.flintbox.com/technologies/b820c21d-8443-4aa2-a49f-8919d93a8740" rel="nofollow">https://cmu.flintbox.com/technologies/b820c21d-8443-4aa2-a49...</a></p>
]]></description><pubDate>Thu, 12 Sep 2024 09:31:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=41519076</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=41519076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41519076</guid></item><item><title><![CDATA[New comment by merizian in "A Real Life Off-by-One Error"]]></title><description><![CDATA[
<p>Even if the out-of-place hold had been used, would you then conclude it was causal? I still wouldn't rule out coincidence; many discoveries come from investigating spurious patterns.<p>The author also rules out psychology, but I wouldn't, especially since there were multiple confirmed errors in the route preparation, which I'd expect could reduce one's trust in the fairness of the competition. In the moment, I might start to wonder, "If one hold was out of place, why not more? Is anyone even checking this?", even if that's untrue or unlikely.</p>
]]></description><pubDate>Thu, 05 Sep 2024 01:31:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=41452629</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=41452629</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41452629</guid></item><item><title><![CDATA[New comment by merizian in "Multiple Displays on a Mac Sucks"]]></title><description><![CDATA[
<p>Another option that works for me: <a href="https://cordlessdog.com/stay/" rel="nofollow">https://cordlessdog.com/stay/</a></p>
]]></description><pubDate>Fri, 26 Apr 2024 09:02:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=40167265</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=40167265</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40167265</guid></item><item><title><![CDATA[New comment by merizian in "I disagree with Geoff Hinton regarding "glorified autocomplete""]]></title><description><![CDATA[
<p>The fallacy in this argument is assuming that computers need to perform tasks the same way humans do in order to match or exceed human performance on them. While better "system 2" abilities may improve performance, it's plausible that scaled-up next-token prediction, along with a bit of scaffolding and finetuning, could match human performance across the same diversity of tasks while doing them a completely different way.<p>If I had to critique Hinton's claims, I would say his use of the word "understand" can be vague and smuggle in assumptions, because it comes from an ontology built for reasoning about human reasoning, not the new, alien form of reasoning that language models embody.</p>
]]></description><pubDate>Sat, 18 Nov 2023 17:32:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=38322023</link><dc:creator>merizian</dc:creator><comments>https://news.ycombinator.com/item?id=38322023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38322023</guid></item></channel></rss>