<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sdenton4</title><link>https://news.ycombinator.com/user?id=sdenton4</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 09:39:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sdenton4" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sdenton4 in "I used AI. It worked. I hated it"]]></title><description><![CDATA[
<p>When you resolve bottlenecks, new bottlenecks become apparent. Right now, it's looking like assessment and evaluation are massive bottlenecks.</p>
]]></description><pubDate>Sun, 05 Apr 2026 05:57:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47646498</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47646498</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47646498</guid></item><item><title><![CDATA[New comment by sdenton4 in "I used AI. It worked. I hated it"]]></title><description><![CDATA[
<p>It's far more sane to review a complete PR than to verify every small change. They're like dicey new interns - do you want to look over their shoulders all day, or review their code after they've had time to do some meaningful quantum of work?</p>
]]></description><pubDate>Sun, 05 Apr 2026 05:55:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47646493</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47646493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47646493</guid></item><item><title><![CDATA[New comment by sdenton4 in "OpenClaw privilege escalation vulnerability"]]></title><description><![CDATA[
<p>That's funny. In my study it was 70%.  Nah, make that 85%.</p>
]]></description><pubDate>Sat, 04 Apr 2026 01:03:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47634446</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47634446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47634446</guid></item><item><title><![CDATA[New comment by sdenton4 in "Artemis II crew take “spectacular” image of Earth"]]></title><description><![CDATA[
<p>Or maybe press the timer and let it float...</p>
]]></description><pubDate>Fri, 03 Apr 2026 22:53:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47633381</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47633381</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47633381</guid></item><item><title><![CDATA[New comment by sdenton4 in "Qwen3.6-Plus: Towards real world agents"]]></title><description><![CDATA[
<p>Sure, it might try to subtly steer you towards fascism, but other than that, it's great.</p>
]]></description><pubDate>Fri, 03 Apr 2026 01:49:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47622415</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47622415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47622415</guid></item><item><title><![CDATA[New comment by sdenton4 in "Google releases Gemma 4 open models"]]></title><description><![CDATA[
<p>It indicates that there's a good chance that they have trained on the test set, making the eval scores useless. Even if you have given up on the dream of generalization entirely, you can't meaningfully compare models which have trained on test to those which have not.</p>
]]></description><pubDate>Thu, 02 Apr 2026 23:10:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47621390</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47621390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47621390</guid></item><item><title><![CDATA[New comment by sdenton4 in "Google releases Gemma 4 open models"]]></title><description><![CDATA[
<p>Doing great on public datasets and underperforming on private benchmarks is not a good look.</p>
]]></description><pubDate>Thu, 02 Apr 2026 19:28:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47619049</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47619049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47619049</guid></item><item><title><![CDATA[New comment by sdenton4 in "TinyLoRA – Learning to Reason in 13 Parameters"]]></title><description><![CDATA[
<p>It's the statistics equivalent of 'no one needs more than 640KB of RAM'.</p>
]]></description><pubDate>Wed, 01 Apr 2026 04:08:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47596695</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47596695</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47596695</guid></item><item><title><![CDATA[New comment by sdenton4 in "Goodbye to Sora"]]></title><description><![CDATA[
<p>And that energy costs money, both at the training/CGI stage and at the inference/consumption stage. It's not even an externality.<p>CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.</p>
]]></description><pubDate>Wed, 25 Mar 2026 04:43:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47513344</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47513344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47513344</guid></item><item><title><![CDATA[New comment by sdenton4 in "Wine 11 rewrites how Linux runs Windows games at kernel with massive speed gains"]]></title><description><![CDATA[
<p>Idk, kernel anti-cheat is a pretty clear sign to me that I should pick a different game to play anyway...</p>
]]></description><pubDate>Wed, 25 Mar 2026 04:36:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47513293</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47513293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47513293</guid></item><item><title><![CDATA[New comment by sdenton4 in "Goodbye to Sora"]]></title><description><![CDATA[
<p>Right idea, but the application is incorrect.<p>Model training is similar to the creation of the CGI for the movie. Both happen before anyone consumes the output, and represent the up-front cost for the producer.<p>Both a movie and a large language model can cost tens or hundreds of millions of dollars to produce.<p>In both cases, additional infrastructure is needed for efficient usage: movie theaters or streaming platforms for movies, and data centers with GPUs for LLMs. These are also up-front (capex) costs.<p>At consumption time, the movie requires some additional resources per viewing, whether it's a movie theater or streaming. Likewise, an LLM consumes some resources at inference time. These are opex. In both cases, the marginal cost of consumption/inference is quite low.</p>
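<p>To make the capex/opex split concrete, here's a toy amortization sketch (every number below is hypothetical, purely for illustration):<p><pre><code># Toy capex/opex amortization; all numbers are made up.
movie_capex = 200e6          # production + CGI, paid once
movie_opex_per_view = 0.05   # delivery cost per streaming view
views = 50e6

llm_capex = 100e6            # training run + GPU buildout, paid once
llm_opex_per_query = 0.002   # inference cost per query
queries = 10e9

# Marginal cost stays near the opex figure; the capex washes out
# as consumption volume grows.
print(movie_capex / views + movie_opex_per_view)  # cost per view
print(llm_capex / queries + llm_opex_per_query)   # cost per query
</code></pre>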
]]></description><pubDate>Wed, 25 Mar 2026 04:11:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47513131</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47513131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47513131</guid></item><item><title><![CDATA[New comment by sdenton4 in "The bridge to wealth is being pulled up with AI"]]></title><description><![CDATA[
<p>What are AI and robots other than excess labor, waiting to be allocated? Why does the wealth have to come from the meat bags?</p>
]]></description><pubDate>Tue, 24 Mar 2026 15:34:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47504202</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47504202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47504202</guid></item><item><title><![CDATA[New comment by sdenton4 in "Autoresearch on an old research idea"]]></title><description><![CDATA[
<p>Yah, I'm a bit skeptical - in my experience, humans tend to under-explore due to incorrect assumptions. Often this is due to forming a narrative to explain some result, and then over-attaching to it. Also, agents aren't actually good at reasoning yet.<p>Good Bayesian exploration is much, much better than grid search, and does indeed learn to avoid low-value regions of the parameter space. If we're talking about five-minute experiments (as in the blog post), Bayesian optimization should chew through the task no problem.</p>
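<p>As a concrete sketch of that Bayesian loop (illustrative only, using scikit-optimize's gp_minimize; train_and_eval is a hypothetical stand-in for the five-minute experiment):<p><pre><code># Minimal Bayesian hyperparameter search with scikit-optimize.
from skopt import gp_minimize
from skopt.space import Real, Integer

def train_and_eval(params):
    lr, layers = params
    # Stand-in for a short training run returning validation loss.
    return (lr - 0.01) ** 2 + 0.1 * layers

space = [Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
         Integer(1, 8, name="layers")]

# The Gaussian-process surrogate concentrates later trials in
# promising regions instead of spending budget on a fixed grid.
result = gp_minimize(train_and_eval, space, n_calls=30, random_state=0)
print(result.x, result.fun)
</code></pre>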
]]></description><pubDate>Mon, 23 Mar 2026 20:28:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47494663</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47494663</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47494663</guid></item><item><title><![CDATA[New comment by sdenton4 in "Autoresearch on an old research idea"]]></title><description><![CDATA[
<p>For raw hyperparameter search, though, I would expect a proper Bayesian framework to be much better. E.g., Vizier.</p>
]]></description><pubDate>Mon, 23 Mar 2026 19:23:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47493972</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47493972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47493972</guid></item><item><title><![CDATA[New comment by sdenton4 in "Autoresearch on an old research idea"]]></title><description><![CDATA[
<p>The gist of these things is you point them at an eval metric and say 'make it go better.' So you can point it at anything you can measure. The example in the blog post here is bounding boxes on wood cut images.</p>
]]></description><pubDate>Mon, 23 Mar 2026 19:23:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47493958</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47493958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47493958</guid></item><item><title><![CDATA[New comment by sdenton4 in "The “small web” is bigger than you might think"]]></title><description><![CDATA[
<p>On the one hand, a search engine is not heroin... It's a pretty broken analogy.<p>On the other hand, we could probably convince Cory Doctorow to write a piece about how fentanyl is really about the enshittification of opiates.</p>
]]></description><pubDate>Mon, 16 Mar 2026 19:04:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47403293</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47403293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47403293</guid></item><item><title><![CDATA[New comment by sdenton4 in "$96 3D-printed rocket that recalculates its mid-air trajectory using a $5 sensor"]]></title><description><![CDATA[
<p>However, D_A is moving, while D_B can be stationary.</p>
]]></description><pubDate>Sun, 15 Mar 2026 15:11:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47388159</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47388159</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47388159</guid></item><item><title><![CDATA[New comment by sdenton4 in "White House plan to break up iconic U.S. climate lab moves forward"]]></title><description><![CDATA[
<p>Harris was the vice president, and was therefore the closest thing to a small-d democratic choice amongst the available options. Otherwise... Why Harris and not Newsom?<p>The better choice would have been Biden stepping out earlier and having a real primary, of course.</p>
]]></description><pubDate>Thu, 12 Mar 2026 21:13:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47357194</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47357194</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47357194</guid></item><item><title><![CDATA[New comment by sdenton4 in "Don't post generated/AI-edited comments. HN is for conversation between humans."]]></title><description><![CDATA[
<p>AI doesn't just hide your voice -- it improves it!</p>
]]></description><pubDate>Wed, 11 Mar 2026 20:02:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47340565</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47340565</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47340565</guid></item><item><title><![CDATA[New comment by sdenton4 in "The Gervais Principle, or the Office According to “The Office” (2009)"]]></title><description><![CDATA[
<p>To hazard a guess: the better your coding model is, the less you have to assess your fundamentals, and you thereby suffer arrested development.</p>
]]></description><pubDate>Tue, 10 Mar 2026 23:01:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47329869</link><dc:creator>sdenton4</dc:creator><comments>https://news.ycombinator.com/item?id=47329869</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47329869</guid></item></channel></rss>