<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: _0ffh</title><link>https://news.ycombinator.com/user?id=_0ffh</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 21:23:48 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=_0ffh" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by _0ffh in "The hypocrisy of cyberlibertarianism"]]></title><description><![CDATA[
<p>What problem would that be that has not already been addressed further up the chain?<p>And it <i>is</i> an ad hominem - it's nothing more than an allegation impugning the character of libertarians in order to dismiss their arguments. The allegation alone neither proves anything about the actual character of these people, nor what their view on reality and empathy actually is, nor whether that view is actually wrong, nor who is doing the actual projecting here.</p>
]]></description><pubDate>Mon, 11 May 2026 17:48:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=48098203</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=48098203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48098203</guid></item><item><title><![CDATA[New comment by _0ffh in "The hypocrisy of cyberlibertarianism"]]></title><description><![CDATA[
<p>I prefer to judge such advice by the available facts, not by hearsay about the moral character of the advisor - especially not hearsay spread by his enemies. Your ad hominem has no bearing on the argument.<p>So, how is trusting politicians and bureaucrats to be selfless and focused on their duty to society working out for you?</p>
]]></description><pubDate>Mon, 11 May 2026 09:06:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=48092683</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=48092683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48092683</guid></item><item><title><![CDATA[New comment by _0ffh in "The hypocrisy of cyberlibertarianism"]]></title><description><![CDATA[
<p>Buying out maybe, but that only exacerbates the problem for the company in the long run. Regulatory capture is what actually <i>works</i>, but not within the libertarian framework, because regulation again is not a market mechanism but government intervention into the market - exactly what libertarians say we should have less of in the first place.<p>Mind you, not <i>different</i>, or "<i>better</i>" intervention, but <i>less</i>, or even none at all. One could argue that the point of libertarianism is that you can't trust the government to do a good job because it is based on force, not on voluntary market interactions, and hence lacks the proper incentives. It's just a bunch of guys on a spending spree with other people's money, and their incentive is to make as much of it as humanly possible land in their own pockets.</p>
]]></description><pubDate>Mon, 11 May 2026 06:52:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=48091806</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=48091806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48091806</guid></item><item><title><![CDATA[New comment by _0ffh in "We see something that works, and then we understand it"]]></title><description><![CDATA[
<p>That's infuriating to me, and I wasn't even the victim!<p>I'm so lucky such a thing never happened to me. The closest it came was a math teacher in middle school asking me about a solution of mine on a test, one he had marked "mysterious" (I had to come up with my own solution path during the test because I hadn't memorized the canonical one we were taught). He asked me to explain my reasoning, and when he was satisfied that it was sound he gave me full marks for the question.</p>
]]></description><pubDate>Sun, 10 May 2026 19:15:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=48086903</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=48086903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48086903</guid></item><item><title><![CDATA[New comment by _0ffh in "The hypocrisy of cyberlibertarianism"]]></title><description><![CDATA[
<p>The corporation (which runs internally as a planned economy) will get more and more inefficient the larger it gets, because that is what planning an economy does. Which in turn means it will lose market share and be forced to slim down until it is competitive again.</p>
]]></description><pubDate>Sun, 10 May 2026 16:52:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=48085541</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=48085541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48085541</guid></item><item><title><![CDATA[New comment by _0ffh in "DeepSeek V4 – almost on the frontier"]]></title><description><![CDATA[
<p>Personally, I'm not bothered very much by LLM confabulation, as long as it's the result of missing context. In most practical tasks, we either give context to the model, or tell it to find it itself using the internet. What I <i>am</i> concerned with is confabulation that contradicts available in-context information, but that doesn't seem to be what is measured here.</p>
]]></description><pubDate>Sat, 02 May 2026 17:30:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47988494</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47988494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47988494</guid></item><item><title><![CDATA[New comment by _0ffh in "Becoming a father shrinks your cerebrum (2022)"]]></title><description><![CDATA[
<p>* In favour of our genes.</p>
]]></description><pubDate>Sat, 02 May 2026 14:40:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47986792</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47986792</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47986792</guid></item><item><title><![CDATA[New comment by _0ffh in "Deep under Antarctic ice, a long-predicted cosmic whisper breaks through"]]></title><description><![CDATA[
<p>Wouldn't the higher density of liquid water be an advantage?</p>
]]></description><pubDate>Tue, 28 Apr 2026 20:54:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47940610</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47940610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47940610</guid></item><item><title><![CDATA[New comment by _0ffh in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>Well, the famous Turing test was evidently insufficient. All that happened is that the test is dead and nobody ever mentions it anymore. I'm not sure that any other test would fare any better once solved.</p>
]]></description><pubDate>Sun, 26 Apr 2026 09:46:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47908892</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47908892</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47908892</guid></item><item><title><![CDATA[New comment by _0ffh in "Our eighth generation TPUs: two chips for the agentic era"]]></title><description><![CDATA[
<p>Still, attributing that progress to "years of research at Google" alone is simplifying the facts to the point of being just plain wrong. That kind of research was always very much in the open and cooperative, with deep levels of standing-on-shoulders.<p>Attention, e.g., was developed by Dzmitry Bahdanau et al. (the "et al." being Kyunghyun Cho and Yoshua Bengio) in 2014 while interning at the University of Montreal.<p>The insight of the paper you point to was that with attention you could dispense with the RNN that attention was initially developed to support.</p>
]]></description><pubDate>Wed, 22 Apr 2026 15:41:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47865198</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47865198</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47865198</guid></item><item><title><![CDATA[New comment by _0ffh in "What if AI doesn't need more RAM but better math?"]]></title><description><![CDATA[
<p>The OpenReview link is not working; it was apparently split.<p><a href="https://openreview.net/forum?id=tO3ASKZlok" rel="nofollow">https://openreview.net/forum?id=tO3ASKZlok</a></p>
]]></description><pubDate>Sun, 29 Mar 2026 14:09:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47563288</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47563288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47563288</guid></item><item><title><![CDATA[New comment by _0ffh in "How much precision can you squeeze out of a table?"]]></title><description><![CDATA[
<p>Also, nobody in their right mind uses lookup tables where the table value is simply the float approximation of the true f(x) - you choose the support values to minimize an error (e.g. the MSE) over a dense sampling of your interpolated values over x (or, in the limit, the integral of the chosen error function between the true curve and the interpolation of your supports). If you want to e.g. approximate a convex function using linear interpolation, all the adjusted table values would be <= the true f(x).</p>
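The construction above can be sketched numerically. This is a hypothetical illustration (the function exp, the node count, and the sampling density are arbitrary choices, not from the comment): the interpolated curve is a linear function of the table values, so the MSE-optimal table is a plain linear least-squares solve over the "hat" basis functions of the support points.

```python
import numpy as np

f = np.exp                            # convex example function on [0, 1]
xs = np.linspace(0.0, 1.0, 6)         # support points of the table
xd = np.linspace(0.0, 1.0, 1001)      # dense sampling for the error measure

# Column i of B is the piecewise-linear "hat" basis of node i, sampled densely:
# interpolating a unit vector over the nodes yields exactly that hat function.
n = len(xs)
B = np.column_stack([np.interp(xd, xs, np.eye(n)[i]) for i in range(n)])

naive = B @ f(xs)                     # table naively holds the true f(x_i)
opt_vals, *_ = np.linalg.lstsq(B, f(xd), rcond=None)
optimal = B @ opt_vals                # table holds MSE-optimal values

mse_naive = np.mean((naive - f(xd)) ** 2)
mse_opt = np.mean((optimal - f(xd)) ** 2)
```

For a convex f the naive interpolant lies entirely on or above the true curve, so the least-squares solve shifts the table values down and strictly reduces the MSE.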
]]></description><pubDate>Thu, 26 Mar 2026 23:53:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47537390</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47537390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47537390</guid></item><item><title><![CDATA[New comment by _0ffh in "The day I discovered type design"]]></title><description><![CDATA[
<p>I took a look and it's probably a color isoluminance effect.</p>
]]></description><pubDate>Fri, 20 Mar 2026 08:29:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47451960</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47451960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47451960</guid></item><item><title><![CDATA[New comment by _0ffh in "Why the global elite gave up on spelling and grammar"]]></title><description><![CDATA[
<p>Birmingham, AL</p>
]]></description><pubDate>Wed, 11 Mar 2026 20:11:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47340751</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47340751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47340751</guid></item><item><title><![CDATA[New comment by _0ffh in "Swiss e-voting pilot can't count 2,048 ballots after decryption failure"]]></title><description><![CDATA[
<p>I'm sure you could even let every voter verify that their vote has been registered correctly.<p>Edit: But as a comment somewhere else in the tree noted, "And if it could tell you that, then a third party could force you to reveal that you voted 'right' as agreed before." - I guess everything's trade-offs.</p>
]]></description><pubDate>Wed, 11 Mar 2026 19:46:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47340319</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47340319</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47340319</guid></item><item><title><![CDATA[New comment by _0ffh in "How important was the Battle of Hastings?"]]></title><description><![CDATA[
<p>I was wondering a while ago what the outcome would have been if the order had been reversed. Harold Godwinson might have fought off William of Normandy, but then been too exhausted to stop Harald Hardrada, all with just a few weeks' difference in timing.</p>
]]></description><pubDate>Sun, 08 Mar 2026 00:57:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47293194</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47293194</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47293194</guid></item><item><title><![CDATA[New comment by _0ffh in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>I'm on board with that framing of the process, and I see how my original formulation was too rough.<p>I was reacting to "We're moving to a world where the mechanical part of grinding the code is not worth much". I have the impression that in the past just mechanically grinding the code was less of a thing than it apparently is today. Guidance, sure, but not as much as seems to be common (often necessarily so) today. But I'm sure that varies with a lot of factors, not just the calendar year.</p>
]]></description><pubDate>Sat, 07 Mar 2026 18:41:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47290301</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47290301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47290301</guid></item><item><title><![CDATA[New comment by _0ffh in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>Might be a big-company thing then, but I'm not wholly convinced. There's a <i>big</i> gap between designing the outline of a big system and writing coding instructions that can be followed without having to make your own decisions. How much of that gap is filled at the "design" level versus the "coding" level is a spectrum.</p>
]]></description><pubDate>Sat, 07 Mar 2026 16:56:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47289320</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47289320</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47289320</guid></item><item><title><![CDATA[New comment by _0ffh in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>> This is hyperbolic<p>Maybe a bit, but unfortunately sometimes not by much. I recently had an LLM write a couple of transforms on a tree in Python. The node class just had "kind" and "children" defined, nothing else. The LLM added new attributes to use in the new node kinds (Python lets you just do "foo.bar=baz" to add one). Apparently it saw a lot of code doing that during training.<p>I corrected the code by hand and modified the Node class to raise an error when new attributes are added, with an emphatic source code comment to not add new attributes.<p>A couple of sessions later it did it again, even adding its own comment about circumventing the restriction! X-|<p>Anyway, I think I mostly agree with your assessment. I might be dating myself here, but I'm not even sure what happened that made "coding" grunt work. It used to be that every "coder" was an "architect" as well, and did their own legwork as needed. Maybe labor shortages changed that.</p>
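The exact Node class from the anecdote isn't shown; a minimal sketch of one way to get the described behavior in Python is <code>__slots__</code>, which makes any attempt to attach an ad-hoc attribute raise AttributeError (the class and field names here are assumptions for illustration).

```python
class Node:
    # __slots__ restricts instances to exactly these attributes;
    # "node.extra = 1"-style additions fail with AttributeError.
    __slots__ = ("kind", "children")

    def __init__(self, kind, children=None):
        self.kind = kind
        self.children = children if children is not None else []

n = Node("add", [Node("leaf"), Node("leaf")])
blocked = False
try:
    n.extra = 1  # the kind of sneaky attribute addition described above
except AttributeError:
    blocked = True
```

Unlike a custom <code>__setattr__</code> override, <code>__slots__</code> also blocks additions made from inside methods, so there is no restriction for generated code to circumvent short of editing the class itself.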
]]></description><pubDate>Sat, 07 Mar 2026 15:33:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47288534</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47288534</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47288534</guid></item><item><title><![CDATA[New comment by _0ffh in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>Huh, that explains a lot about the F500 and their buzzword slogans like "culture of excellence".<p>LLM code is still mostly absurdly bad, unless you tell it in painstaking detail what to do and what to avoid, and never ask it to tackle more than a single function or a very small class at a time.<p>Edit: I'll admit, though, that the detailed explanation is often still much less work than typing everything yourself. But it is a showstopper for autonomous "agentic coding".</p>
]]></description><pubDate>Sat, 07 Mar 2026 11:35:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47286699</link><dc:creator>_0ffh</dc:creator><comments>https://news.ycombinator.com/item?id=47286699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47286699</guid></item></channel></rss>