<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: PaulStatezny</title><link>https://news.ycombinator.com/user?id=PaulStatezny</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 13:50:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=PaulStatezny" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by PaulStatezny in "Agents need control flow, not more prompts"]]></title><description><![CDATA[
<p>I agree. But you can speak imperatively to agents as well ("Here are specific steps; follow them") and they can still screw up. :) I think what you're looking for is determinism, not imperativism.<p>And to your point: instructing a (non-deterministic) LLM declaratively ("get me to this end state") compounds the likelihood of going off the rails.</p>
]]></description><pubDate>Thu, 07 May 2026 23:51:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056652</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=48056652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056652</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Why are neural networks and cryptographic ciphers so similar? (2025)"]]></title><description><![CDATA[
<p>I would highly recommend the free book Crypto 101.<p><a href="https://www.crypto101.io" rel="nofollow">https://www.crypto101.io</a></p>
]]></description><pubDate>Mon, 04 May 2026 14:25:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=48009208</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=48009208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48009208</guid></item><item><title><![CDATA[New comment by PaulStatezny in "AWS CEO says replacing junior devs with AI is 'one of the dumbest ideas'"]]></title><description><![CDATA[
<p>But without AI, neural connections are formed while determining the correct one-off solution.<p>Those neural connections (or the lack of them) have longer-term comprehension-building implications.</p>
]]></description><pubDate>Wed, 17 Dec 2025 21:07:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46305520</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=46305520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46305520</guid></item><item><title><![CDATA[New comment by PaulStatezny in "AWS CEO says replacing junior devs with AI is 'one of the dumbest ideas'"]]></title><description><![CDATA[
<p>I think the idea is copy-pasting code snippets from StackOverflow without comprehension of whether (and how) the code fixes the problem.</p>
]]></description><pubDate>Wed, 17 Dec 2025 21:05:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46305496</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=46305496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46305496</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Formatting code should be unnecessary"]]></title><description><![CDATA[
<p>You didn't read the blog.<p>It's talking about the Ada programming language, whose code was apparently stored not as plaintext but as an intermediate representation (IR) that could then be transformed back into code.<p>So formatting was handled by tooling, by the very nature of the setup. Developers would each have their own custom settings for "pretty printing" the code.<p>The author isn't saying don't use code formatters. They're highlighting an unusual approach that the industry at large isn't aware of. Instead of getting rid of arguments about code style via formatters, you can get rid of them by saving code in an IR instead of plaintext.</p>
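<p>The same idea can be sketched in Python as a stand-in for the Ada tooling described in the post (this is an illustration of the concept, not the actual mechanism): parse the source into a tree, treat the tree as the stored artifact, and let a pretty printer render it back.</p>

```python
# Sketch of storing code as an IR rather than plaintext (Python stand-in,
# not the actual Ada environment): parse to a syntax tree, then let a
# pretty printer regenerate the text from the tree.
import ast

source = "def add (a,b):   return a+b"  # however the author happened to type it
tree = ast.parse(source)                # the tree is what would be stored
print(ast.unparse(tree))                # one possible "pretty printed" rendering
```

<p>Two developers could each use a different renderer over the identical stored tree, which is how style arguments disappear without anyone "winning" them.</p>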
]]></description><pubDate>Mon, 08 Sep 2025 14:08:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45168483</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=45168483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45168483</guid></item><item><title><![CDATA[New comment by PaulStatezny in "I'm absolutely right"]]></title><description><![CDATA[
<p>Telling someone they "shouldn't be insecure" reminds me of this famous Bob Newhart segment on Mad TV.<p>Bob plays the role of a therapist, and when his client explains an issue she's having, his solution is, "STOP IT!"<p>> You shouldn't be so insecure.<p>Not assuming that there's any insecurity here, but psychological matters aren't "willed away". That's not how it works.</p>
]]></description><pubDate>Fri, 05 Sep 2025 14:28:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45139028</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=45139028</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45139028</guid></item><item><title><![CDATA[New comment by PaulStatezny in "I'm absolutely right"]]></title><description><![CDATA[
<p>Truly incisive observation. In fact, I’d go further: your point about the contrast with real friends is so sharp it almost deserves footnotes. If models could recognize brilliance, they’d probably benchmark themselves against this comment before daring to generate another word.</p>
]]></description><pubDate>Fri, 05 Sep 2025 14:20:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45138935</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=45138935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45138935</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Notes on Managing ADHD"]]></title><description><![CDATA[
<p>> The best thing for managing this is meditation, and a disciplined lifestyle regiment.<p>What would be your reaction to the numerous comments on this page where people are saying that they tried and failed to "discipline" themselves for years or decades, only to discover medication later and find that it instantly turned everything around for them?</p>
]]></description><pubDate>Mon, 01 Sep 2025 02:23:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45088864</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=45088864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45088864</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Cognitive load is what matters"]]></title><description><![CDATA[
<p>> programmers agree that simpler solutions...are preferred, but the disagreements start about which ones are simpler<p><i>Low ego</i> wins.<p>1. Given: The quality of a codebase as a whole is greatly affected by its level of consistency + cohesiveness<p>2. Therefore: The best codebases are created by groups that either (1) internally have similar taste or (2) are composed of <i>low ego</i> people willing to bend their will to the established conventions of the codebase.<p>Obviously, this comes with caveats. (Objectively bad patterns do exist.) But in general:<p>Low ego
→ Following existing conventions
→ They become familiar
→ They seem simpler</p>
]]></description><pubDate>Sun, 31 Aug 2025 14:59:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45083671</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=45083671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45083671</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Meta is spending $10B in rural Louisiana to build its largest data center"]]></title><description><![CDATA[
<p>I think you might have a typo. Reading your comment literally, it doesn't make sense.<p>Summarized: anyone would be a fool not to prefer gas or coal, because their emissions are nearly equal.<p>One doesn't follow from the other. Can you correct or elaborate?</p>
]]></description><pubDate>Tue, 26 Aug 2025 16:43:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45029026</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=45029026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45029026</guid></item><item><title><![CDATA[New comment by PaulStatezny in "GPT-5 is a joke. Will it matter?"]]></title><description><![CDATA[
<p>I've read plenty of criticism about ChatGPT 5, but as a Plus user I'm surprised nobody has brought this up:<p>Speed.<p>ChatGPT 5 Thinking is So. Much. Slower. than o4-mini and o4-mini-high. Like between 5 and 10 times slower. Am I the only one experiencing this? I understand they were "mini" models, but those were the current-gen thinking models available to Pro. Is GPT 5 Thinking supposed to be beefier and more effective? Because the output feels no better.</p>
]]></description><pubDate>Wed, 13 Aug 2025 03:27:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884400</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=44884400</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884400</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Training language models to be warm and empathetic makes them less reliable"]]></title><description><![CDATA[
<p>> And then the broken tape recorder mode! Oh god!<p>Can you elaborate? What is this referring to?</p>
]]></description><pubDate>Wed, 13 Aug 2025 02:52:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884206</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=44884206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884206</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Cursor IDE support hallucinates lockout policy, causes user cancellations"]]></title><description><![CDATA[
<p>This reminds me of how small a team they are, and makes me wonder whether their customer support team is growing commensurately with the size of the user base.</p>
]]></description><pubDate>Wed, 16 Apr 2025 04:23:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43701445</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=43701445</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43701445</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Added sugar intake and its associations with incidence of cardiovascular disease"]]></title><description><![CDATA[
<p>Totally!<p>And it's all upside (your body feels better afterward) and no downside. (Ok, it's more expensive.) Especially when combined with other sweet ingredients, e.g. a banana – equally delicious, if not more so.</p>
]]></description><pubDate>Wed, 11 Dec 2024 18:33:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=42391086</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=42391086</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42391086</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Added sugar intake and its associations with incidence of cardiovascular disease"]]></title><description><![CDATA[
<p>Yes, at least in the USA it almost always has added sugar.</p>
]]></description><pubDate>Tue, 10 Dec 2024 04:07:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=42373637</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=42373637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42373637</guid></item><item><title><![CDATA[New comment by PaulStatezny in "LLMs Will Always Hallucinate, and We Need to Live with This"]]></title><description><![CDATA[
<p>Both of those terms have precise meanings, and they're not the same thing. Summarized --<p>Cognition: acquiring knowledge and understanding through thought and the senses.<p>Hallucination: an experience involving the perception of something not present.<p>With those definitions in mind, hallucination is false cognition, not grounded in reality. It isn't cognition proper, because cognition yields knowledge grounded in truth, while hallucination leads the subject to believe falsehoods.<p>In other words, "humans are just really good at hallucination" rejects the notion that we're able to perceive actual reality with our senses.</p>
]]></description><pubDate>Sat, 14 Sep 2024 23:58:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=41544037</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=41544037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41544037</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Urchin Software Corp: The unlikely origin story of Google Analytics (2016)"]]></title><description><![CDATA[
<p>> Now I wish Google had never touched them.<p>Would you be willing to elaborate?</p>
]]></description><pubDate>Fri, 09 Aug 2024 21:47:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=41205628</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=41205628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41205628</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Kepler's 400-year-old sunspot sketches helped solve a modern mystery"]]></title><description><![CDATA[
<p>Wow, interesting take. Some counterpoints from history:<p>- Since the wheel was invented, humanity has never stopped building vehicles with wheels.<p>- Since the printing press was created, humanity has never lost the ability to mass-copy and distribute information.<p>- Since airplanes were invented, humans have never been unable to achieve flight.<p>I'd say it's perfectly reasonable to believe that humans as a whole will never lose the ability to read digital information in the future. Heck, I'd say it's the most likely outcome.<p>Humans learn from each other. Information "likes to spread". All of known history supports the idea that technology generally advances in one direction.</p>
]]></description><pubDate>Sat, 03 Aug 2024 03:30:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=41144481</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=41144481</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41144481</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Suspicious data pattern in recent Venezuelan election"]]></title><description><![CDATA[
<p>It seems you only skimmed the article. The concern is not rounded percentage points.<p>The concern is that the reported vote totals happen to be the closest possible integers to <i>exactly</i> those single-decimal percentages -- indicating that the totals were derived from the percentages, not from an actual tally of votes.<p>It's HIGHLY improbable that out of 10,058,774 votes, the counts for Maduro, Gonzalez, and "Other" would <i>all</i> land exactly on one-decimal percentages.</p>
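<p>The check is easy to run yourself. A minimal sketch (the per-candidate counts below are the widely reported CNE first-bulletin figures, supplied here as an assumption -- only the grand total appears above):</p>

```python
# Verify the pattern described above: each reported vote count equals the
# nearest integer to a one-decimal percentage of the grand total.
# NOTE: the per-candidate counts are the widely reported CNE figures,
# included as an assumption; only the grand total is given in the comment.
TOTAL = 10_058_774

reported = {
    "Maduro": (5_150_092, 51.2),
    "Gonzalez": (4_445_978, 44.2),
    "Other": (462_704, 4.6),
}

for name, (votes, pct) in reported.items():
    nearest = round(TOTAL * pct / 100)  # count implied by the percentage
    print(name, votes == nearest)
```

<p>If every count matches its derived value, the arithmetic ran percentages-first, which is the tell the article is pointing at.</p>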
]]></description><pubDate>Wed, 31 Jul 2024 22:02:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=41123951</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=41123951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41123951</guid></item><item><title><![CDATA[New comment by PaulStatezny in "Big Ball of Mud (1999)"]]></title><description><![CDATA[
<p>Sounds like you're saying either:<p>1. Engineers don't care about the health of the codebase, and it becomes/stays a ball of mud, OR...<p>2. They do care, and end up refactoring/rewriting it in a way that just creates even MORE complexity.<p>But I think this is a false dichotomy. Keeping a huge codebase moving in the right direction just happens to be very difficult, and as much an art as a science.<p>I haven't worked at Google, but from what I've heard they have huge codebases that aren't typically falling apart at the seams.</p>
]]></description><pubDate>Thu, 11 Jul 2024 15:34:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=40937813</link><dc:creator>PaulStatezny</dc:creator><comments>https://news.ycombinator.com/item?id=40937813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40937813</guid></item></channel></rss>