<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: par1970</title><link>https://news.ycombinator.com/user?id=par1970</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 09:47:39 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=par1970" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by par1970 in "Claude Code Found a Linux Vulnerability Hidden for 23 Years"]]></title><description><![CDATA[
<p>But the service also tells criminals and adversaries about the bomb locations.</p>
]]></description><pubDate>Sat, 04 Apr 2026 19:57:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642738</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47642738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642738</guid></item><item><title><![CDATA[New comment by par1970 in "Significant Raise of Reports"]]></title><description><![CDATA[
<p>Yeah, maybe you are right.  But is doing math and reasoning about Turing machines a priori?  If so, then it seems plausible to me that reasoning about a codebase (without running it) is also ‘a priori’.</p>
]]></description><pubDate>Thu, 02 Apr 2026 20:41:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47619895</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47619895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47619895</guid></item><item><title><![CDATA[New comment by par1970 in "Significant Raise of Reports"]]></title><description><![CDATA[
<p>> What do you mean "a priori understanding codebases"?<p>I took him to be distinguishing between (1) just reading the code/docs and reasoning about it, and (2) that + crafting and running tests.</p>
]]></description><pubDate>Thu, 02 Apr 2026 18:42:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47618476</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47618476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47618476</guid></item><item><title><![CDATA[New comment by par1970 in "What major works of literature were written after age of 85? 75? 65?"]]></title><description><![CDATA[
<p>Why?</p>
]]></description><pubDate>Tue, 31 Mar 2026 19:55:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47592596</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47592596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47592596</guid></item><item><title><![CDATA[New comment by par1970 in "Ask HN: Are the newest LLMs better than you at programming?"]]></title><description><![CDATA[
<p>Did you tell it that it should test, or did you have it generate actual tests that you could run if you wanted to?</p>
]]></description><pubDate>Thu, 19 Mar 2026 22:13:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47447060</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47447060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47447060</guid></item><item><title><![CDATA[New comment by par1970 in "Ask HN: Are the newest LLMs better than you at programming?"]]></title><description><![CDATA[
<p>How much domain experience do you have?  Is it helping you solve problems for paying customers?</p>
]]></description><pubDate>Thu, 19 Mar 2026 22:12:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47447046</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47447046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47447046</guid></item><item><title><![CDATA[New comment by par1970 in "Ask HN: Are the newest LLMs better than you at programming?"]]></title><description><![CDATA[
<p>If your project requires the solution of a tricky algorithmic issue, then is the AI system able to solve that part, or do you have to give it the solution?</p>
]]></description><pubDate>Thu, 19 Mar 2026 21:38:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47446618</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47446618</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47446618</guid></item><item><title><![CDATA[New comment by par1970 in "Ask HN: Are the newest LLMs better than you at programming?"]]></title><description><![CDATA[
<p>What models + versions are you using?<p>Is it bad at designing systems that don't have a bunch of integrations?</p>
]]></description><pubDate>Thu, 19 Mar 2026 21:16:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47446255</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47446255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47446255</guid></item><item><title><![CDATA[New comment by par1970 in "Ask HN: Are the newest LLMs better than you at programming?"]]></title><description><![CDATA[
<p>> I don't use ChatGPT, but I've been using an agent with Claude Sonnet 4.<p>Are you using Sonnet 4.6?<p>> So this AI Agent... It is much faster at doing code when given specific instructions. But it keeps losing context on architecture, and I can't really let it build complex things with interdependencies that build on each other.<p>I've only built small things (< 1000 lines) with the systems, so I might be missing this problem.<p>Is it better than you at building small self-contained things?<p>> And I get a bad feeling when I then wonder how this app is doing what it does, because my agent can't explain it, and I would be stupid to believe what it hallucinated, because it sounds really solid until you scratch the construction.<p>Do you ask it to generate test suites for the things that it builds?<p>> it would also be faster to build a catastrophic spaghetti code nightmare if not used with great care.<p>noted</p>
]]></description><pubDate>Thu, 19 Mar 2026 21:07:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47446103</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47446103</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47446103</guid></item><item><title><![CDATA[New comment by par1970 in "Ask HN: Are the newest LLMs better than you at programming?"]]></title><description><![CDATA[
<p>Which models + versions are you using?  Can you give a specific problem that you found them to be bad at?</p>
]]></description><pubDate>Thu, 19 Mar 2026 20:59:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47446001</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47446001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47446001</guid></item><item><title><![CDATA[Ask HN: Are the newest LLMs better than you at programming?]]></title><description><![CDATA[
<p>I've been programming for 10+ years.  From my usage of ChatGPT 5.4, it seems to me that it's better than me at programming.  I never thought this about any of the ChatGPT 4.* models that I tried.<p>How do your abilities compare to the newest models?<p>edit: Please specify which model and version you are talking about.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47445663">https://news.ycombinator.com/item?id=47445663</a></p>
<p>Points: 4</p>
<p># Comments: 19</p>
]]></description><pubDate>Thu, 19 Mar 2026 20:36:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47445663</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=47445663</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47445663</guid></item><item><title><![CDATA[New comment by par1970 in "Hemp ban hidden inside government shutdown bill"]]></title><description><![CDATA[
<p>Maybe we are talking past one another.<p>> Right, but that's explicitly not the body of government meant to represent people.<p>I haven't claimed that the Senate was intended to represent the people.  I also haven't claimed that OP claimed that the Senate was intended to represent the people.<p>> So is he saying the Senate is fundamentally a ridiculous way of representing 100 states, or is he saying the House is fundamentally a ridiculous way of representing 350 million people?<p>He didn't say either of those things.  He said this: "The Senate is fundamentally a ridiculous way of representing 350 million people."</p>
]]></description><pubDate>Thu, 13 Nov 2025 22:39:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45921613</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=45921613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45921613</guid></item><item><title><![CDATA[New comment by par1970 in "Hemp ban hidden inside government shutdown bill"]]></title><description><![CDATA[
<p>I think OP is arguing that because they literally said "The Senate is fundamentally a ridiculous way of representing 350 million people and we’re going to continue to get absurd unrepresentative outcomes for as long as it remains a relevant body."<p>What do you think they are arguing?</p>
]]></description><pubDate>Thu, 13 Nov 2025 20:22:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45920023</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=45920023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45920023</guid></item><item><title><![CDATA[New comment by par1970 in "Hemp ban hidden inside government shutdown bill"]]></title><description><![CDATA[
<p>Are you arguing this?<p>(Premise 1) If a country has 350 million people, then the Senate will produce unrepresentative outcomes.<p>(Premise 2) America has 350 million people.<p>(Conclusion 1) So, the Senate will produce unrepresentative outcomes in America.<p>(Conclusion 2) So, the Senate is bad for America.</p>
]]></description><pubDate>Thu, 13 Nov 2025 19:52:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45919642</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=45919642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45919642</guid></item><item><title><![CDATA[New comment by par1970 in "Why "everyone dies" gets AGI all wrong"]]></title><description><![CDATA[
<p>> We're nowhere close to AGI and don't have a clue how to get there.<p>Do you have an argument?</p>
]]></description><pubDate>Sun, 02 Nov 2025 01:52:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45787216</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=45787216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45787216</guid></item><item><title><![CDATA[New comment by par1970 in "Grapevine canes can be converted into plastic-like material that will decompose"]]></title><description><![CDATA[
<p>So do we already do this? And if not, why not?</p>
]]></description><pubDate>Sun, 14 Sep 2025 23:57:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45244579</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=45244579</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45244579</guid></item><item><title><![CDATA[New comment by par1970 in "SCP-055 is an "antimeme" – it erases itself from memory when observed"]]></title><description><![CDATA[
<p>> Sadly, the answer is that you can't.<p>Suppose there are two distinct entities, each such that if it is learned about, then it kills the learner, call them Geigh and Ritaar.  What happens when Geigh learns about Ritaar?</p>
]]></description><pubDate>Wed, 16 Jul 2025 01:00:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44577640</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=44577640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44577640</guid></item><item><title><![CDATA[New comment by par1970 in "SCP-055 is an "antimeme" – it erases itself from memory when observed"]]></title><description><![CDATA[
<p>How did you make a blank comment? I thought HN prevented it.</p>
]]></description><pubDate>Wed, 16 Jul 2025 00:54:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44577596</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=44577596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44577596</guid></item><item><title><![CDATA[New comment by par1970 in "College baseball, venture capital, and the long maybe"]]></title><description><![CDATA[
<p>How is that the same thing?</p>
]]></description><pubDate>Fri, 20 Jun 2025 13:58:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44327796</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=44327796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44327796</guid></item><item><title><![CDATA[New comment by par1970 in ""AI", students, and epistemic crisis"]]></title><description><![CDATA[
<p>Okay.  So, in your original comment are you asserting that teachers are mostly telling students to believe propositions without giving any epistemic justification for those propositions?</p>
]]></description><pubDate>Sun, 07 Jul 2024 12:10:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=40897037</link><dc:creator>par1970</dc:creator><comments>https://news.ycombinator.com/item?id=40897037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40897037</guid></item></channel></rss>