<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: nyyp</title><link>https://news.ycombinator.com/user?id=nyyp</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 09 May 2026 02:55:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=nyyp" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by nyyp in "Three Inverse Laws of AI"]]></title><description><![CDATA[
<p>With regard to my personal use of LLMs, I strongly agree with this framing. To take each point in turn:<p>Anthropomorphism: As we are all aware, providers are incentivized to post-train anthropomorphic behavior into their models, since it increases engagement. My regret is that instructing a model at prompt time to "reduce all niceties and speak plainly" probably reduces overall task efficacy, since we are pushing it outside the distribution it was trained on.<p>Deference: I view the trustworthiness of LLMs the same way I view the trustworthiness of Wikipedia and my friends: good enough for non-critical information. Wikipedia has factual errors, and my friends' casual conversation certainly has more, but most of the time that doesn't matter. For critical things, peer-reviewed, authoritative, able-to-be-held-liable sources will not go away. Unlike with anthropomorphism, providers are generally incentivized to improve this facet of their models, so it should get better over time.<p>Abdication of Responsibility: This is the one that bothers me most at work. More and more people are opening PRs whose abstractions were designed by Claude and never reasoned about further. Reviewing a PR often means asking the LLM to "find PR feedback" rather than reading the code. Arguments begin with "Claude suggested that...". I suspect this overall lack of ownership is building up a maintenance burden down the line, as the LLM ultimately commits the wrong code for the wrong abstractions.</p>
]]></description><pubDate>Tue, 05 May 2026 17:06:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=48025357</link><dc:creator>nyyp</dc:creator><comments>https://news.ycombinator.com/item?id=48025357</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48025357</guid></item><item><title><![CDATA[New comment by nyyp in "Show HN: Vibe coding a bookshelf with Claude Code"]]></title><description><![CDATA[
<p>This post felt AI-touched to me, but such usage falls on a spectrum: you can write the whole post yourself, have an LLM write all of it, or (what I suspect happened here) have the LLM "polish" your first draft.<p>Many weaker or non-native writers may reach for AI as an "editor's eye" without realizing that it nudges them toward sounding identical to every other blog post these days. And while I'm certainly growing tired of reading the same LLM style over and over, it's hard to fault someone for wanting to polish what they publish.</p>
]]></description><pubDate>Mon, 29 Dec 2025 19:10:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46424184</link><dc:creator>nyyp</dc:creator><comments>https://news.ycombinator.com/item?id=46424184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46424184</guid></item><item><title><![CDATA[New comment by nyyp in "OpenStreetMap Is Turning 20"]]></title><description><![CDATA[
<p>I'm very glad OpenStreetMap is still around. It has often contained data that I couldn't easily find elsewhere, and I've enjoyed being able to contribute to the places I care about.</p>
]]></description><pubDate>Sun, 11 Aug 2024 06:55:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=41214433</link><dc:creator>nyyp</dc:creator><comments>https://news.ycombinator.com/item?id=41214433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41214433</guid></item><item><title><![CDATA[New comment by nyyp in "FineWeb: Decanting the web for the finest text data at scale"]]></title><description><![CDATA[
<p>I'm always happy to see the proliferation of open-source resources for the next generation of generative models. But I strongly suspect that OpenAI and friends are all training on copyrighted content from the wealth of shadow book repositories available online [1]. Unless open models do the same, I doubt they will ever get meaningfully close to the quality of closed-source models.<p>Related: I also suspect this is one reason we get so little information about the exact data used to train Meta's Llama models ("open weights" vs. "open source").<p>[1]: <a href="https://www.annas-archive.org/llm" rel="nofollow">https://www.annas-archive.org/llm</a></p>
]]></description><pubDate>Mon, 03 Jun 2024 15:34:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=40563720</link><dc:creator>nyyp</dc:creator><comments>https://news.ycombinator.com/item?id=40563720</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40563720</guid></item><item><title><![CDATA[New comment by nyyp in "NetBSD hier(7)"]]></title><description><![CDATA[
<p>Aber waren Sie schon mal in Baden-Württemberg? ("But have you ever been to Baden-Württemberg?")<p>Reference for those unaware: <a href="https://commons.wikimedia.org/wiki/File:Nice_here._(Nett_hier.)_But_have_you_been_to_Baden-W%C3%BCrttemberg_(Aber_waren_Sie_schon_mal_in_Baden-W%C3%BCrttemberg,_Germany%3F)_sticker_(37713545446).jpg" rel="nofollow noreferrer">https://commons.wikimedia.org/wiki/File:Nice_here._(Nett_hie...</a></p>
]]></description><pubDate>Sun, 25 Jun 2023 17:06:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=36470369</link><dc:creator>nyyp</dc:creator><comments>https://news.ycombinator.com/item?id=36470369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36470369</guid></item><item><title><![CDATA[New comment by nyyp in "Cardpunch: Punch a punched card"]]></title><description><![CDATA[
<p>Before the Living Computers: Museum in Seattle closed, it had a working card punch, and it was cool to make your own.</p>
]]></description><pubDate>Tue, 02 Aug 2022 01:08:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=32314106</link><dc:creator>nyyp</dc:creator><comments>https://news.ycombinator.com/item?id=32314106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32314106</guid></item></channel></rss>