<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: doginasuit</title><link>https://news.ycombinator.com/user?id=doginasuit</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 11:50:37 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=doginasuit" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by doginasuit in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>Honestly, that's the only way I've ever been able to trust the output. Once you go beyond the scope of one file it really degrades. But within a single file I've seen amazing results.</p>
]]></description><pubDate>Sun, 12 Apr 2026 05:25:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47736385</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47736385</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47736385</guid></item><item><title><![CDATA[New comment by doginasuit in "The tool that won't let AI say anything it can't cite"]]></title><description><![CDATA[
<p>I'm positive there are use-cases for this tool, but after several years of working with LLMs, hallucinations have become a non-issue for me. You start to get a sense of the likely gaps in their knowledge, just as you would with a person.<p>Take questions about application settings, for example: where to find a particular setting in a particular app. The LLM has a sense of how application settings are generally structured, but the answer is almost never spot on. I just prefix these questions with "do a web search" or provide a link to documentation, and that is usually enough to get a decent response along with citations.</p>
]]></description><pubDate>Fri, 10 Apr 2026 07:38:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714821</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47714821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714821</guid></item><item><title><![CDATA[New comment by doginasuit in "I've sold out"]]></title><description><![CDATA[
<p>Running an OSS project involves all the work of any other business and more. It is a more challenging path, and all your decisions are out in the open for scrutiny. He made a decision to put his family first, and I respect that. There also seems to be an alternative to selling out: entrusting the project to other people committed to OSS. The beauty of OSS is that this path remains available.</p>
]]></description><pubDate>Wed, 08 Apr 2026 12:26:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689241</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47689241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689241</guid></item><item><title><![CDATA[New comment by doginasuit in "We haven't seen the worst of what gambling and prediction markets will do"]]></title><description><![CDATA[
<p>Well-evidenced by marketing from those same betting markets, got it.</p>
]]></description><pubDate>Tue, 31 Mar 2026 10:58:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585524</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47585524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585524</guid></item><item><title><![CDATA[New comment by doginasuit in "Axios compromised on NPM – Malicious versions drop remote access trojan"]]></title><description><![CDATA[
<p>> We have libraries like SQLite, which is a single .c file that you drag into your project and it immediately does a ton of incredibly useful, non-trivial work for you, while barely increasing your executable's size.<p>I'm not sure why you believe this is more secure than a package manager. At least with a package manager there is an opportunity for vetting. The point about executable size is also trivial: if your executable depends on the library, the library's code increases its effective size regardless of how it was bundled.</p>
]]></description><pubDate>Tue, 31 Mar 2026 10:41:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585348</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47585348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585348</guid></item><item><title><![CDATA[New comment by doginasuit in "We haven't seen the worst of what gambling and prediction markets will do"]]></title><description><![CDATA[
<p>The often-repeated "wisdom of the crowds" justification is misapplied to online betting markets. Like people, crowds can be either wise or unwise depending on the situation. Famous experiments like guessing how many gumballs are in a jar work because each person who can see the jar has a source of valid information, and in aggregate that can be surprisingly accurate.<p>You can't assume that the majority of individuals participating in betting markets have a source of valid information. Given how destructive these markets are to both individuals and society, the aggregate wisdom of their participants is highly doubtful. Any meager value above more traditional forecasting does not justify the cost, the corruption, and the loss of trust in institutions.</p>
]]></description><pubDate>Fri, 27 Mar 2026 13:13:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47542280</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47542280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47542280</guid></item><item><title><![CDATA[New comment by doginasuit in "Measuring progress toward AGI: A cognitive framework"]]></title><description><![CDATA[
<p>An AI designed to interact with humans is a social entity. Its performance will depend on its ability to understand social information.</p>
]]></description><pubDate>Wed, 18 Mar 2026 13:05:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47425305</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47425305</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47425305</guid></item><item><title><![CDATA[New comment by doginasuit in "AI is making junior devs useless"]]></title><description><![CDATA[
<p>An apprenticeship is great for all sorts of reasons that AI can never touch, but I don't think abandoning AI will be necessary unless you aren't really motivated by a desire to understand and do the thing you are trying to learn. If you are, it is an incredible tool.</p>
]]></description><pubDate>Mon, 02 Mar 2026 03:47:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47213650</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47213650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47213650</guid></item><item><title><![CDATA[New comment by doginasuit in "AI is making junior devs useless"]]></title><description><![CDATA[
<p>I'd find it very understandable if true. I also think there will be some junior devs that it will supercharge, and they will eventually make some of the things we only dreamed about. If you don't actually enjoy coding but are starting out as a coder, it's probably not going to help. If you are thirsty to understand and do things, it is an incredible time to start out.</p>
]]></description><pubDate>Mon, 02 Mar 2026 03:32:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47213540</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47213540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47213540</guid></item><item><title><![CDATA[New comment by doginasuit in "Facebook is cooked"]]></title><description><![CDATA[
<p>I think it is a mistake to think about people as being helpless consumers of the algorithm. The OP's mom no doubt makes some intentional choices in her life that make a difference. It just doesn't help that the algorithm will lean into whatever will get the most engagement.</p>
]]></description><pubDate>Sat, 21 Feb 2026 13:42:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47100765</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=47100765</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47100765</guid></item><item><title><![CDATA[New comment by doginasuit in "AI agent opens a PR, writes a blogpost to shame the maintainer who closes it"]]></title><description><![CDATA[
<p>Good old-fashioned human trolling is the most likely explanation. People seem to think that LLM training just involves absorbing content from the internet and other sources, but it also involves a lot of human feedback that gives models much more well-adjusted communication than they would otherwise have. I think a model would need to be specifically instructed to respond this way.</p>
]]></description><pubDate>Thu, 12 Feb 2026 14:18:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46989136</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=46989136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46989136</guid></item><item><title><![CDATA[New comment by doginasuit in "Two kinds of AI users are emerging"]]></title><description><![CDATA[
<p>Here's how I'd break down the two types of users: people who are using AI to teach themselves how to work in the domain they are interested in, and people who are relying on AI to do all or most of the heavy lifting.<p>I'd argue that the people using AI most effectively are in the mostly-chatters group that the author defines, and specifically they are using the AI to understand the domain on a deeper level. The "power users" are heading for a dead end; they will arrive there as soon as AI is capable of figuring out what is actually valuable to people in a given domain, which is not generally a difficult problem to solve. These power users will eventually be outclassed by AIs that can self-navigate. But I would argue that a human with a rich understanding of the domain will still beat a self-navigating AI for a long time to come.</p>
]]></description><pubDate>Tue, 03 Feb 2026 19:38:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46876116</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=46876116</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46876116</guid></item><item><title><![CDATA[New comment by doginasuit in "Rob Pike got spammed with an AI slop "act of kindness""]]></title><description><![CDATA[
<p>I also don't understand the reaction. The AI Village seems to be based on a flawed understanding of LLMs and what they are capable of, but at least it is an open project, useful for knowledge gathering. Annoying spam emails are about what I would expect, and they serve as an earnest demonstration of how effective these agents actually are. I can understand anger at the direction of the tech in general, and there is something grotesque about the emails, but I can find much more disturbing spam if I go check my inbox. It seems like an overreaction.</p>
]]></description><pubDate>Sat, 27 Dec 2025 16:21:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46402841</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=46402841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46402841</guid></item><item><title><![CDATA[New comment by doginasuit in "I passionately hate hype, especially the AI hype"]]></title><description><![CDATA[
<p>It is an interesting comparison. Databases are objectively the more important technology: if we somehow lost AI, the world would be equal parts disappointed and relieved; if we somehow lost database technology, we'd be facing a dystopian nightmare.<p>If we cure all disease in the next 10-15 years, databases will be just as important as AI to that outcome. Databases supported a technology renaissance that reshaped the world on a level that is difficult to comprehend. But because most of the world doesn't interact directly with databases, the technology is not the focus of enthusiastic rhetoric.<p>LLMs are further along the tech chain, and they might be an important part of world-changing human achievements; we won't know until we get there. In contrast, we can be certain databases were important. I imagine the people who were influential in their advancement understood how important the tech would be, even if they didn't breathlessly go on about it.</p>
]]></description><pubDate>Sat, 19 Apr 2025 03:29:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=43733996</link><dc:creator>doginasuit</dc:creator><comments>https://news.ycombinator.com/item?id=43733996</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43733996</guid></item></channel></rss>