<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: halfadot</title><link>https://news.ycombinator.com/user?id=halfadot</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 15:55:51 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=halfadot" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by halfadot in "The AI coding trap"]]></title><description><![CDATA[
<p>> The article sort of goes sideways with this idea but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.<p>No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to a lack of will on the developer's side, and that lack of will to learn will produce the same outcomes for you whether or not you're using an LLM. You can spend as much time as you want asking the LLM for in-depth explanations and examples to test your understanding.<p>So many of the criticisms of coding with LLMs I've seen really do sound like they're coming from people who started with a pre-existing bias, fiddled with it for a short bit (or worse, never actually tried it at all), and assumed their limited experience is the be-all and end-all of the subject. Either that, or they're typical skill issues.</p>
]]></description><pubDate>Sun, 28 Sep 2025 18:45:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45406797</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=45406797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45406797</guid></item><item><title><![CDATA[New comment by halfadot in "Ask HN: How do you promote your personal project in limited bugget?"]]></title><description><![CDATA[
<p>>In that context, the answer in this case is to simply start talking about your project and showing it to people and asking for feedback (as you have done), and be conscious that what you're looking for is signals of user interest -- little sparks that you can convert into tiny flames so that you can start a fire.<p>So all of this text just to tell him to do what he's already been doing?</p>
]]></description><pubDate>Wed, 21 May 2025 16:47:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44053371</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=44053371</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44053371</guid></item><item><title><![CDATA[New comment by halfadot in "Discord Unveiled: A Comprehensive Dataset of Public Communication (2015-2024)"]]></title><description><![CDATA[
<p>>The hypothetical is irrelevant here; what is germane is that the expectation of privacy by the individual participants, and the terms which bind people who use that service.<p>How can you have an expectation of privacy in a public forum? Where did this bizarre disorder originate, where people knowingly put their writing out there for literally anyone to read, then turn around and start talking about "expectations of privacy" when they realize what it entails?</p>
]]></description><pubDate>Wed, 21 May 2025 16:39:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44053291</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=44053291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44053291</guid></item><item><title><![CDATA[New comment by halfadot in "LLMs get lost in multi-turn conversation"]]></title><description><![CDATA[
<p>You produced a passive-aggressive taunt instead of addressing the argument.
For clarity: nobody was asking about your business decisions, and nobody is intimidated by your story. Your personal opinions about "attitude" are irrelevant to what's being discussed (LLMs allowing optimal use of time in certain cases). Also, unless your boss made the firing decision, you weren't forced to do anything.</p>
]]></description><pubDate>Wed, 21 May 2025 16:24:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44053142</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=44053142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44053142</guid></item><item><title><![CDATA[New comment by halfadot in "I'd rather read the prompt"]]></title><description><![CDATA[
<p>For almost everyone in the world, the answer is the latter.</p>
]]></description><pubDate>Mon, 05 May 2025 00:37:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43890883</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=43890883</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43890883</guid></item><item><title><![CDATA[New comment by halfadot in "I'd rather read the prompt"]]></title><description><![CDATA[
<p>> Leaning on an LLM to ease through those tough moments is 100% short circuiting the learning process.<p>Sounds like "back in my day" type of complaining. Do you have any evidence for this "100%" claim, or is it just "AI bad" bandwagoning?<p>> But you're definitely not learning how to write.<p>How would you know? You've never tested him. You're making a far-reaching assumption about someone's learning based on their use of an aid. It's the equivalent of saying "you're definitely not learning how to ride a bicycle if you use training wheels".</p>
]]></description><pubDate>Mon, 05 May 2025 00:35:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43890868</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=43890868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43890868</guid></item><item><title><![CDATA[New comment by halfadot in "I'd rather read the prompt"]]></title><description><![CDATA[
<p>>  Don’t let a computer write for you! I say this not for reasons of intellectual honesty, or for the spirit of fairness. I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.<p>Having spent about two decades reading other humans' "original thoughts", I have nothing else to say here other than: doubt.</p>
]]></description><pubDate>Mon, 05 May 2025 00:31:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=43890851</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=43890851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43890851</guid></item><item><title><![CDATA[New comment by halfadot in "Mistral Small 3"]]></title><description><![CDATA[
<p>Luckily for Mistral, capital also exists in countries other than the USA.</p>
]]></description><pubDate>Thu, 30 Jan 2025 21:47:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42882501</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=42882501</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42882501</guid></item><item><title><![CDATA[New comment by halfadot in "An analysis of DeepSeek's R1-Zero and R1"]]></title><description><![CDATA[
<p>> Three months later in April when this tagged data is used to train the next iteration, the AI can successfully learn that today's date is actually January 29th.<p>Such an ingenious attack, surely none of these companies ever considered it.</p>
]]></description><pubDate>Thu, 30 Jan 2025 03:55:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42874656</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=42874656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42874656</guid></item><item><title><![CDATA[New comment by halfadot in "An analysis of DeepSeek's R1-Zero and R1"]]></title><description><![CDATA[
<p>Yeah, it's great comedy.<p>> Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users.<p>Because a website with lots of links is executable code now. And the scrapers totally don't have any checks to see whether they've spent too much time on a single domain. And no data verification ever occurs.
Hell, why not go all the way? Just put up a big warning telling everyone: "Warning, this is a cyber-nuclear weapon! Do not deploy unless you're a super rad bad dude who totally traps the evil AI robot and wins the day!"</p>
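To make the objection concrete, here's a hedged sketch of the kind of per-domain budget check the comment says real scrapers already carry. All names and thresholds are hypothetical, not any actual crawler's API; the point is only that such a check turns an "infinite maze" into a bounded, cheap detour.

```python
import time

class DomainBudget:
    """Track pages fetched and wall-clock time per domain; refuse once a budget is exhausted."""

    def __init__(self, max_pages=1000, max_seconds=3600):
        self.max_pages = max_pages
        self.max_seconds = max_seconds
        self.pages = {}     # domain -> pages fetched so far
        self.started = {}   # domain -> timestamp of first fetch

    def allow(self, domain, now=None):
        now = time.time() if now is None else now
        self.started.setdefault(domain, now)
        count = self.pages.get(domain, 0)
        if count >= self.max_pages:
            return False  # page budget exhausted: crawler moves on
        if now - self.started[domain] > self.max_seconds:
            return False  # time budget exhausted: crawler moves on
        self.pages[domain] = count + 1
        return True
```

A crawler consulting `allow()` before each fetch spends at most `max_pages` requests and `max_seconds` of wall-clock time on any single domain, maze or not.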
]]></description><pubDate>Thu, 30 Jan 2025 03:53:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42874648</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=42874648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42874648</guid></item><item><title><![CDATA[New comment by halfadot in "An analysis of DeepSeek's R1-Zero and R1"]]></title><description><![CDATA[
<p>And they are hilarious, because they ride on the assumption that multi-billion-dollar companies are all just employing naive imbeciles who push buttons and watch the lights on the server racks go, never checking the datasets.</p>
]]></description><pubDate>Thu, 30 Jan 2025 03:49:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42874628</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=42874628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42874628</guid></item><item><title><![CDATA[New comment by halfadot in "An analysis of DeepSeek's R1-Zero and R1"]]></title><description><![CDATA[
<p>AI models don't assume anything. AI models are just statistical tools. Their data is prepared by humans, who aren't morons. What is it with these super-ignorant AI critiques popping up everywhere?</p>
]]></description><pubDate>Thu, 30 Jan 2025 03:48:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=42874619</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=42874619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42874619</guid></item><item><title><![CDATA[New comment by halfadot in "An analysis of DeepSeek's R1-Zero and R1"]]></title><description><![CDATA[
<p>What makes people think companies like OpenAI can't just pay experts for verified true data? Why do all these "gotcha" replies always revolve around the idea that everyone developing AI models is credulous and stupid?</p>
]]></description><pubDate>Thu, 30 Jan 2025 03:46:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=42874604</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=42874604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42874604</guid></item><item><title><![CDATA[New comment by halfadot in "An analysis of DeepSeek's R1-Zero and R1"]]></title><description><![CDATA[
<p>It is absolutely fascinating to read the fantasy produced by people who (apparently) think they live in a sci-fi movie.<p>The companies whose datasets you're "poisoning" absolutely know about the attempts to poison data.
All the ideas I've seen linked on this site so far about how they're going to totally defeat the AI companies' models sound like a mixture of wishful thinking and narcissism.</p>
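For the "they check the datasets" side of the argument, a minimal sketch of the kind of crude quality filter a data pipeline might run, assuming nothing about any real company's pipeline: auto-generated link mazes tend to have an unusually high link density or heavily repeated n-grams, both cheap to detect. Function name and thresholds are illustrative.

```python
import re

def looks_auto_generated(html, max_link_ratio=0.5, max_repeat_ratio=0.3):
    """Heuristic: flag pages dominated by links or by repeated trigrams."""
    links = re.findall(r"<a\s", html)
    # Strip tags, then tokenize the visible text into words.
    words = re.findall(r"\w+", re.sub(r"<[^>]+>", " ", html))
    if not words:
        return True  # nothing but markup
    link_ratio = len(links) / len(words)
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    repeat_ratio = 1 - len(set(trigrams)) / len(trigrams) if trigrams else 0
    return link_ratio > max_link_ratio or repeat_ratio > max_repeat_ratio
```

A filter this simple would already flag a static maze of near-identical interlinked pages, which is the comment's point: the poisoning schemes assume no such check exists.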
]]></description><pubDate>Thu, 30 Jan 2025 03:44:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42874582</link><dc:creator>halfadot</dc:creator><comments>https://news.ycombinator.com/item?id=42874582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42874582</guid></item></channel></rss>