<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bad_username</title><link>https://news.ycombinator.com/user?id=bad_username</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 05:44:03 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bad_username" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bad_username in "Why the most valuable things you know are things you cannot say"]]></title><description><![CDATA[
<p>This is a reason why, before LLMs truly become contenders for replacing humans, the sharp distinction between pre-training, fine-tuning, and providing context in a conversation has to disappear. The model must be able to update itself (learn!) by calibrating all these massively multidimensional parameters on the fly, as it operates, like people do.</p>
]]></description><pubDate>Sun, 05 Apr 2026 14:37:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649913</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47649913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649913</guid></item><item><title><![CDATA[New comment by bad_username in "Why are executives enamored with AI, but ICs aren't?"]]></title><description><![CDATA[
<p>I do not think most executives are particularly enamored with AI. They are mostly driven by the fear of missing out. More precisely, their thought process is: if they bet on AI and fail, they can plausibly claim that it was the technology's fault (not good enough, poorly suited for the business, etc). But if they pass on AI by choice, and their competition succeeds, they will be blamed personally. The more hyped a technology is, the stronger this calculus is for managers. It's like Pascal's wager in a way.</p>
]]></description><pubDate>Sat, 28 Mar 2026 00:40:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47550287</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47550287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47550287</guid></item><item><title><![CDATA[New comment by bad_username in "Markdown Ate the World"]]></title><description><![CDATA[
<p>Came for Markdown, stayed for a very interesting glimpse into the MS Word file format history.</p>
]]></description><pubDate>Tue, 24 Mar 2026 06:44:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47499314</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47499314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47499314</guid></item><item><title><![CDATA[New comment by bad_username in "AI boom risks widening wealth divide, says BlackRock's Larry Fink"]]></title><description><![CDATA[
<p>This is exactly why Larry Fink makes these statements: to signal the correct virtues and keep himself out of trouble. This is the modus operandi for most people who possess unfairly oversized influence today.</p>
]]></description><pubDate>Tue, 24 Mar 2026 06:26:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47499207</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47499207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47499207</guid></item><item><title><![CDATA[New comment by bad_username in "Iran War Live Updates: U.S. and Iran Send Conflicting Signals on Peace Prospects"]]></title><description><![CDATA[
<p>> so much so that there's a wikipedia page dedicated to the falsehoods<p>Not detracting from the merits of your statement, but Wikipedia is not neutral; it is politically and ideologically biased, so it should not be used as a fair "measure" of things.</p>
]]></description><pubDate>Tue, 24 Mar 2026 06:22:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47499190</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47499190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47499190</guid></item><item><title><![CDATA[New comment by bad_username in "The Dude"]]></title><description><![CDATA[
<p>> Almost every male character in The Big Lebowski is, in some way, a failed version of what he thinks he should be.<p>What a fresh, contrarian take! So courageous.</p>
]]></description><pubDate>Mon, 23 Mar 2026 16:03:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47491347</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47491347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47491347</guid></item><item><title><![CDATA[New comment by bad_username in "A sufficiently detailed spec is code"]]></title><description><![CDATA[
<p>> There is no world where you input a document lacking clarity and detail and get a coding agent to reliably fill in that missing clarity and detail<p>That is not true, and the proof is that LLMs _can_ reliably generate (relatively small amounts of) working code from relatively terse descriptions. Code is the detail being filled in. Furthermore, LLMs are the ultimate detail fillers, because they are language interpolation/extrapolation machines. And they are popular precisely because they are usually very good at filling in details: LLMs use their vast knowledge to guess what detail to generate, so the result usually makes sense.<p>This doesn't detract much from the main point of the article though. Sometimes the interpolated detail is wrong (and nondeterministic), so, if a reliable result is to be achieved, important details have to be constrained, and for that they have to be specified. And whereas we have decades of tools and culture for coding, we largely don't have that for extremely detailed specs (except maybe at NASA or similar places). We could figure it out in the future, but we haven't yet.</p>
]]></description><pubDate>Thu, 19 Mar 2026 07:53:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47436202</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47436202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47436202</guid></item><item><title><![CDATA[New comment by bad_username in "Harold and George Destroy the World"]]></title><description><![CDATA[
<p>HN is turning into Reddit. It's noticeably worse than it was 5 years ago.</p>
]]></description><pubDate>Sun, 15 Mar 2026 15:56:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47388621</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47388621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47388621</guid></item><item><title><![CDATA[New comment by bad_username in "I lost my ability to learn anything new because of AI and I need your opinions"]]></title><description><![CDATA[
<p>Sounds like you didn't lose the ability, you lost motivation. Why learn Rust, you say, if an LLM can crank out a Rust app for me, and it will be good enough?<p>LLMs may have removed the critical need for a SW engineer to know details, like the syntax of Rust or the intricacies of its borrow checking semantics. But LLMs, I maintain, didn't remove the critical need for an engineer to learn _concepts_ and have a large, robust library of concepts in their head. Diverse, orthogonal concepts like data structures, security concerns, callbacks, recursion, event driven architecture, big O, cloud computing patterns, deadlocks, memory leaks, etc etc. As long as you are proficient with your concepts, you will easily catch up with the relevant details in any given situation. Once you've seen recursion, for example, you will have no trouble recognizing it in any language.<p>That's the beauty of LLMs: you don't _have_ to be good at technical details any more. But you still have to be very good with concepts, not just to be able to use LLMs properly, but also to _be in control_ of their work. LLM slop is dangerous not because of incorrect details like bad syntax. It is dangerous because it misplaces concepts: it may use a list where you need a hash map and degrade performance, it may forget a security constraint and cause a data leak, or it can be specific where it needs to be general, etc. An engineer needs to know and check the concepts if they want to remain in control. (And you absolutely do want that.)<p>But it is impossible, or very impractical, to just learn an abstract concept out of thin air. The normal way to learn a concept is to see its concrete instantiation somewhere, in all its detailed glory, and then retain its abstract version in your head.<p>So, the only way to stay relevant and stay in control is to have a robust concept library in your mind. And the only way to get that is to immerse yourself in many real technical situations, the details of which you must crack first, but are free to forget later. That is learning, and that is still important today in the age of LLMs.</p>
]]></description><pubDate>Wed, 04 Mar 2026 07:22:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47244266</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47244266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47244266</guid></item><item><title><![CDATA[New comment by bad_username in "Nobody gets promoted for simplicity"]]></title><description><![CDATA[
<p>Given the modern trend of using genderless singular "they", that seems intentional (and off-putting).</p>
]]></description><pubDate>Wed, 04 Mar 2026 06:49:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47244022</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47244022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47244022</guid></item><item><title><![CDATA[New comment by bad_username in "Don't use passkeys for encrypting user data"]]></title><description><![CDATA[
<p>The author's concern about "misgendering" an imaginary person (with an unambiguously female name) is quite odd.</p>
]]></description><pubDate>Sat, 28 Feb 2026 06:39:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191268</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47191268</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191268</guid></item><item><title><![CDATA[New comment by bad_username in "Don't use passkeys for encrypting user data"]]></title><description><![CDATA[
<p>> you can remember them, but are you also suggesting to not use generated passwords?<p>You can remember a strong generated password if it's a pass phrase. Better "rememberability" with the same amount of entropy.</p>
]]></description><pubDate>Sat, 28 Feb 2026 06:36:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191255</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47191255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191255</guid></item><item><title><![CDATA[New comment by bad_username in "You are not supposed to install OpenClaw on your personal computer"]]></title><description><![CDATA[
<p>I feel this OpenClaw stuff is a bit like the "crypto" of agentic AI. Promise much, move fast and break things, be shiny and trendy, have a multitude of names, be moderately useful while things go right (and be very useful to malicious actors), be catastrophic and leave no recourse when things inevitably go wrong.</p>
]]></description><pubDate>Tue, 24 Feb 2026 04:28:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132868</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47132868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132868</guid></item><item><title><![CDATA[New comment by bad_username in "Attention Media ≠ Social Networks"]]></title><description><![CDATA[
<p>> These algorithmic feeds clearly work for someone<p>They clearly work for advertisers, and that's all that matters.</p>
]]></description><pubDate>Sun, 22 Feb 2026 15:23:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47111722</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47111722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47111722</guid></item><item><title><![CDATA[New comment by bad_username in "Evidence of the bouba-kiki effect in naïve baby chicks"]]></title><description><![CDATA[
<p>Objects that have sharp edges generate higher-frequency harmonics when agitated, because smaller features resonate at higher frequencies (just as shorter strings ring at a higher pitch). Objects that are round resonate at low frequencies only. The "kiki" sound has more high-frequency content than the "bouba" sound, and it's no mystery why the brain associates one with the other.</p>
]]></description><pubDate>Sun, 22 Feb 2026 08:10:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47109272</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=47109272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47109272</guid></item><item><title><![CDATA[New comment by bad_username in "Shifts in U.S. Social Media Use, 2020–2024: Decline, Fragmentation, Polarization (2025)"]]></title><description><![CDATA[
<p>Do you remember when these things were called social NETWORKS, as in something you navigate and explore? Then they gradually became social MEDIA, as in something you consume...</p>
]]></description><pubDate>Mon, 09 Feb 2026 05:54:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46941992</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=46941992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941992</guid></item><item><title><![CDATA[New comment by bad_username in "When Every Network is 192.168.1.x"]]></title><description><![CDATA[
<p>Your website landing page is great. No stock photo hipsters drinking coffee, no corporate fluff amid whitespace wasteland. Just straight to the point. Rare sight today.</p>
]]></description><pubDate>Thu, 29 Jan 2026 08:02:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46807143</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=46807143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46807143</guid></item><item><title><![CDATA[New comment by bad_username in "Kidnapped by Deutsche Bahn"]]></title><description><![CDATA[
<p>Was your ticket without an assigned seat? Because if there was an assigned seat, surely it would have been in the correct carriage?</p>
]]></description><pubDate>Mon, 29 Dec 2025 15:38:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46421734</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=46421734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46421734</guid></item><item><title><![CDATA[New comment by bad_username in "How I Left YouTube"]]></title><description><![CDATA[
<p>> If you're a white or Asian dude, everyone assumes you're good at coding, just by default. You could have graduated yesterday with a degree in law, and people assume you can code<p>Close tab</p>
]]></description><pubDate>Thu, 25 Dec 2025 10:18:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46383478</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=46383478</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46383478</guid></item><item><title><![CDATA[New comment by bad_username in "If you don't design your career, someone else will (2014)"]]></title><description><![CDATA[
<p>I don't think just the raw distance (from here) is the metric to necessarily optimize for. It may be more useful to thoroughly search the nearby area, for example - especially if you feel you're in a good neighborhood already.</p>
]]></description><pubDate>Mon, 22 Dec 2025 14:26:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46354344</link><dc:creator>bad_username</dc:creator><comments>https://news.ycombinator.com/item?id=46354344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46354344</guid></item></channel></rss>