<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: eslaught</title><link>https://news.ycombinator.com/user?id=eslaught</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 14:38:16 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=eslaught" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by eslaught in "Show HN: I built a Cargo-like build tool for C/C++"]]></title><description><![CDATA[
<p>Just popping in here because people seem to be surprised by:<p>> I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.<p>This is exactly the use case in HPC. We always build with -march=native and go to some trouble to enable all the appropriate vectorization flags (e.g., for PowerPC) that don't come along automatically with the -march=native setting.<p>Every HPC machine is a special snowflake, often with its own proprietary network stack, so you can forget about binaries being portable. Even on your own machine you'll be recompiling your binaries every time the machine goes down for major maintenance.</p>
]]></description><pubDate>Fri, 10 Apr 2026 05:39:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714062</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47714062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714062</guid></item><item><title><![CDATA[New comment by eslaught in "Linux Running in a PDF (2025)"]]></title><description><![CDATA[
<p>Post is new but the original PDF is from 2025.<p>Previous discussion: <a href="https://news.ycombinator.com/item?id=42959775">https://news.ycombinator.com/item?id=42959775</a></p>
]]></description><pubDate>Fri, 03 Apr 2026 19:38:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47631179</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47631179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47631179</guid></item><item><title><![CDATA[Sunset of Bitbucket Issues and Wikis]]></title><description><![CDATA[
<p>Article URL: <a href="https://community.atlassian.com/forums/Bitbucket-articles/Announcing-sunset-of-Bitbucket-Issues-and-Wikis/ba-p/3193882">https://community.atlassian.com/forums/Bitbucket-articles/Announcing-sunset-of-Bitbucket-Issues-and-Wikis/ba-p/3193882</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47520355">https://news.ycombinator.com/item?id=47520355</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 25 Mar 2026 17:20:21 +0000</pubDate><link>https://community.atlassian.com/forums/Bitbucket-articles/Announcing-sunset-of-Bitbucket-Issues-and-Wikis/ba-p/3193882</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47520355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47520355</guid></item><item><title><![CDATA[New comment by eslaught in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>Please don't take up space in the comment section with accusations. You can report this at the email below and the mods will look at it:<p>> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.<p>> <a href="https://news.ycombinator.com/newsguidelines.html">https://news.ycombinator.com/newsguidelines.html</a></p>
]]></description><pubDate>Sat, 21 Mar 2026 19:43:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47470541</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47470541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47470541</guid></item><item><title><![CDATA[New comment by eslaught in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>Without an empirical methodology it's hard to know how true this is. There are known and well-documented human biases (e.g., the placebo effect) that could easily be involved here. And besides that, there's a convincing (but often overlooked on HN) argument to be made that modern LLMs are optimized in the same manner as other attention economy technologies. That is to say, they're addictive in the same general way that the YouTube/TikTok/Facebook/etc. feed algorithms are. They may be useful, but they also manipulate your attention, and it's difficult to disentangle the two when the person evaluating the claims is the same person (potentially) being manipulated.<p>I'd love to see an empirical study that actually dives into this and attempts to show, one way or another, how true it is. Otherwise it's all just anecdotes.</p>
]]></description><pubDate>Sat, 21 Mar 2026 19:40:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47470509</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47470509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47470509</guid></item><item><title><![CDATA[New comment by eslaught in "Shall I implement it? No"]]></title><description><![CDATA[
<p>I agree with your points. Answering your one question for posterity:<p>> Also how were the data races significant if nobody noticed them for a decade ?<p>They only replicated in our CI, so it was mainly an annoyance for those of us doing release engineering (because when you run ~150 jobs you'll inevitably get ~2-4 failures). So it's not that no one noticed, but it was always a matter of prioritization vs. other things we were working on at the time.<p>But that doesn't mean they got zero effort put into them. We tried multiple times to replicate, perhaps a total of 10-20 human hours over a decade or so (spread out among maybe 3 people, all CS PhDs), and never got close enough to a smoking gun to develop a theory of the bug (and therefore were never able to develop a fix).<p>To be clear, I don't think this "proves" anything one way or another, as it's only one data point, but given this is a team of CS PhDs intimately familiar with tools for race detection and debugging, it's notable that the tools never meaningfully helped us debug this.</p>
]]></description><pubDate>Tue, 17 Mar 2026 16:34:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47414976</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47414976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47414976</guid></item><item><title><![CDATA[New comment by eslaught in "A Survival Guide to a PhD (2016)"]]></title><description><![CDATA[
<p>> it is a career-ending failure<p>It depends highly on the field. In history, sure. The point of getting a history PhD is to become a history professor, and you can't do that if you don't get the PhD; meanwhile, history PhDs don't meaningfully open up any other job prospects, so attempting and failing to get a PhD provides negative value.<p>In CS and many engineering disciplines, there is a long history of people dropping out of PhDs and landing in industry. The industry is therefore much more accustomed to, and accommodating of, people taking this path. Whether it's a maximally efficient use of time is another question, but it's certainly not wasted effort.<p>But I do agree that it's stressful nonetheless, because it still feels like a failure even if it is not one in reality. I wrote about this when I set down my own PhD journey here [1]. In particular, after the control replication (2017) paper, I very nearly quit academia entirely, despite it being my biggest contribution to the field by far.<p>[1]: <a href="https://elliottslaughter.com/2024/02/legion-paper-history" rel="nofollow">https://elliottslaughter.com/2024/02/legion-paper-history</a> (written without any use of LLMs, for anyone who is wondering)</p>
]]></description><pubDate>Sat, 14 Mar 2026 18:50:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47379830</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47379830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47379830</guid></item><item><title><![CDATA[New comment by eslaught in "Shall I implement it? No"]]></title><description><![CDATA[
<p>I have an old account; you can read my comment history and see if my style has changed. No need to take my word for it.</p>
]]></description><pubDate>Fri, 13 Mar 2026 18:00:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47367548</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47367548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47367548</guid></item><item><title><![CDATA[New comment by eslaught in "Shall I implement it? No"]]></title><description><![CDATA[
<p>Iteration is inherent to how computers work. There's nothing new or interesting about this.<p>The question is who prunes the space of possible answers. If the LLM spews things at you until it gets one right, then sure, you're in the scenario you outlined (and a much less interesting one). If it ultimately presents one option to the human, and that option is correct, then that's much more interesting. Even if the process is "monkeys on keyboards", does it matter?<p>There are plenty of optimization and verification algorithms that rely on "try things at random until you find one that works", but before modern LLMs no one accused these algorithms of being monkeys on keyboards, despite that being literally what they are.
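<p>To make that concrete, here is a minimal generate-and-test sketch (a toy of my own, with hypothetical names; not any particular system):<p><pre><code>    import random

    def random_search(propose, verify, max_tries=10_000):
        # Generate-and-test: propose random candidates until the
        # verifier accepts one. The caller only ever sees the
        # accepted answer, never the rejected attempts.
        for _ in range(max_tries):
            candidate = propose()
            if verify(candidate):
                return candidate
        return None

    # Toy usage: find an integer whose square ends in 21.
    print(random_search(
        propose=lambda: random.randrange(1_000_000),
        verify=lambda n: (n * n) % 100 == 21,
    ))
</code></pre>
Whether the proposals come from a pseudorandom generator or an LLM, the human only ever sees the candidate that survived verification.</p>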
]]></description><pubDate>Fri, 13 Mar 2026 17:59:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47367536</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47367536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47367536</guid></item><item><title><![CDATA[New comment by eslaught in "Shall I implement it? No"]]></title><description><![CDATA[
<p>For context, I've been an AI skeptic and am trying as hard as I can to continue to be.<p>I honestly think we've moved the goalposts. I'm saying this because, for the longest time, I thought that the chasm AI couldn't cross was generality. By which I mean that you'd train a system, and it would work in that specific setting, and then you'd tweak just about anything at all, and it would fall over. Basically no AI technique truly generalized for the longest time. The new LLM techniques fall over in their own particular ways too, but it's increasingly difficult for even skeptics like me to deny that they provide meaningful value at least some of the time. And largely that's because they generalize so much better than previous systems (though not perfectly).<p>I've been playing with various models, as well as watching other team members do so. And I've seen Claude identify data races that have sat in our code base for nearly a decade, given a combination of a stack trace, access to the code, and a handful of human-written paragraphs about what the code is doing overall.<p>This isn't just a matter of adding harnesses. The fields of program analysis and program synthesis are old as dirt, and probably thousands of CS PhDs have cut their teeth trying to solve them. All of those systems had harnesses, but they weren't nearly as effective, as general, or as broad as what current frontier LLMs can do. And on top of it all, we're driving LLMs with inherently fuzzy natural language, which by definition requires high generality to avoid falling over simply due to the stochastic nature of how humans write prompts.<p>Now, I agree vehemently with the superficial point that LLMs are "just" text generators. But I think that increasingly misses the point, given the empirical capabilities the models clearly have. The real lesson of LLMs is not that they're somehow not text generators; it's that we as a species have somehow encoded intelligence into human language. And with the new training regimes, we've only just discovered how to unlock it.</p>
]]></description><pubDate>Fri, 13 Mar 2026 06:24:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47361286</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47361286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47361286</guid></item><item><title><![CDATA[New comment by eslaught in "Don't post generated/AI-edited comments. HN is for conversation between humans"]]></title><description><![CDATA[
<p>It's not just about the increase in volume, it's about the delta between the prompt and the generation.<p>If the generation merely restates the prompt (possibly in prettier, cleaner language), then usually it's the case that the prompt is shorter and more direct, though possibly less "correct" from a formal language perspective. I've seen friends send me LLM-generated stuff, and when I asked to see the prompt, the prompts were honestly better. So why bother with the LLM?<p>But if you're using the LLM to generate information that goes beyond the prompt, then it's likely that you don't know what you're talking about. Because if you really did, you'd probably be comfortable with a brief note and a pointer for looking the rest up on one's own. The desire to generate more comes from either laziness or a desire to inflate one's own appearance. In either case, the LLM generation isn't terribly useful, since anyone could get the same result from the prompt alone.<p>So I think LLMs contribute not just to a drowning out of human conversation but to semantic drift, because they encourage those of us who are less self-assured to lean into things without really understanding them. A danger at any time, but certainly one that is more acute at the moment.</p>
]]></description><pubDate>Fri, 13 Mar 2026 06:11:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47361206</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47361206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47361206</guid></item><item><title><![CDATA[New comment by eslaught in "The Collective Ambition Behind Odysseus, a Game-Changing Sci-Fi Larp"]]></title><description><![CDATA[
<p>Is there something like this, but without (or with minimal) spoilers?</p>
]]></description><pubDate>Mon, 09 Mar 2026 16:34:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47311340</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=47311340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47311340</guid></item><item><title><![CDATA[New comment by eslaught in "We mourn our craft"]]></title><description><![CDATA[
<p>It's that, but it's also that the incentives are misaligned.<p>How many supposed "10x" coders actually produced unreadable code that no one else could maintain? And yet the effort to produce that code gets lauded, while the nightmare maintenance of said code is somehow regarded as unimpressive, despite being massively more difficult.<p>I worry that we're creating a world where it is becoming easy, even trivial, to be that dysfunctional "10x" coder, and dramatically harder to be the competent maintainer. And the existence of AI tools will reinforce the culture gap rather than reduce it.</p>
]]></description><pubDate>Sat, 07 Feb 2026 21:59:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46928560</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=46928560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46928560</guid></item><item><title><![CDATA[New comment by eslaught in "Employers, please use postmarked letters for job applications (2025)"]]></title><description><![CDATA[
<p>Not the same industry, but at least one literary agent does this: if you physically print and mail your book proposal and they reject you, they will respond with a short but polite physical rejection letter.<p>But I think it's a generational thing. The younger agents I know of just shut down all their submissions when they get overwhelmed, or they start requiring everyone to physically meet them at a conference first.</p>
]]></description><pubDate>Fri, 30 Jan 2026 01:37:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46819563</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=46819563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46819563</guid></item><item><title><![CDATA[New comment by eslaught in "After two years of vibecoding, I'm back to writing by hand"]]></title><description><![CDATA[
<p>But this is exactly my point: if your "code" is different from your "pseudocode", something is wrong. There's a reason why people call Lisp "executable pseudocode", and it's because it shrinks the gap between the human-level description of what needs to happen and the text that is required to actually get there. (There will always be a gap, because no one understands the requirements perfectly. But at least it won't be exacerbated by irrelevant details.)<p>Reading the prompt example half a dozen levels up reminds me of Greenspun's tenth rule:<p>> Any sufficiently complicated C++ program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. [1]<p>But now the "program" doesn't even have formal semantics and isn't a permanent artifact. It's like running a compiler, throwing away the source program, and hand-editing the machine code whenever you don't like what it does. To me that seems crazy and misses many of the most important lessons from the last half-century.<p>[1]: <a href="https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule" rel="nofollow">https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule</a> (paraphrased to use C++, but applies equally to most similar languages)</p>
]]></description><pubDate>Tue, 27 Jan 2026 17:08:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46782884</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=46782884</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46782884</guid></item><item><title><![CDATA[New comment by eslaught in "After two years of vibecoding, I'm back to writing by hand"]]></title><description><![CDATA[
<p>But this is what I don't get. Writing code is <i>not that hard</i>. If the act of physically typing my code out is a bottleneck to my process, <i>I am doing something wrong</i>. Either I've under-abstracted, or over-abstracted, or flat out have the wrong abstractions. It's time to sit back, figure out why there's a mismatch with the problem domain, and come back at it from another direction.<p>To me this reads like people have learned to put up with poor abstractions for so long that having the LLM take care of them feels like an improvement? It's the classic C++ vs. Lisp discussion all over again, but people have forgotten the old lessons.</p>
]]></description><pubDate>Tue, 27 Jan 2026 06:31:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46776232</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=46776232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46776232</guid></item><item><title><![CDATA[New comment by eslaught in "Two Concepts of Intelligence"]]></title><description><![CDATA[
<p>Here's a paper from September 2025 that compares programs on (a) semantic equivalence (do they do the same thing?) and (b) syntactic similarity (are the parse trees similar?).<p>LLMs are more likely to judge programs (correctly or incorrectly) as being semantically equivalent when they are syntactically similar, even though syntactically similar programs can actually do drastically different things. In fact, LLMs are generally pretty bad at program equivalence, suggesting they don't really "understand" what programs are doing, even for a fairly mechanical definition of "understand".<p><a href="https://arxiv.org/pdf/2502.12466" rel="nofollow">https://arxiv.org/pdf/2502.12466</a><p>While this is a point-in-time study and I'm sure all these tools will evolve, it matches my intuition for how LLMs behave and the kinds of mistakes they make.<p>By comparison, the approach in this article seems narrow and doesn't explain a whole lot; more importantly, it doesn't give us any hypotheses we can actually test against these systems.
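<p>To illustrate the syntactic/semantic gap with a toy example of my own (not from the paper):<p><pre><code>    def count_down(n):
        # Terminates for any n: the loop exits as soon as n is not positive.
        while n > 0:
            n -= 1
        return n

    def count_down_twin(n):
        # Nearly identical parse tree, drastically different behavior:
        # for odd or negative n, this loop skips over 0 and never terminates.
        while n != 0:
            n -= 2
        return n
</code></pre>
The converse holds too: a closed form like (n - 1) * n * (2 * n - 1) // 6 computes the same function as a loop summing i * i for i from 0 to n - 1, despite sharing almost no syntax. Judging equivalence by surface similarity fails in both directions.</p>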
]]></description><pubDate>Mon, 19 Jan 2026 21:34:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46684806</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=46684806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46684806</guid></item><item><title><![CDATA[New comment by eslaught in "Toll roads are spreading in America"]]></title><description><![CDATA[
<p>If you drive in the FasTrak lanes without an account, you pay the fee + a $10 surcharge (for a first-time violation), and it goes up on the second violation:<p><a href="https://www.bayareafastrak.org/en/help/invoices-and-penalties-faqs.shtml#faq-3" rel="nofollow">https://www.bayareafastrak.org/en/help/invoices-and-penaltie...</a><p>I'm having a hard time finding a citation, but according to Google's AI summary, if the second violation goes unpaid they put a hold on your DMV registration, and the fine itself can be sent to a collection agency.<p>I agree that, empirically, I see people driving through the lane without a tag (i.e., no number shows up in the overhead display), but maybe these are people with FasTrak accounts being lazy?</p>
]]></description><pubDate>Sun, 28 Dec 2025 03:42:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46408200</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=46408200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46408200</guid></item><item><title><![CDATA[New comment by eslaught in "Rust GCC backend: Why and how"]]></title><description><![CDATA[
<p>The other answers are great, but let me just add that C++ <i>cannot</i> be parsed with conventional LL/LALR/LR parsers, because the syntax is ambiguous and requires disambiguation via type checking (i.e., there may be multiple parse trees, but at most one will type check). The classic example is the statement <i>a * b;</i>, which is either an expression multiplying a and b or a declaration of b as a pointer to type a, depending on what a names.<p>There was some research on parsing C++ with GLR, but I don't think it ever made it into production compilers.<p>Other, saner languages with unambiguous grammars may still choose to hand-write their parsers for all the reasons mentioned in the sibling comments. However, I would note that, even when using a parsing library, almost every compiler in existence will use its own AST, and not reuse the parse tree generated by the parsing library. That's something you would only ever do in a compiler class.<p>Also, I wouldn't say that frontend/backend is an evolution of previous terminology; it's just that parsing is not considered an "interesting" problem by most of the community, so the focus has moved elsewhere (to everything from AST design through optimization and code generation).
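<p>Here is a minimal sketch of that type-driven disambiguation in Python (a toy of my own, nothing like a real C++ frontend):<p><pre><code>    def parse_statement(stmt, type_names):
        # Toy disambiguation of the C++-style ambiguity in "a * b;".
        a, _star, b = stmt.rstrip(";").split()
        # Build both candidate parse trees...
        candidates = [("multiply-expression", a, b),
                      ("pointer-declaration", a, b)]
        # ...then "type check": a declaration needs a to name a type,
        # while a multiplication needs a to name a value.
        def checks(tree):
            is_type = tree[1] in type_names
            return is_type if tree[0] == "pointer-declaration" else not is_type
        survivors = [t for t in candidates if checks(t)]
        assert len(survivors) == 1, "still ambiguous"
        return survivors[0]

    print(parse_statement("a * b;", type_names={"a"}))  # pointer-declaration
    print(parse_statement("a * b;", type_names=set()))  # multiply-expression
</code></pre>
GLR parsers do essentially this at scale: carry every viable parse forward and let later analysis prune all but one.</p>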
]]></description><pubDate>Tue, 16 Dec 2025 17:03:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46291049</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=46291049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46291049</guid></item><item><title><![CDATA[New comment by eslaught in "Deprecate like you mean it"]]></title><description><![CDATA[
<p>The solution I've found is to make using the API a hard error with an explicitly temporary and obnoxiously named workaround variable.<p><pre><code>    WORKAROUND_URLLIB3_HEADER_DEPRECATION_THIS_IS_A_TEMPORARY_FIX_CHANGE_YOUR_CODE=1 python3 ...
</code></pre>
It's loud; there's an out if you need your code working <i>right now</i>; and when you finally act on the deprecation, if anyone complains, they don't really have a leg to stand on.<p>Of course you can layer it with warnings as a first stage, but ultimately it's either this or remove the code outright (or never remove it and put up with whatever burden that imposes).
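<p>For illustration, a minimal sketch of what such a guard might look like (the variable name is from the example above; the function and messages are hypothetical, not urllib3's actual code):<p><pre><code>    import os
    import warnings

    _OVERRIDE = "WORKAROUND_URLLIB3_HEADER_DEPRECATION_THIS_IS_A_TEMPORARY_FIX_CHANGE_YOUR_CODE"

    def deprecated_api(*args, **kwargs):
        # Hard error by default; the obnoxious env var is the only escape hatch.
        if os.environ.get(_OVERRIDE) != "1":
            raise RuntimeError(
                "This API is deprecated; set " + _OVERRIDE + "=1 "
                "to keep it working temporarily, then migrate your code."
            )
        # Even with the override, keep nagging.
        warnings.warn("Temporary workaround in use; this API will be removed.",
                      DeprecationWarning, stacklevel=2)
        ...  # original implementation goes here
</code></pre>
The override is grep-able, screams "temporary" at anyone who reads a deploy script, and leaves a paper trail for the eventual removal.</p>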
]]></description><pubDate>Thu, 11 Dec 2025 20:39:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46236822</link><dc:creator>eslaught</dc:creator><comments>https://news.ycombinator.com/item?id=46236822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46236822</guid></item></channel></rss>