<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: BatmanAoD</title><link>https://news.ycombinator.com/user?id=BatmanAoD</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 20:14:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=BatmanAoD" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by BatmanAoD in "Why users cannot create Issues directly"]]></title><description><![CDATA[
<p>This seems to be missing the point. Users sometimes see error messages, and those messages range from good to bad. Yes, software engineers should strive to make error behavior graceful, but of all the imperfect things in software, error handling is one of the least perfect, so users do encounter ungraceful errors.<p>In those cases (and even in some of the more "graceful" ones), we can't always expect the user to know what an error message means.</p>
]]></description><pubDate>Sat, 03 Jan 2026 03:47:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46472602</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=46472602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46472602</guid></item><item><title><![CDATA[New comment by BatmanAoD in "Face it: you're a crazy person"]]></title><description><![CDATA[
<p>> Reading C++ for dummies even though I had untreated ADHD and couldn’t sit still long enough to get much past std::cout.<p>You may have lucked out. I also didn't get terribly far in that book, but I thought it was fairly weird when I tried to read it, and after majoring in CS in college and eventually reading some very good books on programming, I believe I was entirely justified in not liking that one.</p>
]]></description><pubDate>Mon, 04 Aug 2025 02:22:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44781572</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=44781572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44781572</guid></item><item><title><![CDATA[New comment by BatmanAoD in "LLM Inevitabilism"]]></title><description><![CDATA[
<p>Like a lot of blog posts, this presents a premise worth exploring but never critically explores it.<p>Yes, "inevitabilism" is a thing, both in tech and in politics. But, crucially, it's not always wrong! Other comments have pointed out examples, such as the internet in the 90s. But when considering new cultural and technological developments that seem like a glimpse of the future, how do we know whether they're an inevitability or not?<p>The post says:<p>> what I’m most certain of is that we have choices about what our future should look like, and how we choose to use machines to build it.<p>To me, that sounds like mere wishful thinking. Yes, sometimes society can turn back the tide of harmful developments; for instance, the ozone layer is well on its way to complete recovery. Other times, even when public opinion is mixed, as with bitcoin, the technology does become quite successful, though not quite as ubiquitous as its most fervent adherents expect. So how do we know which category LLM usage falls into? I don't know the answer, because I think it's a difficult thing to know in advance.</p>
]]></description><pubDate>Wed, 16 Jul 2025 03:43:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44578483</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=44578483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44578483</guid></item><item><title><![CDATA[New comment by BatmanAoD in "America underestimates the difficulty of bringing manufacturing back"]]></title><description><![CDATA[
<p>If 20% of people really think they'd be better off as factory workers, that's actually kind of a lot. Can you imagine if 20% of the working population really did work in factories? That's an enormous number.</p>
]]></description><pubDate>Thu, 17 Apr 2025 13:12:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=43716306</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=43716306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43716306</guid></item><item><title><![CDATA[New comment by BatmanAoD in "Maestro: A Linux-compatible kernel in Rust"]]></title><description><![CDATA[
<p>...okay, so what is "PI lockfree"?</p>
]]></description><pubDate>Sat, 06 Jan 2024 01:41:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=38887469</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=38887469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38887469</guid></item><item><title><![CDATA[New comment by BatmanAoD in "Maestro: A Linux-compatible kernel in Rust"]]></title><description><![CDATA[
<p>Nobody said anything about RISC-V being "perfect" or not. The problem isn't how good RISC-V is or isn't; it's that your desire for software to target one and only one type of hardware just doesn't make any sense. That's not how computers have ever worked.<p>By the way, what do you mean by "PI lockfree"? Googling "ISA PI lockfree" just leads me to...another hacker news thread where you're arguing that RISC-V should replace everything.<p>Anyway, yes, please do "investigate how much out-of-the-box-thinking and disruptive this is" before continuing to have these inane arguments.</p>
]]></description><pubDate>Fri, 05 Jan 2024 05:42:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=38876093</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=38876093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38876093</guid></item><item><title><![CDATA[New comment by BatmanAoD in "Maestro: A Linux-compatible kernel in Rust"]]></title><description><![CDATA[
<p>So the "right way" is to replace <i>all hardware</i> with new hardware, and the second-best solution is for CISC systems to emulate a specific RISC architecture? And you think this will be <i>more</i> maintainable, performant, etc? Do you have even a shred of evidence that this makes any sense at all, beyond "RISC is a good standard"?</p>
]]></description><pubDate>Thu, 04 Jan 2024 06:06:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=38863641</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=38863641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38863641</guid></item><item><title><![CDATA[New comment by BatmanAoD in "Maestro: A Linux-compatible kernel in Rust"]]></title><description><![CDATA[
<p>Are you proposing a kernel that would only run on RISC-V hardware, or expecting that people would run some kind of emulator?<p>...or do you think that because RISC-V is "standard", assembly for RISC-V would run on any hardware?</p>
]]></description><pubDate>Wed, 03 Jan 2024 14:31:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=38854447</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=38854447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38854447</guid></item><item><title><![CDATA[New comment by BatmanAoD in "IDEs we had 30 years ago"]]></title><description><![CDATA[
<p>Any good Vim-emulator extension has macro support. VSCode also has an extension that lets you run the <i>actual neovim server</i> to manage your text buffer.<p>The settings GUI in VSCode is just an auto-generated layer over raw JSON files. You can even configure it to skip the GUI and open the JSON files directly when you open settings.</p>
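For reference, this is the setting in question (a sketch; the option name and value are the current VSCode ones, so double-check against your version):

```jsonc
{
  // Open the raw settings.json file instead of the auto-generated GUI.
  "workbench.settings.editor": "json"
}
```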
]]></description><pubDate>Sat, 30 Dec 2023 15:50:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=38816125</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=38816125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38816125</guid></item><item><title><![CDATA[New comment by BatmanAoD in "Why Golang instead of Rust to develop the Krater desktop app"]]></title><description><![CDATA[
<p>Precisely true, but coming from a PHP background, I assume there's not much to prepare you for this.</p>
]]></description><pubDate>Sat, 11 Nov 2023 07:15:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=38228221</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=38228221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38228221</guid></item><item><title><![CDATA[New comment by BatmanAoD in "Why Golang instead of Rust to develop the Krater desktop app"]]></title><description><![CDATA[
<p>What are you talking about? Even the standard library is littered with `any` and reflection. Look at how JSON serialization works.</p>
]]></description><pubDate>Sat, 11 Nov 2023 07:14:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=38228216</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=38228216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38228216</guid></item><item><title><![CDATA[New comment by BatmanAoD in "A four year plan for async Rust"]]></title><description><![CDATA[
<p>That's...not...how threads or async work...?<p>> Blocking I/O executed on another thread, with a callback to execute when done, becomes async I/O (from the user's PoV).<p>That's not what we're talking about when we discuss languages with async I/O, though. That's just bog-standard synchronous I/O with multithreading.<p>> The read/write operations are still potentially blocking, so for efficiency you need multiple threads.<p>That doesn't actually follow. The entire point of language-level async I/O is to be able to continue doing other work while waiting for the kernel to finish an I/O operation, <i>without</i> spawning a new OS thread just for this purpose.</p>
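The distinction can be sketched in Go (an illustrative sketch, with invented helper names; `readBlocking` stands in for any blocking call): pushing blocking I/O onto another goroutine gives you a callback-style API, but nothing about the I/O itself has become non-blocking.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// readBlocking stands in for a blocking I/O call. For a real blocking
// syscall, a thread of execution would be tied up for the full duration.
func readBlocking() string {
	time.Sleep(50 * time.Millisecond)
	return "data"
}

// withCallback gives the *appearance* of async I/O by running the
// blocking call concurrently and invoking a callback when it finishes.
// This is ordinary synchronous I/O plus multithreading, not
// language-level async: something still sits parked in the blocking call.
func withCallback(done func(string)) {
	go func() {
		done(readBlocking())
	}()
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	withCallback(func(s string) {
		fmt.Println("callback got:", s)
		wg.Done()
	})
	// The caller is free to do other work in the meantime, but only
	// because a separate unit of execution is blocked on its behalf.
	fmt.Println("caller keeps working")
	wg.Wait()
}
```

Language-level async I/O inverts this: one thread registers interest in many pending operations with the kernel and resumes each task only when its I/O is ready, with no extra thread parked per operation.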
]]></description><pubDate>Tue, 07 Nov 2023 20:56:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=38182705</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=38182705</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38182705</guid></item><item><title><![CDATA[New comment by BatmanAoD in "Reddit App – Suspicious high number of recent 5 star, one word reviews"]]></title><description><![CDATA[
<p>Okay, that's quite funny. Thank you.</p>
]]></description><pubDate>Sat, 17 Jun 2023 04:57:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=36367428</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=36367428</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36367428</guid></item><item><title><![CDATA[New comment by BatmanAoD in "The Rust I wanted had no future"]]></title><description><![CDATA[
<p>That's...not precisely true. The C++ standard doesn't specify how std::async works, and for a while GCC just ran the operation sequentially, and later both GCC and Clang launched new OS threads by default. <a href="https://stackoverflow.com/q/10059239/1858225" rel="nofollow">https://stackoverflow.com/q/10059239/1858225</a></p>
]]></description><pubDate>Fri, 09 Jun 2023 06:18:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=36253993</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=36253993</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36253993</guid></item><item><title><![CDATA[New comment by BatmanAoD in "The age of average"]]></title><description><![CDATA[
<p>Honestly, this is better than the article itself.</p>
]]></description><pubDate>Thu, 30 Mar 2023 13:51:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=35372704</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=35372704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35372704</guid></item><item><title><![CDATA[New comment by BatmanAoD in "C Isn't a Programming Language Anymore"]]></title><description><![CDATA[
<p>Well, yeah. Hence the rest of my comment. And if you don't go through glibc, then you still must follow the C ABI rules (since that's the only thing the kernel understands), and you are at risk of having your calls break when the kernel is updated (it's already been mentioned elsewhere in these comments that this actually happened to Go on Mac).</p>
]]></description><pubDate>Thu, 17 Mar 2022 18:35:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=30714432</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=30714432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30714432</guid></item><item><title><![CDATA[New comment by BatmanAoD in "C Isn't a Programming Language Anymore"]]></title><description><![CDATA[
<p>Sure. You can re-implement everything starting with the kernel, as long as you don't have to interface with any of the C microcode on the hardware itself. And, yes, people are doing this, for instance with Redox OS.<p>But if you actually want to program something usable in conjunction with existing software, such as Linux, you need to use the C ABI. There is no alternative.</p>
]]></description><pubDate>Thu, 17 Mar 2022 18:33:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=30714412</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=30714412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30714412</guid></item><item><title><![CDATA[New comment by BatmanAoD in "C Isn't a Programming Language Anymore"]]></title><description><![CDATA[
<p>It's "organic" because, as this article is pointing out, creating alternatives is really difficult due to this exact lock-in, both at the OS level and at the hardware-vendor level.<p>You shouldn't need a "miracle language" or even an "all-encompassing solution" to have a chance to break free of this.</p>
]]></description><pubDate>Thu, 17 Mar 2022 06:46:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=30708344</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=30708344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30708344</guid></item><item><title><![CDATA[New comment by BatmanAoD in "C Isn't a Programming Language Anymore"]]></title><description><![CDATA[
<p>You still have to follow the C ABI when interfacing with C. That's the exact problem being called out in the post.<p>Zig solves interoperability by incorporating an entire copy of LLVM. Go does do bare metal syscalls, but as mentioned elsewhere in these comments, this has caused breakage on Mac when the kernel was updated, because this interface isn't stable.</p>
]]></description><pubDate>Thu, 17 Mar 2022 06:44:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=30708336</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=30708336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30708336</guid></item><item><title><![CDATA[New comment by BatmanAoD in "C Isn't a Programming Language Anymore"]]></title><description><![CDATA[
<p>It runs on all hardware because hardware manufacturers support it, which they essentially must do because that's what's expected. It's a self-fulfilling prophecy (and arguably a vicious cycle).</p>
]]></description><pubDate>Thu, 17 Mar 2022 03:24:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=30707324</link><dc:creator>BatmanAoD</dc:creator><comments>https://news.ycombinator.com/item?id=30707324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30707324</guid></item></channel></rss>