<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: beautron</title><link>https://news.ycombinator.com/user?id=beautron</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 03:45:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=beautron" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by beautron in "The 1987 game “The Last Ninja” was 40 kilobytes"]]></title><description><![CDATA[
<p>This is sort of the defining mechanic of these games in my memory. The first thing that pops into my head when I think of Last Ninja is aligning and realigning myself, and squatting, awkwardly and repeatedly (just like a real ninja, lol), until that satisfying new item icon appears. Perhaps surprisingly, these are very fond memories.<p>This mechanic is compounded by not always knowing which graphics in the environment can be picked up, and by hidden items that are inside boxes or otherwise out of sight (I think LN2 had something in a bathroom? You have to position yourself in the doorway and do a squat of faith).<p>The other core memory is the spots that require similarly awkward precision while jumping. These are worse, because each failure loses you one of your limited lives. The combat is also finicky. I remember that if you come into a fight misaligned, your opponent might quickly drain your energy while you fail to get a hit in.<p>At the time, it seemed appropriate to me that being a ninja required such difficult precision. I was also a kid who approached every game non-critically, assuming each game was exactly as it was meant to be. Thus I absolutely loved it, lol.</p>
]]></description><pubDate>Mon, 06 Apr 2026 06:12:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47657512</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=47657512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47657512</guid></item><item><title><![CDATA[New comment by beautron in "Why are we still using Markdown?"]]></title><description><![CDATA[
<p>One problem with the /italics/ form is that it's not convenient when writing about filesystem paths (though I do like its visual indication of slanting).</p>
]]></description><pubDate>Sat, 04 Apr 2026 05:08:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47635976</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=47635976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47635976</guid></item><item><title><![CDATA[New comment by beautron in "Ask HN: Please restrict new accounts from posting"]]></title><description><![CDATA[
<p>Perhaps more proof of work is necessary, but it makes me sad.<p>I still remember creating my HN account. It stands out in my memory, because it was the smoothest, simplest, easiest, and quickest account creation of my life.<p>I had lurked here for around a decade before finally creating an account. Any urge to participate was thwarted by my resistance toward creating accounts (I just hate account creation for some reason). But HN's account creation process was a breath of fresh air. "You mean it can be this easy? Why isn't it this easy everywhere? If I had known how simple it was, I would have created an HN account years earlier, lol."<p>It was especially stunning to me, because I think the discourse on HN is generally of a higher quality than most other sites (which I wouldn't naturally associate with such an easy account creation process).<p>It's my only fond memory of account creation (along with maybe when I created an account on America Online back in the 90s, since that was my first ever account and it was all so novel). Just a few quick seconds, and then I'm already commenting on HN. It was beautiful. I remain delighted.</p>
]]></description><pubDate>Mon, 09 Mar 2026 03:03:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304415</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=47304415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304415</guid></item><item><title><![CDATA[New comment by beautron in "Good software knows when to stop"]]></title><description><![CDATA[
<p>I prefer the perspective that a computer program is akin to a mathematical constant. This was true in the old days. A program I wrote in C64 BASIC, way back in the 1980s, should still work precisely the same today (even on one of the freshly manufactured Commodore 64 Ultimates).<p>You've honed right in on what's changed since the old days: Platform vendors (such as Apple) now continuously inject instability into everything.<p>You might argue that developments such as "changing processor architectures" justify such breaks from stability (though I myself have qualms with one of the richest companies in the world having a general policy of "cutting support and moving on"). But I would point out that Apple (and other vendors) create instability far beyond such potentially-justifiable examples.<p>To me, it appears as if Apple actively works to foster this modern "software is never finished" culture. This is seen perhaps most clearly in the way they remove apps from their iOS store if they haven't been updated recently (even as little as 2 years!): <a href="https://daringfireball.net/linked/2022/04/27/apple-older-games-app-store" rel="nofollow">https://daringfireball.net/linked/2022/04/27/apple-older-gam...</a><p>Shouldn't we be demanding stability from our platforms? Isn't the concept of "finished software" a beautiful one? Imagine if you could write a useful program once, leave it alone for 40 years, and then come back to it, and find that it's still just as useful. Isn't this one of the most valuable potential benefits of software as a premise? Are the things we're trading this for worth it?</p>
]]></description><pubDate>Fri, 06 Mar 2026 08:05:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47272287</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=47272287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47272287</guid></item><item><title><![CDATA[New comment by beautron in "Firefox Getting New Controls to Turn Off AI Features"]]></title><description><![CDATA[
<p>> Not a lot of companies allow disabling their garbage, but FF does.
>
> Can't we be happy with this nice switch?<p>I want my tools to keep working the way they have been working. I don't want to be paranoid that "garbage" (as you put it), or any other controversial changes, are going to be slipped into my tools while I'm not looking.</p>
]]></description><pubDate>Tue, 03 Feb 2026 05:37:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46866932</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=46866932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46866932</guid></item><item><title><![CDATA[New comment by beautron in "Rue: Higher level than Rust, lower level than Go"]]></title><description><![CDATA[
<p>> The typical Go story is to use a bunch of auto generation, so a small change quickly blows up as all of the auto generate code is checked into git. Like easily a 20x blowup.<p>Why do you think the typical Go story is to use a bunch of auto generation? This does not match my experience with the language at all. Most Go projects I've worked on, or looked at, have used little or no code generation.<p>I'm sure there are projects out there with a "bunch" of it, but I don't think they are "typical".</p>
]]></description><pubDate>Mon, 22 Dec 2025 08:15:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46352209</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=46352209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46352209</guid></item><item><title><![CDATA[New comment by beautron in "Thoughts on Go vs. Rust vs. Zig"]]></title><description><![CDATA[
<p>But sometimes it is useful to return both a value and a non-nil error. There might be partial results that you can still do things with despite hitting an error. Or the result value might be information that is useful with or without an error (like how Go's ubiquitous io.Writer interface returns the number of bytes written along with any error encountered).<p>I appreciate that Go tends to avoid making limiting assumptions about what I might want to do with it (such as assuming I <i>don't</i> want to return a value whenever I return a non-nil error). I like that Go has simple, flexible primitives that I can assemble how I want.</p>
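<p>A minimal sketch of that pattern (shortWriter here is an invented stand-in; the contract it follows is io.Writer's, where the byte count is returned alongside any error):</p>
<pre><code>package main

import (
    "errors"
    "fmt"
)

// shortWriter stands in for any io.Writer that can fail midway.
type shortWriter struct{ room int }

func (w *shortWriter) Write(p []byte) (int, error) {
    if len(p) <= w.room {
        w.room -= len(p)
        return len(p), nil
    }
    n := w.room
    w.room = 0
    return n, errors.New("out of room")
}

func main() {
    w := &shortWriter{room: 5}
    n, err := w.Write([]byte("hello, world"))
    // Both halves of the return are useful: n says how far the
    // write got, err says why it stopped.
    fmt.Printf("wrote %d bytes, err = %v\n", n, err) // wrote 5 bytes, err = out of room
}
</code></pre>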
]]></description><pubDate>Fri, 05 Dec 2025 08:35:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46158096</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=46158096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46158096</guid></item><item><title><![CDATA[New comment by beautron in "The Swift SDK for Android"]]></title><description><![CDATA[
<p>I share your unpopular opinion.<p>While I understand that having identical UI elements across apps aids in discoverability, I just love it so much when an app has its own bespoke interface that was clearly made with love.<p>Like you, it might be my love of games that has given me this preference. Would StarCraft II have a better UX if its menus used the standard Windows widgets where applicable? I think certainly not. And I think the same can be true for many non-game apps.</p>
]]></description><pubDate>Sat, 25 Oct 2025 02:16:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45700882</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=45700882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45700882</guid></item><item><title><![CDATA[New comment by beautron in "Leaving Google"]]></title><description><![CDATA[
<p>My perception is that there's always been some dissonance between loving tech for itself and the Silicon Valley venture-capital business model. Conflating tech with its dominant business model enables the paradox where a person deeply in love with tech can also be anti-tech.</p>
]]></description><pubDate>Mon, 12 May 2025 14:44:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=43963591</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=43963591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43963591</guid></item><item><title><![CDATA[New comment by beautron in "Atari Missile Command Game Built Using AI Gemini 2.5 Pro"]]></title><description><![CDATA[
<p>> Going from a project with 1,000 to 1,000,000 lines of code is a tiny leap compared to going from 0 to 1000.<p>Are you sure the leap is tiny? It's a much easier problem to get only 1,000 lines of code to be correct, because the lines only have to be consistent <i>with each other</i>.</p>
]]></description><pubDate>Fri, 11 Apr 2025 16:52:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=43655908</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=43655908</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43655908</guid></item><item><title><![CDATA[New comment by beautron in "A love letter to the CSV format"]]></title><description><![CDATA[
<p>> When there are many and diverse data formats that meet that standard, it seems perverse to use the word "easy" to talk about empirically discovering the quirks in various undocumented dialects and writing custom logic to accommodate them.<p>But the premise of CSV is so simple that there are only four quirks to empirically discover: cell delimiter, row delimiter, quote, escaped-quote.<p>I think it's "easy" to peek at the file and say, "Oh, they use semicolon cell delimiters."<p>And it's likewise "easy" to write the "custom logic", which is about as simple as parsing something directly from a text stream gets. I typically have to stop and think a minute about the quoting, but it's not that bad.<p>If a programmer is practiced at parsing from a text stream (a powerful, general skill that is worth exercising), then I think it is reasonable to think they might find parsing CSV by hand to be easier and quicker than parsing JSON (etc.) with a library.</p>
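<p>As a minimal sketch of that custom logic in Go, suppose the quirks you peeked at were semicolon cell delimiters, newline row delimiters, and doubled double-quotes for escaping (the specifics would change per file; cells spanning multiple lines are assumed not to occur):</p>
<pre><code>package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// parseLine handles one row of a semicolon-delimited file where
// cells may be quoted and a quote is escaped by doubling it ("").
func parseLine(line string) []string {
    var cells []string
    var cell strings.Builder
    inQuotes := false
    for i := 0; i < len(line); i++ {
        c := line[i]
        switch {
        case c == '"' && inQuotes && i+1 < len(line) && line[i+1] == '"':
            cell.WriteByte('"') // escaped quote: emit one, skip one
            i++
        case c == '"':
            inQuotes = !inQuotes
        case c == ';' && !inQuotes:
            cells = append(cells, cell.String())
            cell.Reset()
        default:
            cell.WriteByte(c)
        }
    }
    return append(cells, cell.String())
}

func main() {
    sc := bufio.NewScanner(os.Stdin)
    for sc.Scan() {
        fmt.Println(parseLine(sc.Text()))
    }
}
</code></pre>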
]]></description><pubDate>Fri, 28 Mar 2025 04:42:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43501619</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=43501619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43501619</guid></item><item><title><![CDATA[New comment by beautron in "A love letter to the CSV format"]]></title><description><![CDATA[
<p>> If somebody give you a JSON file that isn't valid JSON, you tell them it isn't valid, and they say "oh, sorry" and give you a new one. That's the standard for "easy."<p>But it isn't that <i>reliably</i> easy with JSON. Sometimes I have clients give me data that I just have to work with, as-is. Maybe it was invalid JSON spat out by some programmer or tool long ago. Maybe it's just from a different department than my contact, which might delay things for days before the bureaucracy gets me a (hopefully) valid JSON.<p>I consider CSV's level of "easy" more reliable.<p>And even <i>valid</i> JSON can be less easy. I've had experiences where writing the high-level parsing for some JSON file, in terms of a JSON library, was less easy and more time-consuming than writing a custom CSV parser.<p>Subjectively, I think programming a CSV parser from basic programming primitives is just more fun and appealing than programming in terms of a JSON library or XML library. And I find the CSV code is often simpler and quicker to write.</p>
]]></description><pubDate>Fri, 28 Mar 2025 03:52:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43501370</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=43501370</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43501370</guid></item><item><title><![CDATA[New comment by beautron in "A love letter to the CSV format"]]></title><description><![CDATA[
<p>I also love CSV for its simplicity. A key part of that love is that it comes from the perspective of me <i>as a programmer</i>.<p>Many of the criticisms of CSV I'm reading here boil down to something like: CSV has no authoritative standard, and everyone implements it differently, which makes it bad as a data interchange format.<p>I agree with those criticisms when I imagine them from the perspective of a user <i>who is not also a programmer</i>. If this user exports a CSV from one program, and then tries to load the CSV into a different program, but it fails, then what good is CSV to them?<p>But from the perspective of a programmer, CSV is great. If a client gives me data to load into some app I'm building for them, then I am very happy when it is in a CSV format, because I know I can quickly write a parser, not by reading some spec, but by looking at the actual CSV file.<p>Parsing CSV is quick and fun <i>if you only care about parsing one specific file</i>. And that's the key: It's <i>so</i> quick and fun that it enables you to just parse anew each time you have to deal with some CSV file. It just doesn't take very long to look at the file, write a row-processing loop, and debug it against the file.<p>The beauty of CSV isn't that it's easy to write a General CSV Parser that parses every CSV file in the wild, but rather that it's easy to write specific CSV parsers on the spot.<p>Going back to our non-programmer user's problem, and revisiting it as a programmer, the situation is now different. If I, a programmer, export a CSV file from one program, and it fails to import into some other program, then as long as I have an example of the CSV format the importing program wants, I can quickly write a translator program to convert between the formats.<p>There's something so appealing to me about simple-to-parse-by-hand data formats. They are very empowering to a programmer.</p>
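<p>A minimal sketch of such a translator, assuming the exporting program used semicolons and the importing program wants standard commas (Go's encoding/csv covers both ends once the quirks are known):</p>
<pre><code>package main

import (
    "encoding/csv"
    "io"
    "log"
    "os"
)

func main() {
    in := csv.NewReader(os.Stdin)
    in.Comma = ';'          // the quirk observed in the source file
    in.FieldsPerRecord = -1 // don't enforce a uniform column count

    out := csv.NewWriter(os.Stdout) // emits standard comma-delimited rows
    defer out.Flush()

    for {
        record, err := in.Read()
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        if err := out.Write(record); err != nil {
            log.Fatal(err)
        }
    }
}
</code></pre>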
]]></description><pubDate>Thu, 27 Mar 2025 06:21:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43490859</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=43490859</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43490859</guid></item><item><title><![CDATA[New comment by beautron in "Hyrum’s Law in Golang"]]></title><description><![CDATA[
<p>I love that the Go project takes compatibility so seriously. And I think taking Hyrum's Law into account is necessary, if what you're serious about is compatibility itself.<p>Being serious about compatibility allows the concept of a piece of software being finished. If I finished writing a book twelve years ago, you could still read it today. But if I finished writing a piece of software twelve years ago, could you still build and run it today? Without having to fix anything? Without having to fix <i>lots</i> of things?<p>> Sure, but now that there's a "correct" way to do this, you don't get to complain that the hacky thing you did needs to keep being supported.<p>But that's the whole point and beauty of Go's compatibility promise. Once you finish getting something working, you finished getting it working. It works.<p>What I don't want is for my programming platform to suddenly say that the way I got the thing working is no longer supported. I am no longer finished getting it working. I will never be finished getting it working.<p>Go is proving that a world with permanently working software is possible (vs. a world with software that breaks over time).</p>
]]></description><pubDate>Fri, 22 Nov 2024 08:06:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=42212001</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=42212001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42212001</guid></item><item><title><![CDATA[New comment by beautron in "Sleep regularity is a stronger predictor of mortality than sleep duration (2023)"]]></title><description><![CDATA[
<p>This matches my experience. I think I have a 25 hour circadian rhythm, which has me always wanting to stay up one hour later than the night before.</p>
]]></description><pubDate>Sat, 02 Nov 2024 05:40:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=42024280</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=42024280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42024280</guid></item><item><title><![CDATA[New comment by beautron in "We're forking Flutter"]]></title><description><![CDATA[
<p>I think the experience of building something atop a framework should absolutely have bearing on how to build the underlying framework.</p>
]]></description><pubDate>Tue, 29 Oct 2024 06:24:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=41979989</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=41979989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41979989</guid></item><item><title><![CDATA[New comment by beautron in "The 4-chan Go programmer"]]></title><description><![CDATA[
<p>> And it doesn't really color the function because you can trivially make it sync again.<p>Yes, but this goes both ways: You can trivially make the sync function async (assuming it's documented as safe for concurrent use).<p>So I would argue that the sync API design is simpler and more natural. Callers can easily set up their own goroutine and channels around the function call if they need or want that. But if they don't need or want that, everything is simpler and they don't even need to think about channels.</p>
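<p>A minimal Go sketch of that caller-side setup (slowUpper is an invented stand-in for any function documented as safe for concurrent use):</p>
<pre><code>package main

import (
    "fmt"
    "strings"
)

// slowUpper is synchronous; nothing in its signature mentions
// goroutines or channels.
func slowUpper(s string) string {
    return strings.ToUpper(s)
}

func main() {
    // The caller opts in to asynchrony with a goroutine and a
    // channel of its own choosing.
    result := make(chan string, 1)
    go func() { result <- slowUpper("hello") }()

    // ... other work can happen here ...

    fmt.Println(<-result)
}
</code></pre>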
]]></description><pubDate>Thu, 29 Aug 2024 06:43:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=41388002</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=41388002</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41388002</guid></item><item><title><![CDATA[New comment by beautron in "The Monospace Web"]]></title><description><![CDATA[
<p>Wow, this is wild! Maintaining perfect justification solely through word choice is... quite a writing constraint, lol. I don't think I've seen this done before.</p>
]]></description><pubDate>Tue, 27 Aug 2024 23:17:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=41374306</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=41374306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41374306</guid></item><item><title><![CDATA[New comment by beautron in "I learned Vulkan and wrote a small game engine with it"]]></title><description><![CDATA[
<p>Didn't Unity and Epic have people working directly on the Vulkan specification? I remember reading a comment (maybe here!), way back during Vulkan's original release, where someone was screaming about how Vulkan was a conspiracy to make us dependent on giant, multi-million-dollar game engines.<p>I don't know about any conspiring, but I've thought about that comment often, because the result appears the same: The barrier to in-house game engine development has been raised further.</p>
]]></description><pubDate>Fri, 07 Jun 2024 07:05:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=40606034</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=40606034</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40606034</guid></item><item><title><![CDATA[New comment by beautron in "Ask HN: 30y After 'On Lisp', PAIP etc., Is Lisp Still "Beating the Averages"?"]]></title><description><![CDATA[
<p>> Common Lisp's string support certainly feels like it comes from a world where allocating memory is expensive, and something the programmer should be aware of, and have the option to reuse buffers instead of allocating additional memory...<p>We are still in that world. I do game programming, and am convinced that if a game programmer isn't aware of their memory allocations, then their game <i>will</i> drop frames. This is true today, even at 60 Hz (and things are moving towards 144 Hz, 240 Hz, and beyond). Reusing buffers is essential.<p>I think that being aware of when and where your memory comes from is the foundation of performance. The discipline of handling memory carefully tends to naturally lead toward making more performant decisions on everything else.</p>
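<p>A minimal Go sketch of the buffer-reuse idea (the capacity and workload are invented for illustration):</p>
<pre><code>package main

import "fmt"

// scratch is allocated once and reused every frame by resetting
// its length rather than allocating a fresh slice.
var scratch = make([]float32, 0, 4096)

func frame(n int) {
    scratch = scratch[:0] // keep the capacity; no new allocation
    for i := 0; i < n; i++ {
        scratch = append(scratch, float32(i)*0.5)
    }
    // ... hand scratch to the renderer ...
}

func main() {
    for f := 0; f < 3; f++ {
        frame(1000) // steady state: zero allocations per frame
    }
    fmt.Println(len(scratch), cap(scratch)) // 1000 4096
}
</code></pre>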
]]></description><pubDate>Wed, 05 Jun 2024 06:40:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=40582015</link><dc:creator>beautron</dc:creator><comments>https://news.ycombinator.com/item?id=40582015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40582015</guid></item></channel></rss>