<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: materiallie</title><link>https://news.ycombinator.com/user?id=materiallie</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 16:22:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=materiallie" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by materiallie in "Chernobyl wildlife forty years on"]]></title><description><![CDATA[
<p>I thought the show was horrible. It was moralistic, quite on the nose, and the dialogue was pretty corny. There were a lot of obvious appeals to the average NYT-and-Atlantic-type viewer, which is surely the main factor behind its critical acclaim.</p>
]]></description><pubDate>Mon, 27 Apr 2026 06:33:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47918369</link><dc:creator>materiallie</dc:creator><comments>https://news.ycombinator.com/item?id=47918369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47918369</guid></item><item><title><![CDATA[New comment by materiallie in "Systems Programming with Zig"]]></title><description><![CDATA[
<p>Zig certainly has a lot of interesting features and good ideas, but I honestly don't see the point of starting a major project with it. With alternatives like Rust and Swift available, memory safety is simply table stakes these days.<p>Yes, I know Zig does a lot of things to help the programmer avoid mistakes. But the last time I looked, it was still <i>possible</i> to make mistakes.<p>The only time I would pick a language in the C, C++, or Rust space is when I am planning to build a multi-million-line, performance-sensitive project, in which case I want total memory safety. For most "good enough" use cases, garbage collectors work fine and I wouldn't bother with a systems programming language at all.<p>That leaves me a little confused about Zig's value proposition. I suppose it's a "better C". But like I said, for serious industry projects starting in 2025, memory safety is table stakes.<p>This isn't meant as a criticism of Zig or of all the hard work put into the language. I'm all for interesting projects, and there are certainly a lot of interesting ideas in Zig. I'm just not going to use them until they're present in a memory-safe language.<p>I am actually a bit surprised by Zig's popularity on this website, given the strong dislike of Go here. From my perspective, the two languages are very similar in that both decided to "unsolve already solved problems". Meaning, we <i>know</i> how to guarantee memory safety; multiple programming languages have implemented it in a variety of ways. Why would I adopt a new language that reopens a problem Rust, Java, or Swift already solved for me, and takes away a feature (memory safety) that I already have?</p>
]]></description><pubDate>Sat, 04 Oct 2025 16:19:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45474447</link><dc:creator>materiallie</dc:creator><comments>https://news.ycombinator.com/item?id=45474447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45474447</guid></item><item><title><![CDATA[New comment by materiallie in "Why LLMs can't really build software"]]></title><description><![CDATA[
<p>This is a very friendly and cordial response, given that the parent comment was implying that the creators of Zed don't actually know how to build software, based, I suppose, on their credentials building Rails CRUD apps.</p>
]]></description><pubDate>Fri, 15 Aug 2025 05:11:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44908833</link><dc:creator>materiallie</dc:creator><comments>https://news.ycombinator.com/item?id=44908833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44908833</guid></item><item><title><![CDATA[New comment by materiallie in "Use Your Type System"]]></title><description><![CDATA[
<p>Put another way: errors tend to either be handled "close by" or "far away", but rarely "in the middle".<p>So Java's checked exceptions force you to write verbose and pointless code in all the wrong places (the "in the middle" code that can't handle and doesn't care about the exception).</p>
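A minimal Go sketch of the same "close by / far away" shape (all function names here are hypothetical): the middle layer can neither handle the error nor ignore it, so it only propagates, which is exactly the layer where Java's checked exceptions force extra `throws` declarations and try/rethrow boilerplate.

```go
package main

import (
	"errors"
	"fmt"
)

// "Close by": the lowest layer is where the error originates.
func readConfig(path string) (string, error) {
	if path == "" {
		return "", errors.New("empty path")
	}
	return "contents of " + path, nil
}

// "In the middle": this layer can't handle the error and doesn't
// care about it, so it only wraps and propagates. In Java, every
// such layer must also declare the checked exception in its signature.
func loadSettings(path string) (string, error) {
	cfg, err := readConfig(path)
	if err != nil {
		return "", fmt.Errorf("loading settings: %w", err)
	}
	return cfg, nil
}

// "Far away": the top level is where the error is finally handled.
func main() {
	if _, err := loadSettings(""); err != nil {
		fmt.Println("handled at the top:", err)
	}
}
```

Go makes the middle-layer propagation explicit with `if err != nil` returns, but the underlying pattern is the same: the layers between producer and handler add no decisions, only plumbing.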
]]></description><pubDate>Thu, 24 Jul 2025 19:58:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44675308</link><dc:creator>materiallie</dc:creator><comments>https://news.ycombinator.com/item?id=44675308</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44675308</guid></item><item><title><![CDATA[New comment by materiallie in "LLM Inevitabilism"]]></title><description><![CDATA[
<p>It feels like there's a lot of shifting goalposts. A year ago, the hype was that knowledge work would cease to exist by 2027.<p>Now we are trying to hype up enhanced email autocomplete and data analysis as revolutionary?<p>I agree that those things are useful. But it's not really addressing the criticism. I would have zero criticisms of AI marketing if it was "hey, look at this new technology that can assist your employees and make them 20% more productive".<p>I think there's also a healthy dose of skepticism after the internet and social media age. Those were also society altering technologies that purported to democratize the political and economic system. I don't think those goals were accomplished, although without a doubt many workers and industries were made more productive. That effect is definitely real and I'm not denying that.<p>But in other areas, the last 3 decades of technological advancement have been a resounding failure. We haven't made a dent in educational outcomes or intergenerational poverty, for instance.</p>
]]></description><pubDate>Tue, 15 Jul 2025 16:12:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44572675</link><dc:creator>materiallie</dc:creator><comments>https://news.ycombinator.com/item?id=44572675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44572675</guid></item><item><title><![CDATA[New comment by materiallie in "Human coders are still better than LLMs"]]></title><description><![CDATA[
<p>I think benchmark targeting is going to be a serious problem going forward. The recent Nate Silver podcast on poker performance is interesting: basically, the LLM models still suck at playing poker.<p>Poker tests intelligence, so what gives? One interesting thing is that, for whatever reason, poker performance isn't used as a benchmark in the LLM showdown between big tech companies.<p>The models have definitely improved in the past few years. I'm skeptical that there's been a "breakthrough", and I'm growing more skeptical of the exponential growth theory. It looks to me like the big tech companies are just throwing huge compute and engineering budgets at the existing transformer tech to improve benchmarks one by one.<p>I'm sure that if Google allocated ten engineers and a dozen million dollars to improving Gemini's poker performance, it would improve. The idea behind AGI and the exponential growth hypothesis is that you don't have to do that, because the AI gets smarter in a general sense all on its own.</p>
]]></description><pubDate>Fri, 30 May 2025 06:41:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44133564</link><dc:creator>materiallie</dc:creator><comments>https://news.ycombinator.com/item?id=44133564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44133564</guid></item><item><title><![CDATA[New comment by materiallie in "Human coders are still better than LLMs"]]></title><description><![CDATA[
<p>This is my experience, too. As a concrete example, I'll need to write a mapper function to convert between a protobuf type and a Go type. The types are mirror images of each other, and I feed the complete APIs of both into my prompt.<p>I've yet to find an LLM that can reliably generate mapping code from proto.Foo{ID string} to gomodel.Foo{ID string}.<p>It still saves me time, because even 50% accuracy is still half the code I don't have to write myself.<p>But it makes me feel like I'm taking crazy pills whenever I read AI hype. I'm open to the idea that I'm prompting wrong, need a better workflow, etc. But I'm not a luddite; I've "reached up and put in the work" and am always trying to learn new tools.</p>
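The mapping code described above might look like this Go sketch, using hypothetical stand-in structs for the generated proto.Foo and the domain gomodel.Foo (real protobuf-generated types would have more fields and accessor methods):

```go
package main

import "fmt"

// ProtoFoo is a hypothetical stand-in for a protobuf-generated struct
// (generated Go code typically names the field Id, not ID).
type ProtoFoo struct {
	Id   string
	Name string
}

// ModelFoo is a hypothetical stand-in for the hand-written domain type.
type ModelFoo struct {
	ID   string
	Name string
}

// FooFromProto copies each field across one by one: purely mechanical
// code, which is why it feels like something an LLM should nail every time.
func FooFromProto(p *ProtoFoo) *ModelFoo {
	if p == nil {
		return nil
	}
	return &ModelFoo{
		ID:   p.Id,
		Name: p.Name,
	}
}

func main() {
	m := FooFromProto(&ProtoFoo{Id: "42", Name: "widget"})
	fmt.Println(m.ID, m.Name)
}
```

The subtlety an LLM has to get right is exactly the kind of near-duplicate detail shown here: matching `Id` against `ID`, preserving nil, and not inventing or dropping fields.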
]]></description><pubDate>Fri, 30 May 2025 06:08:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44133391</link><dc:creator>materiallie</dc:creator><comments>https://news.ycombinator.com/item?id=44133391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44133391</guid></item></channel></rss>