<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: thinkharderdev</title><link>https://news.ycombinator.com/user?id=thinkharderdev</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 10:08:02 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=thinkharderdev" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by thinkharderdev in "Significant raise of reports"]]></title><description><![CDATA[
<p>I think they are saying what you want them to say. In the past they got a bunch of AI slop and now they are getting a lot of legit bug reports. The implication being that the AI got better at finding (and writing reports of) real bugs.</p>
]]></description><pubDate>Thu, 02 Apr 2026 17:12:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47617200</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=47617200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47617200</guid></item><item><title><![CDATA[New comment by thinkharderdev in "So where are all the AI apps?"]]></title><description><![CDATA[
<p>Are you saying Maccy was vibe-coded or that it was written in Python? I don't think either is true. I've definitely been using it (you're right, it's great!) since before vibe-coding was a thing, and looking at the GitHub it seems to be 100% Swift.<p>> It's like a C fanatic saying "No useful software can be made using Python", and then asking for a counterexample<p>At which point you could provide them many, many counterexamples?<p>I like AI coding assistants as much as the next red-blooded SWE and find them incredibly useful and a genuine productivity booster, but the claims of 10/100/1000x productivity boosts seem unsupported by evidence AFAICT. I certainly know I'm not 10x as productive, nor do any of my teammates who have embraced AI seem to be 10x more productive.</p>
]]></description><pubDate>Wed, 25 Mar 2026 21:15:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47523376</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=47523376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47523376</guid></item><item><title><![CDATA[New comment by thinkharderdev in "So where are all the AI apps?"]]></title><description><![CDATA[
<p>> This is flat-earther level<p>Ok, so do you have a counterexample?</p>
]]></description><pubDate>Tue, 24 Mar 2026 18:08:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47506765</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=47506765</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47506765</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Apache Arrow is 10 years old"]]></title><description><![CDATA[
<p>This will obviously depend on which implementation you use. Using the Rust arrow-rs crate you at least get panics when you overflow max buffer sizes. But one of my enduring annoyances with Arrow is that it uses signed integer types for buffer offsets and the like. I understand why it has to be that way, since it's intended to be cross-language and not all languages have unsigned integer types. But it does lead to lots of very weird bugs when you are working in a native language and casting back and forth between signed and unsigned types. I spent a very frustrating day tracking down this one in particular: <a href="https://github.com/apache/datafusion/issues/15967" rel="nofollow">https://github.com/apache/datafusion/issues/15967</a></p>
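To make the failure mode concrete, here is a hypothetical sketch (not the actual DataFusion bug linked above) of how a signed offset type bites you when a length only fits in the unsigned type:

```rust
// Hypothetical illustration of the signed-offset hazard: a buffer length
// that exceeds i32::MAX wraps negative when cast to the signed offset type,
// and any arithmetic or comparison done on the signed value is then wrong.
fn main() {
    let len: u32 = 3_000_000_000; // larger than i32::MAX (2_147_483_647)
    let offset = len as i32;      // `as` casts wrap: this goes negative
    assert!(offset < 0);

    // Casting back recovers the bit pattern, so the bug hides until some
    // signed comparison or addition happens in between.
    let round_trip = offset as u32;
    assert_eq!(round_trip, len);

    println!("len = {len}, signed offset = {offset}");
}
```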
]]></description><pubDate>Thu, 12 Feb 2026 19:25:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46993734</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46993734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46993734</guid></item><item><title><![CDATA[New comment by thinkharderdev in "AI Doesn't Reduce Work–It Intensifies It"]]></title><description><![CDATA[
<p>There's an old saying among cyclists attributed to Greg Lemond: "It doesn't get easier, you just go faster"</p>
]]></description><pubDate>Mon, 09 Feb 2026 16:39:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46947345</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46947345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46947345</guid></item><item><title><![CDATA[New comment by thinkharderdev in "How Jeff Bezos Brought Down the Washington Post"]]></title><description><![CDATA[
<p>> Keep in mind, our parents (age specific) and/or their parents' parents paid for news and didn't question that setup<p>I don't think this is quite right. Our parents paid for the newspaper, but the newspaper was basically the internet of their time: that is where they got sports scores, movie/TV listings, etc. The fact that this was bundled with hard news was mostly a side-effect.</p>
]]></description><pubDate>Thu, 05 Feb 2026 11:54:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46898678</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46898678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46898678</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Jepsen: NATS 2.12.1"]]></title><description><![CDATA[
<p>> To have better performance in benchmarks<p>Yes, exactly.</p>
]]></description><pubDate>Mon, 08 Dec 2025 19:43:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46196686</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46196686</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46196686</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Zig's new plan for asynchronous programs"]]></title><description><![CDATA[
<p>That makes sense. I don't know much about embedded programming, but I thought it fundamentally requires async in the conceptual sense: you have to structure your program as an event loop no matter what. Wasn't the stated goal of Rust async to be zero-cost, in the sense that the transformation of a future ends up being roughly the state machine you would write by hand? Of course executing futures requires a runtime, and I get why something like Tokio would be a non-starter in embedded environments, but you can still hand-roll the core runtime and structure the rest of the code with async/await, right? Or are you saying that the generated code, even without the runtime, is too heavy for an embedded environment?</p>
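For what it's worth, the "hand-roll the core runtime" part can be very small. A toy sketch of a busy-polling executor built only on the standard library (nothing like a production embedded executor, which would park until an interrupt fires the waker):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker: good enough for a toy executor that just re-polls in a loop.
fn clone_raw(_: *const ()) -> RawWaker {
    RawWaker::new(std::ptr::null(), &VTABLE)
}
fn noop(_: *const ()) {}
static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, noop, noop, noop);

// Minimal "runtime": drive a single future to completion by polling it.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    // Safety: the vtable functions uphold the RawWaker contract (they do nothing).
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` stays on this stack frame and is never moved after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => continue, // busy-poll; a real executor would sleep here
        }
    }
}

fn main() {
    let v = block_on(async { 1 + 2 });
    assert_eq!(v, 3);
    println!("{v}");
}
```

The compiler-generated state machine itself carries no allocation or threads; all the heavy machinery people associate with async lives in the executor you choose.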
]]></description><pubDate>Thu, 04 Dec 2025 21:56:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46153654</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46153654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46153654</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Show HN: Walrus – a Kafka alternative written in Rust"]]></title><description><![CDATA[
<p>> Except a consumer can discard an unprocessable record?<p>It's not the unprocessable records that are the problem, it's the records that are very slow to process (for whatever reason).</p>
]]></description><pubDate>Thu, 04 Dec 2025 16:04:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46149089</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46149089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46149089</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Zig's new plan for asynchronous programs"]]></title><description><![CDATA[
<p>> If it were written with async it would likely have enough other baggage that it wouldn't fit or otherwise wouldn't work<p>I'm unclear what this means. What is the other baggage in this context?</p>
]]></description><pubDate>Wed, 03 Dec 2025 11:53:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46133447</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46133447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46133447</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Zig's new plan for asynchronous programs"]]></title><description><![CDATA[
<p>Right, because this would deadlock. But it seems like Zig would have the same issue: if I am running something in an evented IO system and then do some blocking IO inside it, I will get a deadlock. The idea that you can write libraries that are agnostic to the asynchronous runtime seems fanciful to me beyond trivial examples.</p>
]]></description><pubDate>Wed, 03 Dec 2025 11:50:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46133423</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46133423</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46133423</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Zig's new plan for asynchronous programs"]]></title><description><![CDATA[
<p>Honestly I don't see how that is different from how it works in Rust. Synchronous code is a proper subset of asynchronous code. If you have a streaming API, you can have an implementation that works synchronously with no overhead if you want. For example, if you sometimes already have the whole buffer in memory, you can just use it directly and the stream will work exactly like the loop you would write in the sync version.</p>
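A rough sketch of what I mean, using a plain iterator of byte chunks as a stand-in for a streaming API (the names here are made up for illustration):

```rust
// A chunk-streaming consumer: it neither knows nor cares whether the chunks
// arrive one at a time or all at once from a buffer already in memory.
fn total_len<'a, I: IntoIterator<Item = &'a [u8]>>(chunks: I) -> usize {
    chunks.into_iter().map(|c| c.len()).sum()
}

fn main() {
    let buf = vec![0u8; 1024];

    // Whole buffer already in memory: "streaming" degenerates to one chunk,
    // exactly the loop you would write in the sync version.
    assert_eq!(total_len(std::iter::once(buf.as_slice())), 1024);

    // The same consumer works chunk-by-chunk with no code changes.
    assert_eq!(total_len(buf.chunks(128)), 1024);
}
```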
]]></description><pubDate>Wed, 03 Dec 2025 11:32:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46133280</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46133280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46133280</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Google, Nvidia, and OpenAI"]]></title><description><![CDATA[
<p>> The problem is I still have some of their clothes I bought 10 years ago and their quality trumps premium brands now.<p>I'm skeptical of this claim. Maybe it's true for some particular brand but that's just an artifact of one particular "premium brand" essentially cashing in its brand equity by reducing quality while (temporarily) being able to command a premium price. But it is easier now than at any other time in my life to purchase high-quality clothing that is built to last for decades. You just have to pay for that quality, which is something a lot of people don't want to do.</p>
]]></description><pubDate>Tue, 02 Dec 2025 11:10:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46120047</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=46120047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46120047</guid></item><item><title><![CDATA[New comment by thinkharderdev in "650GB of Data (Delta Lake on S3). Polars vs. DuckDB vs. Daft vs. Spark"]]></title><description><![CDATA[
<p>> It's good to know that OOTB duckdb can replace snowflake et al. in these situations, especially with how expensive they are.<p>Does this article demonstrate that, though? I get, and agree, that a lot of people are using "big data" tools for datasets that are way too small to require them. But this article consists of exactly one very simple aggregation query, and even then it takes 16m to run (in the best case). As others have mentioned, the long execution time is almost certainly dominated by IO because of limited network bandwidth, but network bandwidth is one of the resources you get more of in a distributed computing environment.<p>But my bigger issue is that real analytical queries are often quite a bit more complicated than a simple count by timestamp. As soon as you add non-trivial compute to the query, or multiple joins (and g*d forbid you have a nested-loop join in there somewhere), or sorting, the single-node execution time is going to explode.</p>
]]></description><pubDate>Fri, 14 Nov 2025 13:31:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45926539</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=45926539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45926539</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Result is all I need"]]></title><description><![CDATA[
<p>This depends a lot on what you are using exceptions for. In general the branch on Ok/Err is probably not meaningful performance-wise because the branch predictor will see right through it.<p>But more generally, the happy-path/error-path distinction can be a bit murky. From my days writing Java, it was very common to see code where checked exceptions were used as a control flow mechanism, so you ended up on the slow path relatively frequently because that was just how you handled certain expected conditions that were arbitrarily designated as "exceptions". The idea behind Result types, to me, is that recoverable, expected errors are part of the program's control flow and should be handled through normal code, not some side-channel. Exceptions/panics should be used only for actually exceptional conditions (programming errors which break some expected invariant of the system) and should immediately terminate the unit of work that experienced the exception.</p>
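A minimal sketch of that distinction, with a made-up `parse_count`/`ParseError` for illustration: the expected failure comes back as an Err value handled in normal control flow, while a broken invariant would be a panic that kills the unit of work.

```rust
// Expected, recoverable failures are ordinary values, not a side-channel.
#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    NotANumber,
}

fn parse_count(s: &str) -> Result<u32, ParseError> {
    if s.is_empty() {
        // An empty input is an expected condition, handled as a value --
        // in checked-exception Java this would often be a thrown exception.
        return Err(ParseError::Empty);
    }
    s.trim().parse::<u32>().map_err(|_| ParseError::NotANumber)
}

fn main() {
    // Both outcomes flow through normal code: match/if, no unwinding.
    assert_eq!(parse_count("42"), Ok(42));
    assert_eq!(parse_count(""), Err(ParseError::Empty));
    assert_eq!(parse_count("abc"), Err(ParseError::NotANumber));
}
```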
]]></description><pubDate>Fri, 31 Oct 2025 13:39:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45771859</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=45771859</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45771859</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Asbestosis"]]></title><description><![CDATA[
<p>Happened to me. Bought a house with wood floors in the basement. We had some flooding which ruined the wood, and when we ripped it out to replace it, it turned out the wood floors had been installed over the original asbestos tiles. From what I can tell, the asbestos tiles themselves were of no particular danger to us, but once they got wet and started cracking they had to be removed, which cost an additional couple thousand dollars on top of replacing the floors.</p>
]]></description><pubDate>Sun, 26 Oct 2025 14:33:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45712231</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=45712231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45712231</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Trump pardons convicted Binance founder"]]></title><description><![CDATA[
<p>Yes</p>
]]></description><pubDate>Thu, 23 Oct 2025 21:20:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45687340</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=45687340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45687340</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Trump pardons convicted Binance founder"]]></title><description><![CDATA[
<p>Trump is definitely the most egregious by a very wide margin, but the pardon power has been abused by every President in my lifetime. It's a truly insane feature of our constitution that needs to be changed.</p>
]]></description><pubDate>Thu, 23 Oct 2025 21:13:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45687251</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=45687251</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45687251</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Today is when the Amazon brain drain sent AWS down the spout"]]></title><description><![CDATA[
<p>I interviewed with Amazon a few years back, and the whole thing turned me off. A recruiter reached out and I was interested (it was late 2020 and the money was tempting). But before the first phone screen I had to have a call with the recruiter again, where she gave me a list of things I needed to "study" and was told that "successful candidates usually spend 5-10 hours preparing for the interview". The study list was the usual list of CS101 topics. I didn't bother preparing, and it was a good thing because on the phone screen the guy just asked me a fairly mundane coding question and then some more general stuff (it was actually a very reasonable interview). Based on that they wanted to proceed to a final interview, which was an all-day affair (on Zoom, of course, because this was during the pandemic). But first I had to do ANOTHER 1h call with the recruiter where she gave me ANOTHER list of things I needed to "study" and reminded me that I should spend 5-10h preparing. That was too much for me and I politely declined the opportunity.</p>
]]></description><pubDate>Tue, 21 Oct 2025 10:20:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45654237</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=45654237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45654237</guid></item><item><title><![CDATA[New comment by thinkharderdev in "Why Is SQLite Coded In C"]]></title><description><![CDATA[
<p>> Also, does it use doubly linked lists or graphs at all? Those can, in a way, be safer in C since Rust makes you roll your own virtual pointer arena.<p>You can implement a linked list in Rust the same as you would in C using raw pointers and some unsafe code. In fact there is one in the standard library.</p>
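For illustration, here is a toy C-style singly linked list in Rust using raw pointers and a couple of small unsafe blocks (a sketch of the pattern, not the std implementation):

```rust
use std::ptr;

// A node owns a raw pointer to the next node, exactly as you would in C.
struct Node {
    val: i32,
    next: *mut Node,
}

struct List {
    head: *mut Node,
}

impl List {
    fn new() -> Self {
        List { head: ptr::null_mut() }
    }

    fn push(&mut self, val: i32) {
        // Box::into_raw hands us an owned heap pointer, C-malloc style.
        let node = Box::into_raw(Box::new(Node { val, next: self.head }));
        self.head = node;
    }

    fn pop(&mut self) -> Option<i32> {
        if self.head.is_null() {
            return None;
        }
        // Safety: head came from Box::into_raw and is owned solely by the list.
        let node = unsafe { Box::from_raw(self.head) };
        self.head = node.next;
        Some(node.val) // the Box is dropped here, freeing the node
    }
}

impl Drop for List {
    fn drop(&mut self) {
        while self.pop().is_some() {}
    }
}

fn main() {
    let mut l = List::new();
    l.push(1);
    l.push(2);
    assert_eq!(l.pop(), Some(2));
    assert_eq!(l.pop(), Some(1));
    assert_eq!(l.pop(), None);
}
```

The doubly linked version in the standard library is `std::collections::LinkedList`, which is built the same way internally: raw `NonNull` pointers wrapped in a safe API.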
]]></description><pubDate>Wed, 15 Oct 2025 01:16:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45587006</link><dc:creator>thinkharderdev</dc:creator><comments>https://news.ycombinator.com/item?id=45587006</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45587006</guid></item></channel></rss>