<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: joaohaas</title><link>https://news.ycombinator.com/user?id=joaohaas</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 05 Apr 2026 22:26:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=joaohaas" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by joaohaas in "We sped up bun by 100x"]]></title><description><![CDATA[
<p>With the recent barrage of AI-slop 'speedup' posts, the first thing I always do to see if a post is worth a read is Ctrl+F "benchmark" and check whether the benchmark makes any fucking sense.<p>99% of the time (such as in this article), it doesn't. What do you mean 'cloneBare + findCommit + checkout: ~10x win'? Does that mean running those commands back to back results in a 10x win over the original? Does that mean there's a specific function that calls these 3 operations, and that's the improvement of the overall function? What's the baseline we're talking about, and is it relevant at all?<p>Those questions are partially answered on the much better benchmark page[1], but for some reason they're using the CLI instead of the gitlib for comparisons.<p>[1] <a href="https://github.com/hdresearch/ziggit/blob/5d3deb361f03d4aefef29426cf333782fc05d7cf/BENCHMARKS.md#full-workflow" rel="nofollow">https://github.com/hdresearch/ziggit/blob/5d3deb361f03d4aefe...</a></p>
]]></description><pubDate>Thu, 02 Apr 2026 20:01:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47619449</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=47619449</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47619449</guid></item><item><title><![CDATA[New comment by joaohaas in "OpenAI closes funding round at an $852B valuation"]]></title><description><![CDATA[
<p>> This is not just product simplification. It is a distribution and deployment strategy.<p>iykyk</p>
]]></description><pubDate>Tue, 31 Mar 2026 20:53:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593336</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=47593336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593336</guid></item><item><title><![CDATA[New comment by joaohaas in "Turning a MacBook into a touchscreen with $1 of hardware (2018)"]]></title><description><![CDATA[
<p>As other people mentioned, this is obviously not something I would want in my notebook... but I can still appreciate the cool tech!<p>I can also definitely see this kind of thing being used in things like budget outdoor displays, especially if the UI is made to accommodate the lack of accuracy and the camera is positioned on the side (since these displays are usually vertical).</p>
]]></description><pubDate>Mon, 30 Mar 2026 22:35:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580571</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=47580571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580571</guid></item><item><title><![CDATA[New comment by joaohaas in "We rewrote our Rust WASM parser in TypeScript and it got faster"]]></title><description><![CDATA[
<p>God I hate AI writing.<p>That final summary benchmark means nothing. It mentions a 'baseline' value for the 'Full-stream total' of the Rust implementation, and then says `serde-wasm-bindgen` is '+9-29% slower', but it never gives us the baseline value, because clearly the only benchmark it ran against the Rust codebase was the per-call one.<p>Then it mentions:
"End result: 2.2-4.6x faster per call and 2.6-3.3x lower total streaming cost."<p>But the "2.6-3.3x" is, by their own definition, a comparison against the naive TS implementation.<p>I really think the guy just prompted Claude to "get this shit fast and then publish a blog post".</p>
]]></description><pubDate>Fri, 20 Mar 2026 23:52:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47462440</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=47462440</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47462440</guid></item><item><title><![CDATA[New comment by joaohaas in "The American Healthcare Conundrum"]]></title><description><![CDATA[
<p>Kinda insane no one else is talking about this.<p>The entire repo reeks of a "Write an extensive analysis comparing the American and Japanese medical care systems" prompt.<p>Not saying <i>all</i> the findings are invalid, but most of them are just the LLM trying to justify the premise, like the life expectancy one.</p>
]]></description><pubDate>Tue, 17 Mar 2026 12:48:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47411932</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=47411932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47411932</guid></item><item><title><![CDATA[New comment by joaohaas in "Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy"]]></title><description><![CDATA[
<p>It follows the same reasoning as when someone purposefully copies code from one codebase into another whose license doesn't allow it.
Yes it might be the only viable solution, and most likely no one will ever know you copied it, but if you get found out most maintainers will not merge your PR.</p>
]]></description><pubDate>Tue, 10 Mar 2026 10:45:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47321497</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=47321497</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47321497</guid></item><item><title><![CDATA[New comment by joaohaas in "Addressing the adding situation"]]></title><description><![CDATA[
<p>I think most people wouldn't call proof-reading 'assistance'. As in, if I ask a colleague to review my PR, I wouldn't say he assisted me.<p>I've been throwing my PR diffs at Claude over the last few weeks. It spits out a <i>lot</i> of useless or straight-up wrong stuff, but sometimes among the insanity it manages to catch one or another typo that a human missed, and between letting a bug pass or spending an extra 10m per PR going through the nothingburgers Claude throws at me, I'd rather lose the 10m.</p>
]]></description><pubDate>Tue, 02 Dec 2025 15:57:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46122531</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=46122531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46122531</guid></item><item><title><![CDATA[New comment by joaohaas in "Dark Pattern Games"]]></title><description><![CDATA[
<p>Overall it feels like unless your game is a linear single-player game, it will fall under multiple of the site's labelled 'dark patterns'. Here are some really bad ones:<p>Infinite Treadmill - Impossible to win or complete the game.<p>Variable Rewards - Unpredictable or random rewards are more addictive than a predictable schedule.<p>Can't Pause or Save - The game does not allow you to stop playing whenever you want.<p>Grinding - Being required to perform repetitive and tedious tasks to advance.<p>Competition - The game makes you compete against other players.</p>
]]></description><pubDate>Mon, 17 Nov 2025 00:38:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45949872</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45949872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45949872</guid></item><item><title><![CDATA[New comment by joaohaas in "CPUs and GPUs to Become More Expensive After TSMC Price Hike in 2026"]]></title><description><![CDATA[
<p>>post decides to use the most crusty GenAI image possible<p>Man, what a cancer. Straight up using the bare TSMC logo here would work just fine.</p>
]]></description><pubDate>Tue, 04 Nov 2025 22:55:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45816835</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45816835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45816835</guid></item><item><title><![CDATA[New comment by joaohaas in "Language models are injective and hence invertible"]]></title><description><![CDATA[
<p>That's exactly what the quoted answer is saying though?</p>
]]></description><pubDate>Thu, 30 Oct 2025 14:46:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45760620</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45760620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45760620</guid></item><item><title><![CDATA[New comment by joaohaas in "Kafka is Fast – I'll use Postgres"]]></title><description><![CDATA[
<p>Even if the data is important, you can enable persistence (AOF) and make sure the worker/consumer claims items by RPOPLPUSHing them onto a processing queue. This way you can easily requeue the data if the worker ever goes offline mid-process.</p>
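The reliable-queue pattern described above can be sketched as follows. This is a minimal in-memory stand-in for the two Redis list commands involved, so it runs without a server; with a real client the move step maps to RPOPLPUSH (or LMOVE on Redis >= 6.2), and the list names here are hypothetical.

```javascript
// In-memory stand-in for the Redis list operations used by the
// reliable-queue pattern (queue names are made up for illustration).
class FakeRedis {
  constructor() { this.lists = new Map(); }
  lpush(key, val) {
    if (!this.lists.has(key)) this.lists.set(key, []);
    this.lists.get(key).unshift(val);
  }
  // Atomically pop from the tail of `src` and push to the head of `dst`
  // (what RPOPLPUSH does on a real Redis server).
  rpoplpush(src, dst) {
    const srcList = this.lists.get(src) || [];
    const val = srcList.pop();
    if (val !== undefined) this.lpush(dst, val);
    return val;
  }
  lrem(key, val) {
    const list = this.lists.get(key) || [];
    const i = list.indexOf(val);
    if (i !== -1) list.splice(i, 1);
  }
  llen(key) { return (this.lists.get(key) || []).length; }
}

const redis = new FakeRedis();
redis.lpush('jobs', 'job-1');

// Consumer: move the item onto a processing list before working on it...
const job = redis.rpoplpush('jobs', 'jobs:processing');
// ...and only remove it once the work succeeds. If the worker dies
// before this line, the item survives in 'jobs:processing'.
redis.lrem('jobs:processing', job);
```

The key property is that between the move and the final LREM the item always lives in exactly one list, so a crashed worker's in-flight items can be found in the processing list and pushed back onto the main queue.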
]]></description><pubDate>Wed, 29 Oct 2025 17:16:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45749951</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45749951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45749951</guid></item><item><title><![CDATA[New comment by joaohaas in "Kafka is Fast – I'll use Postgres"]]></title><description><![CDATA[
<p>Had the same thoughts, weird it didn't include Kafka numbers.<p>Never used Kafka myself, but we extensively use Redis queues with some scripts to ensure persistency, and we hit throughputs much higher than those in equivalent prod machines.<p>Same for Redis pubsubs, but those are just standard non-persistent pubsubs, so maybe that gives it an upper edge.</p>
]]></description><pubDate>Wed, 29 Oct 2025 15:34:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45748233</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45748233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45748233</guid></item><item><title><![CDATA[New comment by joaohaas in "[dead]"]]></title><description><![CDATA[
<p>So, if I got this right, this is just about re-implementing an existing load balancing algorithm faster...? If so, this is really dumb. As you note, yes, most load balancing algorithms are slow/dumb:<p>>First, we evaluate DeepSeek's open-source EPLB implementation. This employs a greedy bin-packing strategy: experts are sorted by load in descending order, and each is placed onto the least-loaded GPU that has capacity (Figure 3a, Example 1). While simple, the solution is slow because it written in Python and uses a for-loop to performs linear search for finding the best-fit GPU choice.<p>But when considering a load balancing algorithm, unless the work being done (in this case by the GPU) lasts only a few ms, the speed of the load balancing algorithm itself will never be the bottleneck. The post does not mention whether this is the case at all.<p>Also, I don't want to sound rude, but if all they managed to get is a 5x speedup over a simple Python algorithm, I don't think this is impressive at all...? Any rewrite of the 'dumb' algorithm in a language with more memory control and cache locality should result in much better results.</p>
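For reference, the greedy strategy the quoted passage describes fits in a few lines. This is a hedged sketch built from that description alone, with made-up loads and capacities, not the paper's actual code.

```javascript
// Greedy bin-packing placement per the quoted description: sort experts
// by load descending, then place each on the least-loaded GPU that
// still has a free slot (the linear scan the paper calls slow).
function placeExperts(expertLoads, numGpus, capacity) {
  // [expertId, load] pairs, heaviest first
  const order = expertLoads
    .map((load, id) => [id, load])
    .sort((a, b) => b[1] - a[1]);
  const gpuLoad = new Array(numGpus).fill(0);
  const gpuCount = new Array(numGpus).fill(0);
  const placement = new Array(expertLoads.length).fill(-1);
  for (const [id, load] of order) {
    let best = -1;
    for (let g = 0; g < numGpus; g++) {
      if (gpuCount[g] < capacity &&
          (best === -1 || gpuLoad[g] < gpuLoad[best])) {
        best = g;
      }
    }
    placement[id] = best;
    gpuLoad[best] += load;
    gpuCount[best] += 1;
  }
  return { placement, gpuLoad };
}

// Example: 4 experts with loads [5, 9, 3, 7], 2 GPUs, 2 slots each.
const { placement, gpuLoad } = placeExperts([5, 9, 3, 7], 2, 2);
```

Even this naive version is O(experts × GPUs) per rebalance; at typical MoE scales that is microseconds of work, which is the point about it rarely being the bottleneck.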
]]></description><pubDate>Fri, 24 Oct 2025 01:51:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45689857</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45689857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45689857</guid></item><item><title><![CDATA[New comment by joaohaas in "Vite+ – Unified toolchain for the web"]]></title><description><![CDATA[
<p>>and is much better than the mess of Webpack/Rollup/Brunch/Grunt...<p>You do know that Vite uses some of these behind the scenes, right? Vite in general has much better defaults, so you don't have to configure them most of the time, but anything slightly out of the ordinary will still require messing with the configs extensively.<p>Not like OP's Vite+ changes anything regarding that.</p>
]]></description><pubDate>Fri, 10 Oct 2025 11:57:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45537901</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45537901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45537901</guid></item><item><title><![CDATA[New comment by joaohaas in "Kagi News"]]></title><description><![CDATA[
<p>I think other people mentioned it already, but ideally I'd like to choose a list of languages to leave as-is, and translate the rest.<p>For example, I can speak Portuguese, Spanish, Japanese and English. If I set my target language to English, Russian news would get translated to English, but Portuguese news would keep its original text.</p>
]]></description><pubDate>Tue, 30 Sep 2025 23:09:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45432402</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45432402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45432402</guid></item><item><title><![CDATA[New comment by joaohaas in "Google will allow only apps from verified developers to be installed on Android"]]></title><description><![CDATA[
<p>>Vanced and such is more of a First World/Western issue<p>What? I'm from Brazil and Vanced is as big, if not bigger here. In fact, most of my 'first world' friends just pay for YouTube Premium (or whatever it is called), and these kinds of workarounds are mostly used in countries with less purchasing power.</p>
]]></description><pubDate>Tue, 26 Aug 2025 19:35:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45031212</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=45031212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45031212</guid></item><item><title><![CDATA[New comment by joaohaas in "The surprise deprecation of GPT-4o for ChatGPT consumers"]]></title><description><![CDATA[
<p>Yes, it sucks<p>But GPT-4 would have the same problems, since it uses the same image model</p>
]]></description><pubDate>Fri, 08 Aug 2025 18:56:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44840416</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=44840416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44840416</guid></item><item><title><![CDATA[New comment by joaohaas in "Modern Node.js Patterns"]]></title><description><![CDATA[
<p><p><pre><code>  try {
    // Parallel execution of independent operations
    const [config, userData] = await Promise.all([
      readFile('config.json', 'utf8'),
      fetch('/api/user').then(r => r.json())
    ]);
    ...
  } catch (error) {
    // Structured error logging with context
    ...
  }
</code></pre>
This might seem fine at a glance, but a big gripe I have with Node/JS async/promise helper functions is that you can't tell which promise resolved or threw an exception.<p>In this example, if you wanted to handle the `config.json` file not existing, you would need to somehow know what kind of error the `readFile` function can throw, and somehow manage to recognize it in the 'error' variable.<p>This gets even worse when trying to use something like `Promise.race` to handle promises as they complete, like:<p><pre><code>  const result = await Promise.race([op1, op2, op3]);
</code></pre>
You need to somehow embed the information about what each promise represents inside the promise result, which is usually done through a wrapper that tags each promise's value with its identity... which is really ugly.</p>
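A minimal sketch of that wrapper (the `tag` helper and the labels are hypothetical names, not a standard API):

```javascript
// Wrap a promise so its settlement carries a label identifying which
// operation it came from, for both the fulfilled and rejected paths.
const tag = (label, promise) =>
  promise.then(
    value => ({ label, ok: true, value }),
    error => ({ label, ok: false, error }),
  );

async function main() {
  const slow = new Promise(res => setTimeout(() => res('slow'), 50));
  const fast = Promise.resolve(42);

  const first = await Promise.race([
    tag('op1', slow),
    tag('op2', fast),
  ]);
  // `first` now says which operation won the race and whether it
  // succeeded, instead of being a bare value or a bare error.
  return first;
}
```

Note the tradeoff: every consumer of the raced result now has to unwrap `{ label, ok, value }` instead of getting the value directly, which is exactly the ugliness complained about above.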
]]></description><pubDate>Mon, 04 Aug 2025 14:52:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44786645</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=44786645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44786645</guid></item><item><title><![CDATA[New comment by joaohaas in "JSON is not a YAML subset (2022)"]]></title><description><![CDATA[
<p>YAML sucking is no excuse to keep using XML. JSON, JSON5 and TOML are all great alternatives for projects.</p>
]]></description><pubDate>Fri, 01 Aug 2025 23:53:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44763706</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=44763706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44763706</guid></item><item><title><![CDATA[New comment by joaohaas in "There is no memory safety without thread safety"]]></title><description><![CDATA[
<p>Your example does not qualify as 'undefined behavior'. Something is 'undefined behavior' only if the language spec explicitly leaves it undefined, and in that case, yes, the implementation is allowed to do anything, including violating memory safety.</p>
]]></description><pubDate>Thu, 24 Jul 2025 17:08:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44673213</link><dc:creator>joaohaas</dc:creator><comments>https://news.ycombinator.com/item?id=44673213</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44673213</guid></item></channel></rss>