<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: yfontana</title><link>https://news.ycombinator.com/user?id=yfontana</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 00:09:16 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=yfontana" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by yfontana in "Changes in the system prompt between Claude Opus 4.6 and 4.7"]]></title><description><![CDATA[
<p>Not GP, but BMAD has several interview techniques in its brainstorming skill. You can invoke it with /bmad-brainstorming, briefly explain the topic you want to explore, then, when it asks whether you want to select a technique, pick something like "question storming". I've had positive experiences with this (with Opus 4.7).</p>
]]></description><pubDate>Mon, 20 Apr 2026 07:50:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47831439</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=47831439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47831439</guid></item><item><title><![CDATA[New comment by yfontana in "Do you even need a database?"]]></title><description><![CDATA[
<p>As a data architect I dislike the term NoSQL and often recommend that my coworkers not use it in technical discussions, as it is too vague. Document, key-value and graph DBs are usually considered NoSQL, but they have fairly different use cases (and I'd argue that search DBs like Elastic / OpenSearch are in their own category as well).<p>To me, write scaling is the main current advantage of KV and document DBs. They can generally do schema evolution fairly easily, but nowadays so can many SQL DBs, with semi-structured column types. Also, you need to keep in mind that KV and document DBs are (mostly) non-relational. The more relational your data, the less likely you are to actually benefit from using those DBs over a relational, SQL DB.</p>
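As a small illustration of semi-structured column types in a plain SQL DB, here is a sketch using SQLite's JSON1 functions (Postgres has jsonb with similar capabilities; the table and field names are invented for the example):

```python
import sqlite3

# Store a semi-structured document in an ordinary SQL column and query a
# field inside it; adding new fields later needs no ALTER TABLE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO events (payload) VALUES (?)",
    ('{"type": "click", "meta": {"button": "left"}}',),
)
# json_extract pulls a nested field out of the document by path
row = conn.execute(
    "SELECT json_extract(payload, '$.meta.button') FROM events"
).fetchone()
print(row[0])  # -> left
```

This is the "schema evolution without migrations" trade-off in miniature: the DB stays relational for everything else, while one column absorbs the loosely-structured part.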
]]></description><pubDate>Wed, 15 Apr 2026 22:01:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47785906</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=47785906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47785906</guid></item><item><title><![CDATA[New comment by yfontana in "Prism"]]></title><description><![CDATA[
<p>> Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.<p>I have a PhD in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography, most did that manually.</p>
]]></description><pubDate>Wed, 28 Jan 2026 08:04:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46792382</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46792382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46792382</guid></item><item><title><![CDATA[New comment by yfontana in "Calling All Hackers: How money works (2024)"]]></title><description><![CDATA[
<p>> - It's backed by nothing.<p>Money is never backed by nothing, or it's worthless. It may not be backed by anything physical, but it's always backed by some form of trust. National currencies are backed by trust in the corresponding government and institutions.</p>
]]></description><pubDate>Wed, 07 Jan 2026 09:22:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46524285</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46524285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46524285</guid></item><item><title><![CDATA[New comment by yfontana in "Calling All Hackers: How money works (2024)"]]></title><description><![CDATA[
<p>Reserves matter even if reserve ratios are zero. If Bank A lends too much money, then when its customers spend that money, a lot of it will end up deposited at other banks. These banks will then ask Bank A for reserves (as in, central bank money) to clear the inter-bank transfers, which Bank A will need to borrow from the central bank, at a cost.</p>
]]></description><pubDate>Wed, 07 Jan 2026 09:20:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46524272</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46524272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46524272</guid></item><item><title><![CDATA[New comment by yfontana in "Calling All Hackers: How money works (2024)"]]></title><description><![CDATA[
<p>This probably won't make you feel any better, but banks don't really loan out money that's not theirs. When they lend money, they literally create it out of thin air. Creating that money has a cost, which is what ultimately limits how much they can lend, and having more deposits can lower that cost somewhat, but there's no direct connection between the money you deposit in your account and the money that the bank lends to someone else.</p>
]]></description><pubDate>Wed, 07 Jan 2026 09:04:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46524186</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46524186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46524186</guid></item><item><title><![CDATA[New comment by yfontana in "I failed to recreate the 1996 Space Jam website with Claude"]]></title><description><![CDATA[
<p>If I were to do this (and I might give it a try, this is quite an interesting case), I would try to run a detection model on the image, to find bounding boxes for the planets and their associated text. Even a small model running on CPU should be able to do this relatively quickly.</p>
]]></description><pubDate>Mon, 08 Dec 2025 16:23:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46194194</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46194194</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46194194</guid></item><item><title><![CDATA[New comment by yfontana in "The fuck off contact page"]]></title><description><![CDATA[
<p>On the professional side, they also often let you interact with their experts and architects directly, as part of your support contract. With most other companies, you either have to go through front-office support exclusively, or pay extra for Professional Services.</p>
]]></description><pubDate>Mon, 08 Dec 2025 11:13:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46190976</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46190976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46190976</guid></item><item><title><![CDATA[New comment by yfontana in "Bag of words, have mercy on us"]]></title><description><![CDATA[
<p>> I’m downplaying because I have honestly been burned by these tools when I’ve put trust in their ability to understand anything, provide a novel suggestion or even solve some basic bugs without causing other issues.<p>I've had that experience plenty of times with actual people...
LLMs don't "think" like people do, that much is pretty obvious. But I'm not at all sure whether what they do can be called "thinking" or not.</p>
]]></description><pubDate>Mon, 08 Dec 2025 08:52:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46189965</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46189965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46189965</guid></item><item><title><![CDATA[New comment by yfontana in "Super fast aggregations in PostgreSQL 19"]]></title><description><![CDATA[
<p>> In the examples given, it’s much faster, but is that mostly due to the missing indexes? I’d have thought that an optimal approach in the colour example would be to look at the product.color_id index, get the counts directly from there and you’re pretty much done.<p>So I tried to test this (my intuition being that indexes wouldn't change much, at best you could just do an index scan instead of a seq scan), and I couldn't understand the plans I was getting, until I realized that the query in the blog post has a small error:<p>> AND c1.category_id = c1.category_id<p>should really be<p>> AND p.category_id = c1.category_id<p>otherwise we're doing a cross-product on the category. Probably doesn't really change much, but still a bit of an oopsie. Anyway, even with the right join condition an index only reduces execution time by about 20% in my tests, through an index scan.</p>
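The cross-product effect of that typo is easy to see on toy data. A minimal sketch with SQLite (table contents and sizes are invented, not the blog post's actual schema): `c1.category_id = c1.category_id` is trivially true for every non-NULL row, so every product row is joined to every category row.

```python
import sqlite3

# Demonstrate that a self-equality join condition degenerates into a
# cross product, while the intended condition matches rows up properly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER, category_id INTEGER);
    CREATE TABLE category (category_id INTEGER);
    INSERT INTO product VALUES (1, 1), (2, 2);
    INSERT INTO category VALUES (1), (2), (3);
""")
wrong = conn.execute(
    "SELECT COUNT(*) FROM product p JOIN category c1 "
    "ON c1.category_id = c1.category_id"  # always true: 2 x 3 rows
).fetchone()[0]
right = conn.execute(
    "SELECT COUNT(*) FROM product p JOIN category c1 "
    "ON p.category_id = c1.category_id"   # proper match: 2 rows
).fetchone()[0]
print(wrong, right)  # -> 6 2
```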
]]></description><pubDate>Wed, 03 Dec 2025 08:58:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46132064</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46132064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46132064</guid></item><item><title><![CDATA[New comment by yfontana in "Super fast aggregations in PostgreSQL 19"]]></title><description><![CDATA[
<p>Interestingly, "aggregate first, join later" has been the standard way of joining fact tables in BI tools for a long time. Since fact tables are typically big and also share common dimensions, multi-fact joins for drill-across are best done by first aggregating on those common dimensions, then joining on them.<p>Makes you wonder how many cases there are out there of optimizations that feel almost second nature in one domain, but have never been applied to other domains because no one thought of it.</p>
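A toy sketch of the drill-across pattern (SQLite here, with invented fact-table names): each fact table is aggregated down to the shared dimension first, and only the small aggregates are joined.

```python
import sqlite3

# Two fact tables sharing a "day" dimension. Aggregate each one first,
# then join the aggregates; joining the raw facts first would fan out
# rows and double-count the measures.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (day TEXT, amount REAL);
    CREATE TABLE returns (day TEXT, amount REAL);
    INSERT INTO sales VALUES ('mon', 100), ('mon', 50), ('tue', 70);
    INSERT INTO returns VALUES ('mon', 10), ('tue', 5), ('tue', 5);
""")
rows = conn.execute("""
    SELECT s.day, s.total_sales, r.total_returns
    FROM (SELECT day, SUM(amount) AS total_sales
          FROM sales GROUP BY day) s
    JOIN (SELECT day, SUM(amount) AS total_returns
          FROM returns GROUP BY day) r
      ON s.day = r.day
    ORDER BY s.day
""").fetchall()
print(rows)  # -> [('mon', 150.0, 10.0), ('tue', 70.0, 10.0)]
```

On real fact tables the aggregates are orders of magnitude smaller than the facts, which is why BI tools do it in this order.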
]]></description><pubDate>Wed, 03 Dec 2025 08:31:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46131757</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=46131757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46131757</guid></item><item><title><![CDATA[New comment by yfontana in "I’m worried that they put co-pilot in Excel"]]></title><description><![CDATA[
<p>> - It says it's done when its code does not even work, sometimes when it does not even compile.<p>> - When asked to fix a bug, it confidently declares victory without actually having fixed the bug.<p>You need to give it ways to validate its work. A junior dev will also hand you code that doesn't compile, or claim a bug is fixed when it isn't, if they never actually compile the code and test that the bug is truly fixed.</p>
]]></description><pubDate>Wed, 05 Nov 2025 17:05:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45825193</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=45825193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45825193</guid></item><item><title><![CDATA[New comment by yfontana in "GPU Hot: Dashboard for monitoring NVIDIA GPUs on remote servers"]]></title><description><![CDATA[
<p>I think I was shadow-banned because my very first comment on the site was slightly snarky, and have now been unbanned.</p>
]]></description><pubDate>Thu, 09 Oct 2025 19:29:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45532036</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=45532036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45532036</guid></item><item><title><![CDATA[New comment by yfontana in "GPU Hot: Dashboard for monitoring NVIDIA GPUs on remote servers"]]></title><description><![CDATA[
<p>Properly measuring "GPU load" is something I've been wondering about, as an architect who's had to deploy ML/DL models but is still relatively new at it. With CPU workloads you can generally tell from %CPU, %Mem and IOs how much load your system is under. But with GPUs I'm not sure how you can tell, other than by just measuring your model execution times. That makes it hard to gauge whether upgrading to a stronger GPU would help, and by how much. Are there established ways of doing this?</p>
]]></description><pubDate>Thu, 09 Oct 2025 14:10:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=45527960</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=45527960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45527960</guid></item><item><title><![CDATA[New comment by yfontana in "Gemini 2.5 Flash Image"]]></title><description><![CDATA[
<p>Open source models like Flux Kontext or Qwen image edit wouldn't refuse, but you need to either have a sufficiently strong GPU or get one in the cloud (neither difficult nor expensive with services like runpod), then set up your own processing pipeline (again, not too difficult if you use ComfyUI). Results won't be SOTA, but they shouldn't be too far off.</p>
]]></description><pubDate>Wed, 27 Aug 2025 07:57:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45036718</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=45036718</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45036718</guid></item><item><title><![CDATA[New comment by yfontana in "Grok 4"]]></title><description><![CDATA[
<p>I pay for chatgpt because, in my experience, o3 and o4 are currently the best at combining reasoning with information retrieval from web searches. They're the best models I've tried at emulating the way I search for information (evaluating source quality, combining and contrasting information from several sources, refining searches, etc.), and using the results as part of a reasoning process. It's not necessarily significant for coding, but it is for designing.</p>
]]></description><pubDate>Fri, 11 Jul 2025 09:02:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44529904</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=44529904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44529904</guid></item><item><title><![CDATA[New comment by yfontana in "I got fooled by AI-for-science hype–here's what it taught me"]]></title><description><![CDATA[
<p>From the article:<p>> Besides protein folding, the canonical example of a scientific breakthrough from AI, a few examples of scientific progress from AI include:<p>>    Weather forecasting, where AI forecasts have had up to 20% higher accuracy (though still lower resolution) compared to traditional physics-based forecasts.<p>>    Drug discovery, where preliminary data suggests that AI-discovered drugs have been more successful in Phase I (but not Phase II) clinical trials. If the trend holds, this would imply a nearly twofold increase in end-to-end drug approval rates.</p>
]]></description><pubDate>Tue, 20 May 2025 07:52:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44038911</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=44038911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44038911</guid></item><item><title><![CDATA[New comment by yfontana in "SMS 2FA is not just insecure, it's also hostile to mountain people"]]></title><description><![CDATA[
<p>> It’s insane to me that maybe every bank I use requires SMS 2FA, but random services I use support apps.<p>It never ceases to surprise me how much American banks always seem to lag behind with regard to payment tech. My (European) bank started sending hardware TOTP tokens to whoever requested one about a decade ago. They've since switched to phone-app MFA.</p>
]]></description><pubDate>Wed, 14 May 2025 16:20:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=43986332</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=43986332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43986332</guid></item><item><title><![CDATA[New comment by yfontana in "PDF to Text, a challenging problem"]]></title><description><![CDATA[
<p>I've been working on extracting text from some 20 million PDFs, with just about every type of layout you can imagine. We're using a similar approach (segmentation / OCR), but with PyMuPDF.<p>The full extract is projected to run for several days on a GPU cluster, at a cost of like 20-30k (can't remember the exact number but it's in that ballpark). When you can afford this kind of compute, text extraction from PDFs isn't quite a fully solved problem, but we're most of the way there.<p>What the article in the OP tries to do is, as far as I understand, somewhat different. It's trying to use much simpler heuristics to get acceptable results cheaper and faster, and this is definitely an open issue.</p>
]]></description><pubDate>Tue, 13 May 2025 21:45:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43978106</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=43978106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43978106</guid></item><item><title><![CDATA[New comment by yfontana in "Universe expected to decay in 10⁷⁸ years, much sooner than previously thought"]]></title><description><![CDATA[
<p>We don't know for sure that the universe is a closed system.</p>
]]></description><pubDate>Mon, 12 May 2025 15:35:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=43964212</link><dc:creator>yfontana</dc:creator><comments>https://news.ycombinator.com/item?id=43964212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43964212</guid></item></channel></rss>