<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lgiordano_notte</title><link>https://news.ycombinator.com/user?id=lgiordano_notte</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 09:08:37 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lgiordano_notte" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lgiordano_notte in "LegoGPT: Generating Physically Stable and Buildable Lego"]]></title><description><![CDATA[
<p>Agree with this. Constraining generation with physics, legality, or even tooling limits turns the model into a search-and-validate engine instead of a word predictor. Closer to program synthesis.<p>The real value is upstream: defining a problem space so well that the model is boxed into generating something usable.</p>
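<p>Roughly the loop I have in mind, as a minimal sketch (generate and validate are stand-ins for the model call and the physics/legality checker, not anything LegoGPT-specific):</p>
<pre><code>from typing import Callable, Optional

def synthesize(prompt: str,
               generate: Callable[[str], str],
               validate: Callable[[str], tuple[bool, str]],
               max_attempts: int = 10) -> Optional[str]:
    # Generate-and-validate: the model proposes, an external checker
    # (physics, legality, tooling) accepts or rejects.
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate(prompt + feedback)
        ok, violations = validate(candidate)
        if ok:
            return candidate  # a usable, checked artifact
        # Fold the failure back in so the next sample searches elsewhere.
        feedback = f"\nPrevious attempt failed checks: {violations}"
    return None  # better to surface failure than emit an unstable build
</code></pre>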
]]></description><pubDate>Fri, 09 May 2025 12:50:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=43936168</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43936168</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43936168</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "A flat pricing subscription for Claude Code"]]></title><description><![CDATA[
<p>Shift feels real. LLMs don't replace devs, but they do compress the value curve. The top 10% get even more leverage, and the bottom 50% become harder to justify.<p>What worries me isn't layoffs but that entry-level roles become rare, and juniors stop building real intuition because the LLM handles all the hard thinking.<p>You get surface-level productivity but long-term skill rot.</p>
]]></description><pubDate>Fri, 09 May 2025 12:46:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43936127</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43936127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43936127</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Void: Open-source Cursor alternative"]]></title><description><![CDATA[
<p>Cursor’s doc indexing is actually one of the few AI coding features that feels like it saves time. Embedding full doc sites, deduping nav/header junk, then letting me reference @docs inline genuinely improves context grounding instead of leaving the model to guess at APIs.</p>
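<p>The deduping part is doable with a cheap trick; a minimal sketch (not Cursor's actual pipeline, just the general idea of dropping chunks that repeat across pages before embedding):</p>
<pre><code>import hashlib
from collections import Counter

def dedupe_chunks(pages: list[list[str]]) -> list[str]:
    # Nav/header/footer boilerplate repeats across most pages of a doc
    # site; the chunks unique to a page are the ones worth embedding.
    digest = lambda c: hashlib.sha1(c.encode()).hexdigest()
    counts = Counter(digest(chunk) for page in pages for chunk in page)
    threshold = max(2, len(pages) // 2)  # "on most pages" = junk
    return [chunk for page in pages for chunk in page
            if counts[digest(chunk)] < threshold]
</code></pre>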
]]></description><pubDate>Fri, 09 May 2025 12:41:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=43936084</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43936084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43936084</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "AI Is Making Developers Lazy: RIP Core Coding Skills"]]></title><description><![CDATA[
<p>LLMs shift the bottleneck: it becomes less about typing code and more about spotting when something’s subtly wrong. You still need real judgment, just applied at different layers. The skills that atrophy are surface-level. The deeper ones (debugging, systems thinking, knowing what not to trust) become more important.</p>
]]></description><pubDate>Thu, 08 May 2025 11:17:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43925074</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43925074</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43925074</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Web search on the Anthropic API"]]></title><description><![CDATA[
<p>I don't think the limit is in what LLMs can evaluate; given the right context, they’re good at assessing quality. The problem is what actually gets retrieved and surfaced in the first place. If the upstream search doesn’t rank high-quality or relevant material well, the LLM never sees it. It's not a judgment problem so much as a selection problem.</p>
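<p>A minimal sketch of the distinction, with search, judge, and call_llm as hypothetical callables: widen what gets retrieved, then let the model do the quality assessment over what actually surfaced.</p>
<pre><code>from typing import Callable

def answer(query: str,
           search: Callable[..., list[str]],
           judge: Callable[[str, str], float],
           call_llm: Callable[[str], str],
           k: int = 20) -> str:
    # Selection problem: the upstream ranker decides what exists at all.
    candidates = search(query, k=k)
    # Judgment problem: the model is fine at scoring what it can see.
    scored = sorted(candidates, key=lambda doc: judge(query, doc), reverse=True)
    context = "\n---\n".join(scored[:5])
    return call_llm(f"Answer '{query}' using only:\n{context}")
</code></pre>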
]]></description><pubDate>Thu, 08 May 2025 11:05:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=43925012</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43925012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43925012</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Why LLMs Are Not (Yet) the Silver Bullet for Unstructured Data Processing"]]></title><description><![CDATA[
<p>Basically treating extraction as an adaptive loop instead of a static function. If the first parse fails or looks incomplete, tweak the prompt, inject more context, or switch strategies. Memory helps carry forward partial wins so you don’t start from scratch. We’ve seen the same pattern in agentic web environments: structured retries, context propagation, and memory turn brittle flows into robust automation, especially with high-variance input and fuzzy schemas.</p>
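<p>Concretely, something like this minimal sketch (call_llm, is_complete, and the strategy prompts are all placeholders):</p>
<pre><code>from typing import Callable, Optional

def extract(doc: str,
            call_llm: Callable[[str], dict],
            is_complete: Callable[[dict], bool],
            strategies: list[str],
            memory: dict) -> Optional[dict]:
    # Adaptive loop: retry with a different strategy when the parse
    # looks incomplete, carrying partial wins forward in memory.
    for strategy in strategies:
        prompt = f"{strategy}\nKnown so far: {memory}\nInput: {doc}"
        result = call_llm(prompt)
        memory.update({k: v for k, v in result.items() if v})  # keep partial wins
        if is_complete(memory):
            return memory
    return None  # surface the gap instead of guessing
</code></pre>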
]]></description><pubDate>Wed, 07 May 2025 12:39:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43914884</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43914884</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43914884</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Claude's system prompt is over 24k tokens with tools"]]></title><description><![CDATA[
<p>Pretty cool.
However, truly reliable, scalable LLM systems will need structured, modular architectures, not just brute-force long prompts. Think agent architectures with memory, state, and tool abstractions, not just bigger and bigger context windows.</p>
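<p>By structured I mean something like this minimal sketch (the names are illustrative, not any particular framework):</p>
<pre><code>from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str            # shown to the model per step
    run: Callable[[str], str]   # the actual side-effecting call

@dataclass
class Agent:
    tools: dict[str, Tool]
    memory: list[str] = field(default_factory=list)  # state outlives one prompt

    def context_for(self, task: str) -> str:
        # Compose only what this step needs, instead of one giant
        # 24k-token system prompt carrying everything at once.
        tool_list = "\n".join(f"- {t.name}: {t.description}"
                              for t in self.tools.values())
        recent = "\n".join(self.memory[-5:])
        return f"Task: {task}\nTools:\n{tool_list}\nRecent memory:\n{recent}"
</code></pre>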
]]></description><pubDate>Wed, 07 May 2025 12:18:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=43914700</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43914700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43914700</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "OpenAI reaches agreement to buy Windsurf for $3B"]]></title><description><![CDATA[
<p>The value isn’t just the editor, it’s the workflow. Letting LLMs plan and act across multi-step flows is a hard problem, and Windsurf figured out a dev-focused version of that. There are similar gains to be made in browser automation once you add structure, retries, and context. Feels like a bet on that pattern becoming the default.
But yeah, as others have said, I highly doubt that's $3B in hard cash; more likely a roll-up of shares, etc.</p>
]]></description><pubDate>Wed, 07 May 2025 12:01:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43914581</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43914581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43914581</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Launch HN: Exa (YC S21) – The web as a database"]]></title><description><![CDATA[
<p>Really cool direction. The embedding-first + agentic verification pipeline resonates; a similar pattern worked well for us in the web interaction space.</p>
]]></description><pubDate>Tue, 06 May 2025 17:01:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=43907302</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43907302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43907302</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Why Google is losing its iron grip on search, and what I use now instead"]]></title><description><![CDATA[
<p>The shift isn't just about competitors gaining ground but about users increasingly bypassing traditional search entirely. Between Reddit, Perplexity, ChatGPT, and direct domain knowledge, more queries are being fragmented across tools that aren't indexed as 'search engines' but functionally serve that role.</p>
]]></description><pubDate>Tue, 06 May 2025 16:56:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43907243</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43907243</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43907243</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Why LLMs Are Not (Yet) the Silver Bullet for Unstructured Data Processing"]]></title><description><![CDATA[
<p>In my experience the key friction point has been schema stability vs input variance. I've had better luck treating mapping as a dynamic planning problem with retries and memory.</p>
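<p>Minimal sketch of what I mean by dynamic planning (call_llm is a placeholder; the schema maps field names to expected types): validate each attempt against the target schema and retry only the fields that failed.</p>
<pre><code>from typing import Callable

def map_record(record: dict,
               schema: dict[str, type],
               call_llm: Callable[[str], dict],
               max_retries: int = 3) -> dict:
    mapped: dict = {}
    missing = set(schema)
    for _ in range(max_retries):
        prompt = f"Map {record} onto fields {sorted(missing)} of {schema}"
        attempt = call_llm(prompt)
        for name in list(missing):
            # Accept a field only if it type-checks against the schema.
            if isinstance(attempt.get(name), schema[name]):
                mapped[name] = attempt[name]
                missing.discard(name)
        if not missing:
            break  # stable output despite high-variance input
    return mapped
</code></pre>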
]]></description><pubDate>Tue, 06 May 2025 16:55:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=43907226</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43907226</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43907226</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Semantic unit testing: test code without executing it"]]></title><description><![CDATA[
<p>Agreed. Catching mismatches between docs and implementation is still valuable; I just wouldn’t want people to rely on it as a safety net when the docs themselves might be inaccurate or incomplete. As a complement to traditional tests, though, it seems like a solid addition.</p>
]]></description><pubDate>Mon, 05 May 2025 16:35:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43896895</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43896895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43896895</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Semantic unit testing: test code without executing it"]]></title><description><![CDATA[
<p>A breakdown would be interesting. I can’t give you hard numbers, but in our case scaffolding was most of the work. Getting the model to act reliably meant building structured abstractions, retries, output validation, context tracking, etc. Once that’s in place you start saving time per task, but there’s a cost up front.</p>
]]></description><pubDate>Mon, 05 May 2025 16:32:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=43896857</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43896857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43896857</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "AI Meets WinDBG"]]></title><description><![CDATA[
<p>Curious how you're handling multi-step flows or follow-ups; that seems like where MCP could really shine, especially compared to brittle CLI scripts. We've seen similar wins with browser agents once structured actions and context are in place.</p>
]]></description><pubDate>Mon, 05 May 2025 11:49:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43894024</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43894024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43894024</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Semantic unit testing: test code without executing it"]]></title><description><![CDATA[
<p>LLM-based coding only really works when wrapped in structured prompts, constrained outputs, external checks, etc. The systems that work well aren’t 'let the LLM take the wheel' architectures; they’re carefully engineered pipelines. Most success stories are more about that scaffolding than the model itself.</p>
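<p>The scaffolding I mean is mostly mundane; a minimal sketch of an output contract around a raw model call (call_llm is a placeholder):</p>
<pre><code>import json
from typing import Callable

def constrained_call(call_llm: Callable[[str], str],
                     prompt: str,
                     required_keys: set[str],
                     retries: int = 3) -> dict:
    # Parse, check the contract, and re-prompt with the concrete error
    # instead of trusting free-form text.
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            out = json.loads(raw)
        except json.JSONDecodeError as e:
            prompt += f"\nReturn valid JSON only. Parse error: {e}"
            continue
        if required_keys <= out.keys():
            return out
        prompt += f"\nMissing keys: {sorted(required_keys - out.keys())}"
    raise ValueError("model never satisfied the output contract")
</code></pre>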
]]></description><pubDate>Mon, 05 May 2025 11:40:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43893936</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43893936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43893936</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Semantic unit testing: test code without executing it"]]></title><description><![CDATA[
<p>Treating docstrings as the spec and asking an LLM to flag mismatches feels promising in theory, but personally I'd be wary of overfitting to underspecified docs. Might be useful as a lint-like signal, but it's hard to see it replacing real tests just yet.</p>
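<p>The deterministic end of that lint-like signal doesn't even need a model; a minimal sketch (an LLM pass for semantic mismatches would sit on top of something like this):</p>
<pre><code>import inspect

def undocumented_params(fn) -> list[str]:
    # Cheap, deterministic signal: parameters the docstring never mentions.
    doc = (inspect.getdoc(fn) or "").lower()
    return [p for p in inspect.signature(fn).parameters
            if p != "self" and p.lower() not in doc]
</code></pre>
<p>Anything that survives a check like that is at least worth a human look, without pretending it's a test suite.</p>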
]]></description><pubDate>Mon, 05 May 2025 11:36:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43893902</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43893902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43893902</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "I'd rather read the prompt"]]></title><description><![CDATA[
<p>If you outsource that to a model, you often end up with words but little or no understanding. Writing forces you to clarify your ideas.
LLMs replace genuine thinking with surface-level prose, which might sound alright but often lacks depth behind it.</p>
]]></description><pubDate>Mon, 05 May 2025 11:21:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=43893786</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43893786</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43893786</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "Show HN: Blast – Fast, multi-threaded serving engine for web browsing AI agents"]]></title><description><![CDATA[
<p>Looks really cool.
Curious how you're handling action abstraction? 
We've found that semantically parsing the DOM to extract high-level intents, like "click 'Continue'" instead of "click div#xyz", helps reduce hallucination and makes agent planning more robust.</p>
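<p>A minimal sketch of the idea, assuming BeautifulSoup is available (a real pipeline also needs visibility checks and dedup handling):</p>
<pre><code>from bs4 import BeautifulSoup  # assumption: bs4 is installed

def extract_intents(html: str) -> list[str]:
    # Map raw DOM nodes to high-level intents the planner can use,
    # e.g. "click 'Continue'" rather than "click div#xyz".
    soup = BeautifulSoup(html, "html.parser")
    intents = []
    for node in soup.find_all(["a", "button"]):
        label = node.get_text(strip=True) or node.get("aria-label", "")
        if label:
            intents.append(f"click '{label}'")
    return intents
</code></pre>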
]]></description><pubDate>Fri, 02 May 2025 19:08:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=43873595</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43873595</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43873595</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "How to live an intellectually rich life"]]></title><description><![CDATA[
<p>In trying to live an intellectually rich life, there's a risk of adding too much noise. Chasing more input, more ideas, more learning.
Sometimes less really is more.
Depth often comes not from adding, but from subtracting. Clear away the noise, and what’s left tends to have 'meaning'.
Personally I prefer a deep life to a rich life, but maybe that's just semantics...</p>
]]></description><pubDate>Fri, 02 May 2025 16:44:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43872077</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43872077</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43872077</guid></item><item><title><![CDATA[New comment by lgiordano_notte in "The language brain matters more for programming than the math brain? (2020)"]]></title><description><![CDATA[
<p>Makes sense that verbal ability would line up more with success in CS, especially when math scores are already high across the board. A lot of programming leans on language-type skills: reading and understanding code, navigating docs, naming things clearly, writing maintainable logic, etc.<p>The field probably does itself a disservice by overemphasising math. That framing can push away people who might actually do really well, especially those strong in reasoning, abstraction, or communication. The linked study is a good reminder to rethink how we present programming, imo.</p>
]]></description><pubDate>Fri, 02 May 2025 16:32:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43871924</link><dc:creator>lgiordano_notte</dc:creator><comments>https://news.ycombinator.com/item?id=43871924</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43871924</guid></item></channel></rss>