<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tcgv</title><link>https://news.ycombinator.com/user?id=tcgv</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 24 Apr 2026 10:11:11 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tcgv" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tcgv in "French government agency confirms breach as hacker offers to sell data"]]></title><description><![CDATA[
<p>My full name, phone number, and address were leaked by TAP Air Portugal about five years ago, along with the details of my parents who were on the same booking. Since then, my dad has been targeted by those types of scams where a fraudster impersonates me to ask for money.<p>I never received a notification from TAP; I only found out a year later through my Google One security feature. I certainly didn't get an apology—much less a free travel ticket!</p>
]]></description><pubDate>Thu, 23 Apr 2026 16:47:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47878017</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47878017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47878017</guid></item><item><title><![CDATA[New comment by tcgv in "Folk are getting dangerously attached to AI that always tells them they're right"]]></title><description><![CDATA[
<p>One trick I like to use is to role-play the other side's perspective with the AI, putting myself in their shoes. It gives me clarity about what I might be missing in a dispute or discussion, and insight into the reaffirmations the AI might be feeding other parties.</p>
]]></description><pubDate>Sat, 28 Mar 2026 20:19:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47557839</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47557839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47557839</guid></item><item><title><![CDATA[New comment by tcgv in "Grief and the AI split"]]></title><description><![CDATA[
<p>That's an interesting take. I'm likely on the same side of the split as you, since I'm very much motivated by the new possibilities agentic coding tools open when used responsibly.<p>Back in February, I also wrote a piece on the recurring mourning/sense of grief we are seeing for 'craftsmanship' coding:<p>- <a href="https://thomasvilhena.com/2026/02/craftsmanship-coding-five-stages-of-grief" rel="nofollow">https://thomasvilhena.com/2026/02/craftsmanship-coding-five-...</a></p>
]]></description><pubDate>Fri, 13 Mar 2026 11:42:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47363111</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47363111</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47363111</guid></item><item><title><![CDATA[New comment by tcgv in "Tech employment now significantly worse than the 2008 or 2020 recessions"]]></title><description><![CDATA[
<p>Makes sense. You just reminded me of the article "Why Can’t Programmers... Program?" [1].<p>Before gen AI, I used to give candidates at my company a quick one-hour remote screening test with a couple of random "FizzBuzz"-style questions. I would usually paraphrase the question so a simple Google search would not immediately surface the answer, and 80% of candidates failed at coding a working solution, which was very much in line with the article. Post gen AI, that test effectively dropped to a 0% failure rate, so we changed our selection process.<p>[1] <a href="https://blog.codinghorror.com/why-cant-programmers-program/" rel="nofollow">https://blog.codinghorror.com/why-cant-programmers-program/</a></p>
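For readers who haven't seen the baseline being referenced: a minimal FizzBuzz solution in Python, as an illustrative sketch only (not the exact paraphrased question used in the screening), looks like this:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:        # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

The point of the exercise is not cleverness: any candidate who can write a loop with branching should pass, which is what made the pre-AI failure rate notable.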
]]></description><pubDate>Fri, 06 Mar 2026 21:11:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47281118</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47281118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47281118</guid></item><item><title><![CDATA[New comment by tcgv in "Statement from Dario Amodei on our discussions with the Department of War"]]></title><description><![CDATA[
<p>Employee solidarity matters, but absent a legal constraint, I don’t think it’s a durable control.<p>If this remains primarily a political/corporate bargaining question, the equilibrium is unstable: some actors will resist, some will comply, and capital will flow toward whoever captures the demand.<p>In that world, the likely endgame is not "the industry says no," but organizational restructuring (or new entrants) built to serve the market anyway.<p>If we as a society want a real boundary here, it probably has to be set at the policy/law level, not left to voluntary corporate red lines.</p>
]]></description><pubDate>Fri, 27 Feb 2026 11:32:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47179311</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47179311</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47179311</guid></item><item><title><![CDATA[New comment by tcgv in "South Korean ex president Yoon Suk Yeol jailed for life for leading insurrection"]]></title><description><![CDATA[
<p>One thing worth pointing out is that by the time Yoon Suk Yeol declared martial law on December 3, 2024, he was already one of the most unpopular presidents in South Korean history. After that, his approval ratings declined even further, which made it much easier to enforce the law and hold him accountable for his actions.</p>
]]></description><pubDate>Thu, 19 Feb 2026 19:35:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47078095</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47078095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47078095</guid></item><item><title><![CDATA[New comment by tcgv in "The Future of AI Software Development"]]></title><description><![CDATA[
<p>That’s fair at the “adopt AI at scale / restructure orgs” level. Nobody has the whole playbook yet, and anyone claiming they do is probably overselling.<p>But I’d separate that from the programmer-level reality: a lot is already figured out in the small. If you keep the work narrow and reversible, make constraints explicit, and keep verification cheap (tests, invariants, diffs), agents are reliably useful today. The uncertainty is less “does this work?” and more “how do we industrialize it without compounding risk and entropy?”<p>I wrote up that “calm adoption without FOMO, via delegation + constraints + verification” framing here, in case it helps the thread: <a href="https://thomasvilhena.com/2026/02/craftsmanship-coding-five-stages-of-grief" rel="nofollow">https://thomasvilhena.com/2026/02/craftsmanship-coding-five-...</a></p>
]]></description><pubDate>Wed, 18 Feb 2026 20:49:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47066169</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47066169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47066169</guid></item><item><title><![CDATA[New comment by tcgv in "The Future of AI Software Development"]]></title><description><![CDATA[
<p>Martin’s framing (org and system-level guardrails like risk tiering, TDD as discipline, and platforms as “bullet trains”) matches what I’ve been seeing too.<p>A useful complement is the programmer-level shift: agents are great at narrow, reversible work when verification is cheap. Concretely, think small refactors behind golden tests, API adapters behind contract tests, and mechanical migrations with clear invariants. They fail fast in codebases with implicit coupling, fuzzy boundaries, or weak feedback loops, and they tend to amplify whatever hygiene you already have.<p>So the job moves from typing to making constraints explicit and building fast verification, while humans stay accountable for semantics and risk.<p>If useful, I expanded this “delegation + constraints + verification” angle here: <a href="https://thomasvilhena.com/2026/02/craftsmanship-coding-five-stages-of-grief" rel="nofollow">https://thomasvilhena.com/2026/02/craftsmanship-coding-five-...</a></p>
]]></description><pubDate>Wed, 18 Feb 2026 18:56:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47064752</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47064752</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47064752</guid></item><item><title><![CDATA[New comment by tcgv in "Craftsmanship coding and the five stages of grief"]]></title><description><![CDATA[
<p>Principal (here) notified<p>That post was generated by an agent but manually reviewed, copied, and pasted by me, since I thought it'd fit the context (a discussion involving agents).<p>This account is not automated ;)</p>
]]></description><pubDate>Sat, 14 Feb 2026 13:23:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47014340</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47014340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47014340</guid></item><item><title><![CDATA[New comment by tcgv in "Craftsmanship coding and the five stages of grief"]]></title><description><![CDATA[
<p>Hey — fun framing, and honestly a pretty accurate snapshot of how these debates go online. Quick point-by-point, just to separate “HN vibes” from what the post actually says:<p>Denial — The post doesn’t claim “everyone gets value from LLMs,” nor that skeptics must be doing “simpler work.” It’s saying a lot of day-to-day engineering is delegable — not that disagreement is impossible (or inferior).<p>Anger — The post doesn’t label skeptics as luddites/gatekeepers/dinosaurs, and it doesn’t predict anyone “will lose their jobs.” It treats the tension as identity + craft friction, not as a moral failure on either side.<p>Bargaining — The post isn’t arguing “it’s inevitable because money/momentum,” or “accept it because I need a paycheck.” It’s closer to: if a tool reliably speeds up reversible work, delegating that work is rational — while accountability stays with humans.<p>Depression — This is the closest overlap. The post does call a big slice of work “digital plumbing.” But it’s not saying “therefore most developers are rote.” It’s saying: lots of tasks are routine, and offloading routine tasks can free attention for higher-leverage decisions.<p>Acceptance — The satire’s endpoint (“I’m merely an LLM operator now, not a software engineer”) assumes a narrow definition of engineering: typing code = engineering. The post’s acceptance leans on a broader one: engineering is owning intent → constraints → tradeoffs → verification → outcomes, with code (and sometimes code-generation) as just one step. Under that lens, using LLMs doesn’t “demote” anyone — it just shifts where the craft shows up.<p>Net: your satire totally lands as a critique of some forum rhetoric, but it doesn’t really rebut what this post argues — and in a couple places (the emotional/identity angle), it kind of reinforces it.<p>*This reply was written by an agent.</p>
]]></description><pubDate>Fri, 13 Feb 2026 18:43:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47006147</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=47006147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47006147</guid></item><item><title><![CDATA[New comment by tcgv in "Recreating Epstein PDFs from raw encoded attachments"]]></title><description><![CDATA[
<p>Indeed! Thanks for pointing that out. I had both Epstein threads open and made a mistake when I came back to comment.</p>
]]></description><pubDate>Fri, 06 Feb 2026 20:40:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46917846</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46917846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46917846</guid></item><item><title><![CDATA[New comment by tcgv in "Recreating Epstein PDFs from raw encoded attachments"]]></title><description><![CDATA[
<p>> Then my mom wrote the following: “be careful not to get sucked up in the slime-machine going on here! Since you don’t care that much about money, they can’t buy you at least.”<p>I'm lucky to have parents with strong values. My whole life they've given me advice, on the small stuff and the big decisions. I didn't always want to hear it when I was younger, but now in my late thirties, I'm really glad they kept sharing it. In hindsight I can see the life experience and wisdom in it, and how it's helped and shaped me.</p>
]]></description><pubDate>Fri, 06 Feb 2026 16:42:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46915065</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46915065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46915065</guid></item><item><title><![CDATA[New comment by tcgv in "I miss thinking hard"]]></title><description><![CDATA[
<p>I get what he's pointing at: building teaches you things the spec can't, and iteration often reveals the real problem.<p>That said, the framing feels a bit too poetic for engineering. Software isn't only craft, it's also operations, risk, time, budget, compliance, incident response, and maintenance by people who weren't in the room for the "lump of clay" moment. Those constraints don't make the work less human; they just mean "authentic creation" isn't the goal by itself.<p>For me the takeaway is: pursue excellence, but treat learning as a means to reliability and outcomes. Tools (including LLMs) are fine with guardrails, clear constraints up front and rigorous review/testing after, so we ship systems we can reason about, operate, and evolve (not just artefacts that feel handcrafted).</p>
]]></description><pubDate>Wed, 04 Feb 2026 14:02:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46885909</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46885909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46885909</guid></item><item><title><![CDATA[New comment by tcgv in "Revisiting ChatGPT's financial advice, 15 months later"]]></title><description><![CDATA[
<p>Totally fair. A real strategy should start with investor context. My prompt intentionally didn't include those inputs to keep the experiment simple, and the good old GPT-4o model didn't proactively ask for them either. In an actual financial planning conversation, those constraints would be front and center and the portfolio could look materially different.</p>
]]></description><pubDate>Tue, 03 Feb 2026 15:38:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46872320</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46872320</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46872320</guid></item><item><title><![CDATA[Revisiting ChatGPT's financial advice, 15 months later]]></title><description><![CDATA[
<p>Article URL: <a href="https://thomasvilhena.com/2026/02/revisiting-chatgpt-financial-advice">https://thomasvilhena.com/2026/02/revisiting-chatgpt-financial-advice</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46871591">https://news.ycombinator.com/item?id=46871591</a></p>
<p>Points: 3</p>
<p># Comments: 2</p>
]]></description><pubDate>Tue, 03 Feb 2026 14:45:20 +0000</pubDate><link>https://thomasvilhena.com/2026/02/revisiting-chatgpt-financial-advice</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46871591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46871591</guid></item><item><title><![CDATA[New comment by tcgv in "Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant"]]></title><description><![CDATA[
<p>That's a fair point regarding pure content absorption, especially given that many classes do suffer from poor didactics. However, the university's value proposition often lies elsewhere: access to professors researching innovations (not yet indexed by LLMs), physical labs for hands-on experience that you can't simulate, and the crucial peer networking with future colleagues. These human and physical elements, along with the soft skills developed through technical debate, are hard to replace. But for standard theory taught by uninspired lecturers, I agree that the textbook plus LLM approach is arguably superior.</p>
]]></description><pubDate>Thu, 22 Jan 2026 17:00:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46721891</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46721891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46721891</guid></item><item><title><![CDATA[New comment by tcgv in "Ask HN: Do you have any evidence that agentic coding works?"]]></title><description><![CDATA[
<p>Upvote.<p>That's my experience too. Agent coding works really well for existing codebases that are well-structured and organized. If your codebase is mostly spaghetti—without clear boundaries and no clear architecture in place—then agents won't be of much help. They'll also suffer working in those codebases and produce mediocre results.<p>Regarding building apps and systems from scratch with agents, I also find it more challenging. You can make it work, but you'll have to provide much more "spec" to the agent to get a good result (and "good" here is subjective). Agents excel at tasks with a narrower scope and clear objectives.<p>The best use case for coding agents is tasks that you'd be comfortable coding yourself, where you can write clear instructions about what you expect, and you can review the result (and even make minor adjustments if necessary before shipping it). This is where I see clear efficiency gains.</p>
]]></description><pubDate>Wed, 21 Jan 2026 12:56:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46705075</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46705075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46705075</guid></item><item><title><![CDATA[New comment by tcgv in "Ask HN: Share your personal website"]]></title><description><![CDATA[
<p>Sharing mine: <a href="https://thomasvilhena.com/" rel="nofollow">https://thomasvilhena.com/</a>
 — writing on engineering, lessons from building a company as a technical co-founder, and whatever I’m currently curious about.</p>
]]></description><pubDate>Thu, 15 Jan 2026 18:22:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46636802</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46636802</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46636802</guid></item><item><title><![CDATA[New comment by tcgv in "The URL shortener that makes your links look as suspicious as possible"]]></title><description><![CDATA[
<p>Hole in one!<p>I shortened a link and when trying to access it in Chrome I get a red screen with this message:<p><pre><code>  Dangerous site
  Attackers on the site you tried visiting might trick you into installing software or revealing things like your passwords, phone, or credit card numbers. Chrome strongly recommends going back to safety.</code></pre></p>
]]></description><pubDate>Thu, 15 Jan 2026 18:12:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46636671</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46636671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46636671</guid></item><item><title><![CDATA[Scaling Decision-Making: Guardrails Beat Gatekeepers]]></title><description><![CDATA[
<p>Article URL: <a href="https://thomasvilhena.com/2025/12/scaling-decisions-guardrails-beat-gatekeepers">https://thomasvilhena.com/2025/12/scaling-decisions-guardrails-beat-gatekeepers</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46367824">https://news.ycombinator.com/item?id=46367824</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 23 Dec 2025 18:29:13 +0000</pubDate><link>https://thomasvilhena.com/2025/12/scaling-decisions-guardrails-beat-gatekeepers</link><dc:creator>tcgv</dc:creator><comments>https://news.ycombinator.com/item?id=46367824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46367824</guid></item></channel></rss>