<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gombosg</title><link>https://news.ycombinator.com/user?id=gombosg</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 18:49:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gombosg" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gombosg in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>I started my career as a machine designer (mechanical engineering), designing machines for FMCG factories.<p>It wasn't that different from SWE - mostly looking up catalogs, connecting pre-made pieces together with custom parts, and lots of testing of the final plan to make sure there are no collisions and every movement is properly constrained.<p>95% of the time no load or sizing calculations were necessary - we just oversized everything based on tacit knowledge (the greybeards reviewing the plans), since these machines were not mass-produced and choosing somewhat bigger parts was not expensive, given that they would operate and produce value 24/7 for years.<p>(I hope the analogy to software engineering is visible!)<p>What I'm saying is that the level of "engineering rigor" heavily depends on the field the engineers are operating in. Even certain SWE fields (healthcare, finance, aviation etc.) have more regulation and require more rigor than others.</p>
]]></description><pubDate>Mon, 27 Apr 2026 11:08:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47920073</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=47920073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47920073</guid></item><item><title><![CDATA[New comment by gombosg in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>Right at the top: "That distinction matters more than people think." That's basically a telltale sign of AI :)<p>Also, the entire framing around "judgment" and "taste" is what LLMs love to parrot about this topic.<p>There are fair arguments in the post, but I totally agree that "writing is thinking", and I also hold myself to "if you didn't bother to write it, why would I bother to read it?"</p>
]]></description><pubDate>Mon, 27 Apr 2026 10:57:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47920006</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=47920006</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47920006</guid></item><item><title><![CDATA[New comment by gombosg in "I was interviewed by an AI bot for a job"]]></title><description><![CDATA[
<p>I'm sorry for your experience, but loved the painting at the end... :)</p>
]]></description><pubDate>Wed, 11 Mar 2026 22:12:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47342871</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=47342871</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47342871</guid></item><item><title><![CDATA[New comment by gombosg in "We should revisit literate programming in the agent era"]]></title><description><![CDATA[
<p>I think you're right - ephemeral code would be the concept that you have (I'm hand-waving) "the spec", which specifies what the code should do, and the AI could regenerate the code at any time based on it.<p>I'm also baffled by this concept and fundamentally believe that the code _should be_ the ground truth (the spec), hence it should be human-readable. That's what "clean code" would be about: choosing tools and abstractions so that code is consumable for humans and easy to reason about, debug and extend.<p>If we let go of that and rely on LLMs entirely... I'm not sure where that would land, since it's the code that computers ultimately execute (and that the company is liable for), not the plain-language "specs".</p>
]]></description><pubDate>Mon, 09 Mar 2026 11:43:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47307774</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=47307774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47307774</guid></item><item><title><![CDATA[New comment by gombosg in "The happiest I've ever been"]]></title><description><![CDATA[
<p>Exactly - so basically every desk or office job means sitting next to a box?</p>
]]></description><pubDate>Sat, 28 Feb 2026 19:51:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47199448</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=47199448</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47199448</guid></item><item><title><![CDATA[New comment by gombosg in "Layoffs at Block"]]></title><description><![CDATA[
<p>I still don't get it.<p>If AI <i>really</i> improves efficiency and allows a company's employees to produce more and better products faster, thus increasing the company's competitiveness... then why does said company fire (half of!) its staff instead of, well, producing more and better products faster and increasing its competitiveness?<p>Am I naive, or is AI a lie when cited as a cause?<p>Why are we employees gaslit with the FOMO of "if you don't adopt AI to produce more, you'll be replaced by employees who do", while these executives don't feel "if you fire half of your employees for <i>whatever reason</i>, you'll be outcompeted by companies that... simply didn't"?</p>
]]></description><pubDate>Thu, 26 Feb 2026 23:16:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47173588</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=47173588</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47173588</guid></item><item><title><![CDATA[New comment by gombosg in "Writing code is cheap now"]]></title><description><![CDATA[
<p>Love your approach and that you actually have "before vs. after" numbers to back it up!<p>I personally also use AI in a similar way, strongly guiding it instead of vibe-coding. It reduces frustration because it surely "types" faster and better than me, including figuring out some syntax nuances.<p>But often I jump in and do some parts by myself. Either "starting" something (creating a directory, file, method etc.) to let the LLM fill in the "boring" parts, or "finishing" something by me filling in the "important" parts (like business logic etc.).<p>I think it's way easier to retain authorship and codebase understanding this way, and it's more fun as well (for me).<p>But in the industry right now there is a heavy push for "vibe coding".</p>
]]></description><pubDate>Wed, 25 Feb 2026 11:31:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47150172</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=47150172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47150172</guid></item><item><title><![CDATA[New comment by gombosg in "How to effectively write quality code with AI"]]></title><description><![CDATA[
<p>I think there are four fundamental issues here for us...<p>1. There are actually fewer software jobs out there, with huge layoffs still going on, so software engineering as a profession doesn't seem to profit from AI.<p>2. The remaining engineers are expected by their employers to ship more. Even if they can manage that using AI, there will be higher pressure and higher stress on them, which makes their work less fulfilling, more prone to burnout etc.<p>3. Tied to the previous point, this increases workism: measuring people (engineers) by some output benchmark alone, treating them more like factory workers than like expert, free-thinking individuals (often with higher education degrees). Which again degrades the profession as a whole.<p>4. Measuring developer productivity hadn't really been cracked before either, and even after AI there still isn't a lot of real data proving that these tools actually make us more productive, whatever that may mean. There is only anecdotal evidence: I did this in X time, when it would otherwise have taken me Y time - but at the same time it's well known that estimating software delivery timelines is next to impossible, meaning the estimate of "Y" is probably flawed.<p>So a lot of things are going on apart from "the world will surely need more software".</p>
]]></description><pubDate>Sat, 07 Feb 2026 19:32:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46926841</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=46926841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46926841</guid></item><item><title><![CDATA[New comment by gombosg in "How to effectively write quality code with AI"]]></title><description><![CDATA[
<p>I love using LLMs as rubber ducks too - what does this piece of code do? How would you do X with Y? etc.<p>The problem is that this spec-driven philosophy (or hype, or mirage...) would lead to code being entirely deprecated, at least according to its proponents. They say that using LLMs as advisors is already outdated, and that we should be doing fully agentic coding and just nudging the LLM etc., since otherwise we're losing out on 'productivity'.</p>
]]></description><pubDate>Sat, 07 Feb 2026 08:49:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46922369</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=46922369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46922369</guid></item><item><title><![CDATA[New comment by gombosg in "How to effectively write quality code with AI"]]></title><description><![CDATA[
<p>I can totally relate to your experience.<p>I started this career because I liked writing code. I no longer write a lot of code as a lead, but I use writing code to learn, to gain a deeper understanding of the problem domain etc. I'm not the type who wants to write specs for every method and service but rather explore and discover and draft and refactor by... well, coding. I'm amazed at creating and reading beautiful, stylish, working code that tells a story.<p>If that's taken away, I'm not sure how I could retain my interest in this profession. Maybe I'll need to find something else, but after almost a decade this will be a hard shift.</p>
]]></description><pubDate>Sat, 07 Feb 2026 08:41:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46922329</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=46922329</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46922329</guid></item><item><title><![CDATA[New comment by gombosg in "The Codex app illustrates the shift left of IDEs and coding GUIs"]]></title><description><![CDATA[
<p>I think this analogy to assembly is flawed.<p>Compilers predictably transform one kind of programming language code into CPU (or VM) instructions. Transpilers predictably transform one programming language into another.<p>We introduced instruction set architectures, compiler flags, reproducible builds and checksums exactly to make sure that whatever build artifact is produced is super predictable and dependable.<p><i>That reproducibility</i> is how we can trust our software, and that's why we don't need to care about assembly (or JVM etc.) specifics 99% of the time. (Heck, I'm not familiar with most of it.)<p>The same goes for libraries and frameworks. We can trust their abstractions because someone put years or decades into developing, testing and maintaining them, and the community has audited them if they are open source.<p>It takes a whole lot of hand-waving to get from this point to LLMs - which are stochastic by nature - transforming natural language instructions (even if you call them "specs", they're fundamentally still text prompts!) into dependable code "that you don't need to read", i.e. a black box.</p>
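<p>The reproducibility argument can be sketched in a few lines: a deterministic build maps identical inputs to byte-identical artifacts, so a single checksum comparison verifies the whole pipeline. This is a minimal illustration, and build_artifact is a hypothetical stand-in for a real toolchain, not any actual compiler.</p>

```python
import hashlib

def build_artifact(source: bytes) -> bytes:
    # Hypothetical stand-in for a deterministic toolchain:
    # identical input must yield a byte-identical artifact.
    return b"artifact:" + source[::-1]

def digest(artifact: bytes) -> str:
    # Checksums turn "is this build trustworthy?" into a string comparison.
    return hashlib.sha256(artifact).hexdigest()

build_a = digest(build_artifact(b"int main() { return 0; }"))
build_b = digest(build_artifact(b"int main() { return 0; }"))

# Reproducible: two independent builds of the same source agree.
assert build_a == build_b
print(build_a == build_b)  # True
```

<p>A stochastic transformation offers no such property: running it twice on the same "spec" gives no guarantee of equal output, so there is nothing to checksum against.</p>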
]]></description><pubDate>Wed, 04 Feb 2026 22:45:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46892963</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=46892963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46892963</guid></item><item><title><![CDATA[New comment by gombosg in "The Gorman Paradox: Where Are All the AI-Generated Apps?"]]></title><description><![CDATA[
<p>I think that even if AI won't be as good for tech as initially promised, it still has penetration potential in the wider economy.<p>OK, I don't have numbers to back this up, but I wouldn't be surprised if most of the investment and actual AI use were not in tech (software engineering) but in other use cases.</p>
]]></description><pubDate>Sun, 14 Dec 2025 14:31:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46263273</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=46263273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46263273</guid></item><item><title><![CDATA[New comment by gombosg in "Zed is now available on Windows"]]></title><description><![CDATA[
<p>I also backed out from using Zed a couple months ago, but since last week, Linux font rendering looks good to me both on full HD and HiDPI displays.</p>
]]></description><pubDate>Thu, 16 Oct 2025 11:55:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45604274</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=45604274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45604274</guid></item><item><title><![CDATA[New comment by gombosg in "Show HN: Engineering.fyi – Search across tech engineering blogs in one place"]]></title><description><![CDATA[
<p>I kind of miss the RSS days when you just had your own news/blog aggregator without the annoyance of Substack, Medium or anything else.</p>
]]></description><pubDate>Sun, 10 Aug 2025 21:48:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44858600</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=44858600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44858600</guid></item><item><title><![CDATA[New comment by gombosg in "Everything is Ghibli"]]></title><description><![CDATA[
<p>Agreed - computer music is to live music what, say, Adobe Illustrator is to drawing. Or a Wacom drawing tablet. But it's definitely not prompting an AI to draw for you.<p>Whether drawing (writing etc.) through AI counts as drawing (as making art) is a debate we'll have to resolve in the near future.</p>
]]></description><pubDate>Tue, 01 Apr 2025 11:42:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=43545579</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=43545579</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43545579</guid></item><item><title><![CDATA[New comment by gombosg in "Tim O'Reilly – The End of Programming as We Know It"]]></title><description><![CDATA[
<p>Who am I to debate Tim O'Reilly?<p>But this made my mind explode:<p>> So yes, let's be bold and assume that AI codevelopers make programmers ten times as productive. (Your mileage may vary, depending on how eager your developers are to learn new skills.)<p>Has anyone ever seen this hypothetical 10x AI developer? Why do we always fall back on such hand-wavy arguments when talking about the efficiency of AI-supported software engineering?<p>Here's what I think the flaw is in all the AI hype's arguments, including the one in this article (I hope Tim O'Reilly can withstand this small amount of debate).<p>Currently, LLMs are stochastic parrots; they don't create new levels of abstraction, i.e. creatively and responsibly package ideas into some higher-level form that can be reused.<p>All the examples in the article did offer a higher level of abstraction: assembly, high-level programming languages, libraries and frameworks like React, database systems etc.<p>AIs don't offer abstractions. They are not creative, and they don't have "better ideas" than what their training data contains. They don't take responsibility for their work.<p>We engineers at our company have all tried, and are using, some AI tools, but they don't work nearly as well as management would like to think. They make us 10%, maybe in the best case 20%, more efficient - but not 10x.</p>
]]></description><pubDate>Sun, 09 Feb 2025 10:23:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=42989772</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=42989772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42989772</guid></item><item><title><![CDATA[New comment by gombosg in "Epic Allows Internet Archive to Distribute Unreal and Unreal Tournament Forever"]]></title><description><![CDATA[
<p>Let's just call mutators 'mixins' :)</p>
]]></description><pubDate>Wed, 20 Nov 2024 10:10:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=42192418</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=42192418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42192418</guid></item><item><title><![CDATA[New comment by gombosg in "Liero – Sling'n'shoot Worms Game"]]></title><description><![CDATA[
<p>So many good memories from high school! Gaming in the computer lab was banned in theory and the teacher always tried to delete any games found on these machines. So we always kept about a dozen 'hidden' copies on each machine.</p>
]]></description><pubDate>Thu, 28 Dec 2023 22:03:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=38799291</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=38799291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38799291</guid></item><item><title><![CDATA[New comment by gombosg in "The big TDD misunderstanding (2022)"]]></title><description><![CDATA[
<p>I think that unit tests are super valuable because, when used properly, they serve as micro-specifications for each component involved.<p>These would be super hard to backfill later, because usually only the developer who implements them knows everything about the units (services, methods, classes etc.) in question.<p>With a strongly typed language, a suite of fast unit tests can already be at feature parity with a much slower integration test, because even with dependencies mocked out, they essentially cover the whole call chain.<p>They can offer even more, because unit tests are supposed to cover edge cases, all error cases, wrong/malformed/null inputs etc. Using integration tests alone, as the internal call chain grows, it would take an exponentially larger number of integration tests to cover all cases. (E.g. if a call chain contains 3 services with 3 outcomes each, it could theoretically take up to 27 integration test cases to cover them all.)<p>Also, ballooning unit test sizes, or resorting to unit testing private methods, give the developer feedback that the service is probably not "single responsibility" enough, providing an incentive to split and refactor it. This leads to a more maintainable service architecture, which integration tests don't help with.<p>(Of course, let's not forget that this kind of unit testing is probably only reasonable on the backend. On the frontend, component tests from a functional/user perspective probably bring better results - hence the popularity of frameworks like Storybook and Testing Library. I consider these integration rather than unit tests.)</p>
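<p>The combinatorial point can be sketched in a few lines (the service count and outcome names are hypothetical): unit-test coverage grows linearly with the number of services, while covering every end-to-end outcome combination grows exponentially.</p>

```python
from itertools import product

# Hypothetical: 3 services in a call chain, each with 3 possible outcomes.
SERVICES = 3
OUTCOMES = ["ok", "empty", "error"]

# Unit tests exercise each service in isolation: linear growth.
unit_cases = SERVICES * len(OUTCOMES)          # 3 * 3 = 9

# Integration tests through the whole chain must cover every
# combination of outcomes: exponential growth.
integration_cases = len(OUTCOMES) ** SERVICES  # 3^3 = 27

# Enumerating the end-to-end paths makes the explosion concrete.
all_paths = list(product(OUTCOMES, repeat=SERVICES))

print(unit_cases)         # 9
print(integration_cases)  # 27
print(len(all_paths))     # 27
```

<p>With 5 services of 3 outcomes each, the same arithmetic gives 15 unit cases versus 243 integration cases, which is why pushing edge-case coverage down to the unit level pays off.</p>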
]]></description><pubDate>Sun, 19 Nov 2023 12:41:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=38332138</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=38332138</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38332138</guid></item><item><title><![CDATA[New comment by gombosg in "Everything that uses configuration files should report where they're located"]]></title><description><![CDATA[
<p>Yes, this is why we love functional programming! "What happened along the way" is simply the call stack, as long as no field mutation is involved.<p>The exception, of course, is async/non-blocking calls, since tracing a call across different threads or promises may not always be possible.</p>
]]></description><pubDate>Sun, 25 Jun 2023 15:58:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=36469610</link><dc:creator>gombosg</dc:creator><comments>https://news.ycombinator.com/item?id=36469610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36469610</guid></item></channel></rss>