<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: captainkrtek</title><link>https://news.ycombinator.com/user?id=captainkrtek</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 12:09:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=captainkrtek" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by captainkrtek in "Corruption erodes social trust more in democracies than in autocracies"]]></title><description><![CDATA[
<p>Was talking about this with some colleagues who are from Ukraine, Russia, and other countries.<p>In the US, it seems corruption is only allowed at the top. If you tried to bribe your way out of a traffic ticket as a regular person, you'd get in big trouble; meanwhile, the president pardons wealthy fraudsters [1].<p>In countries like Russia, by contrast, everyone can get in on the action. A colleague of mine told me that if he were drafted for the war, he knew exactly how much to pay, and whom to pay off locally, to get his name off the list. It's equal-opportunity corruption.<p>[1] - <a href="https://techcrunch.com/2025/03/28/nikola-founder-trevor-milton-pardoned-by-trump/" rel="nofollow">https://techcrunch.com/2025/03/28/nikola-founder-trevor-milt...</a></p>
]]></description><pubDate>Mon, 16 Mar 2026 16:02:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47400793</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=47400793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47400793</guid></item><item><title><![CDATA[New comment by captainkrtek in "Ask HN: How do you review gen-AI created code?"]]></title><description><![CDATA[
<p>On my team I've been adding additional linters and analyzers (some I've written with Claude) that run in CI or locally to catch codified "bad patterns" before they enter our systems. This has been a nice backstop: I can't enforce what everyone's Claude prompts and local workflows are, but we can agree on which CI checks run before merging. Not a 100% solution, but it has been helpful so far.</p>
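A minimal sketch of the kind of CI backstop described above. The banned patterns and their messages here are hypothetical placeholders; a real list would be whatever the team agrees to enforce before merging.

```python
import re
from pathlib import Path

# Hypothetical "bad patterns" codified as a CI backstop; a real list
# would be whatever the team agrees on before merging.
BANNED_PATTERNS = {
    r"\bprint\(": "use the logging module instead of print",
    r"except\s*:": "bare except swallows errors; catch specific exceptions",
}

def find_violations(text, filename="<memory>"):
    """Return (filename, line_number, message) for each banned-pattern hit."""
    violations = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in BANNED_PATTERNS.items():
            if re.search(pattern, line):
                violations.append((filename, lineno, message))
    return violations

def check_files(paths):
    """Scan files and return a process exit code; nonzero fails the CI job."""
    violations = []
    for path in paths:
        violations += find_violations(Path(path).read_text(), path)
    for filename, lineno, message in violations:
        print(f"{filename}:{lineno}: {message}")
    return 1 if violations else 0
```

Wired into a CI step, a nonzero return from `check_files` blocks the merge regardless of how the code was produced.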
]]></description><pubDate>Thu, 12 Mar 2026 05:17:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47346787</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=47346787</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47346787</guid></item><item><title><![CDATA[New comment by captainkrtek in "Ask HN: How do you review gen-AI created code?"]]></title><description><![CDATA[
<p>I've been toying with this too.<p>I added a Claude skill (/gather-history) that consolidates the history of our session(s) for a given change into a decision log and an involvement summary (how much I wrote vs. the AI, how many refinement iterations, reviews, etc.) that I can then include in the PR. So far this has helped my colleagues understand how I arrived at the change and how thoroughly it was developed.</p>
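A rough sketch of the consolidation idea, not the actual /gather-history skill. It assumes a hypothetical JSONL transcript format (one JSON object per line with "role" and "text" keys) and a naive heuristic for spotting decision points; the real skill and its inputs are specific to the author's setup.

```python
import json

def gather_history(jsonl_text):
    """Summarize a session transcript into a PR-ready markdown block.

    Assumes a hypothetical JSONL format: one object per line with
    "role" ("user" or "assistant") and "text" fields.
    """
    user_turns, assistant_turns, decisions = 0, 0, []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        turn = json.loads(line)
        if turn["role"] == "user":
            user_turns += 1
            # Naive heuristic: prompts that mention tradeoffs are decision points.
            if "instead of" in turn["text"] or "tradeoff" in turn["text"]:
                decisions.append(turn["text"])
        else:
            assistant_turns += 1
    lines = ["## Session summary",
             f"- Prompt iterations: {user_turns}",
             f"- Assistant responses: {assistant_turns}",
             "## Decision log"]
    lines += [f"- {d}" for d in decisions] or ["- (none recorded)"]
    return "\n".join(lines)
```

The output block can be pasted into the PR description so reviewers see iteration counts and tradeoff discussions alongside the diff.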
]]></description><pubDate>Thu, 12 Mar 2026 05:15:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47346776</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=47346776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47346776</guid></item><item><title><![CDATA[Ask HN: How do you review gen-AI created code?]]></title><description><![CDATA[
<p>I've posed this in a couple of comments, but want to get a bigger thread going.<p>There's an opinion that using LLMs to write code is just a new high-level language we work in as engineers. But this leads to a disconnect come code-review time, in that the code under review is just an artifact of the process that created it. If we are now expressing ourselves via natural language (prompting, planning, writing, as the new "programming language"), but only putting the generated artifact (the actual code) up for review, how do we review it completely?<p>I struggle with what feels like a missing piece these days: lacking the context around how the change was produced, the plans, the prompting, to understand how an engineer came to this specific code change. Did they one-shot this? Did they spend hours prompting/iterating? Something in between?<p>The summary in the PR often says what the change is, but doesn't contain the full dialog or how we arrived at this specific change (tradeoffs, alternatives, etc.).<p>Given this, how do you review PRs in your organization? Any rules/automation/etc. you institute?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47330747">https://news.ycombinator.com/item?id=47330747</a></p>
<p>Points: 8</p>
<p># Comments: 9</p>
]]></description><pubDate>Wed, 11 Mar 2026 01:08:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330747</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=47330747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330747</guid></item><item><title><![CDATA[New comment by captainkrtek in "Levels of Agentic Engineering"]]></title><description><![CDATA[
<p>There seems to be so much value in planning, but in my organization there is no artifact of the plan aside from the code produced and whatever summary exists in the PR description. That makes it incredibly difficult to assess the change in isolation from its plan/process.<p>The idea that Claude/Cursor are the new high-level programming language for us to work in introduces the problem that we're not actually committing code in this "natural language"; we're committing the "compiled" output of our prompting. Which leaves us reviewing the "compiled code" without seeing the inputs (e.g., the plan, prompt history, rules, etc.)</p>
]]></description><pubDate>Wed, 11 Mar 2026 00:58:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330678</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=47330678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330678</guid></item><item><title><![CDATA[New comment by captainkrtek in "After outages, Amazon to make senior engineers sign off on AI-assisted changes"]]></title><description><![CDATA[
<p>One challenge with code review as an antidote to poor-quality gen-AI code is that we largely see only the code itself, not the process or inputs.<p>In the pre-gen-AI days, if an engineer put up a PR, it implied (somewhat) that they wrote their code, reviewed it implicitly as they wrote it, and made choices (i.e., why this is the best approach).<p>If Claude is just the new high-level programming language, in terms of prompting in natural language, the challenge is that we're not reviewing the natural language; we're reviewing the machine code without knowing what the inputs were. I'm not sure of a solution, but something along the lines of knowing the history of the prompting that ultimately led to the PR, the time/tokens involved, etc. could inform the "quality" or "effort" spent in producing it. A one-shotted feature and a multi-iteration feature may produce the same lines of code and general shape, but the latter is likely to be higher "quality" in terms of fewer defects.<p>Along the same lines, when I review a gen-AI-produced PR, it feels like I'm reading assembly and having to reverse-engineer how we got here. It may be code that runs and is perfectly fine, but I can't tell what the higher-level inputs were that produced it, or whether they were sufficient.</p>
]]></description><pubDate>Wed, 11 Mar 2026 00:55:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330659</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=47330659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330659</guid></item><item><title><![CDATA[New comment by captainkrtek in "Apple Vision Pro production reportedly axed, marketing cut by more than 95%"]]></title><description><![CDATA[
<p>Will there ever be interest in vision-based wearables?<p>Google Glass - dead<p>Apple Vision Pro - dead<p>FB/Meta x Ray-Ban - dead soon(?)<p>It seems they can’t get over the social hurdle of having a camera strapped to your face, and the effect that has on people around you. I think the tech is neat, but it's not socially accepted enough as a concept to be viable. My sister is big into TikTok and films all the time, and it personally makes me hesitant to be nearby, as I’m not comfortable being filmed constantly.</p>
]]></description><pubDate>Sat, 03 Jan 2026 02:12:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46472097</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46472097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46472097</guid></item><item><title><![CDATA[New comment by captainkrtek in "Ask HN: What tech job would let me get away with the least real work possible?"]]></title><description><![CDATA[
<p>50+ weeks? So, a year?<p>I've been in big tech for 12+ years now. The first handful of years are definitely a grind to earn your spot and get a couple of promos. After that, though, it can become quite a bit easier to coast if that's what you're looking for. People know you, figure you're probably valuable because you're "senior" or "staff" and still there, and will likely leave you alone. But yeah, as a newer engineer these days, it still takes that initial commitment to earn the privilege of coasting at a big tech company.</p>
]]></description><pubDate>Fri, 02 Jan 2026 22:57:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46470549</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46470549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46470549</guid></item><item><title><![CDATA[New comment by captainkrtek in "AI is forcing us to write good code"]]></title><description><![CDATA[
<p>My biggest problem with using an LLM for coding is that it removes engineers from understanding the true implementation of a system.<p>Over the years, I learned that a lot of one's value as an engineer can come from knowing how things <i>actually</i> work. I've been in many meetings with very senior engineers postulating back and forth about how something works, when quietly one engineer taps away on their laptop, then spins it around to say "no, this is the code right here; this is how it actually works".</p>
]]></description><pubDate>Tue, 30 Dec 2025 20:43:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46437764</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46437764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46437764</guid></item><item><title><![CDATA[New comment by captainkrtek in "2 in 3 Americans think AI will cause major harm to humans in the next 20 years [pdf] (2024)"]]></title><description><![CDATA[
<p>Agreed. I've seen a number of short-form news pieces and documentaries on the effects of datacenter development across different parts of America: pollution, noise, light, water impacts, energy costs, etc. There's not a lot to like, and they create very few jobs relative to their impact on the community.</p>
]]></description><pubDate>Sun, 28 Dec 2025 17:27:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46412681</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46412681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46412681</guid></item><item><title><![CDATA[New comment by captainkrtek in "The RAM shortage comes for us all"]]></title><description><![CDATA[
<p>I'm no economist, but if (when?) the AI bubble bursts and demand for memory and related components collapses at current prices, wouldn't prices recover?<p>Not trying to argue, just curious.</p>
]]></description><pubDate>Thu, 04 Dec 2025 19:47:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46151984</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46151984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46151984</guid></item><item><title><![CDATA[New comment by captainkrtek in "Zig quits GitHub, says Microsoft's AI obsession has ruined the service"]]></title><description><![CDATA[
<p>As a customer of GitHub Actions, anecdotally it feels like GitHub experiences issues frequently enough that this wouldn't be a problem.</p>
]]></description><pubDate>Wed, 03 Dec 2025 21:34:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46140469</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46140469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46140469</guid></item><item><title><![CDATA[New comment by captainkrtek in "Everyone in Seattle hates AI"]]></title><description><![CDATA[
<p>I've lived in Seattle my whole life, and have worked in tech for 12+ years now as a SWE.<p>I think the SEA and SF tech scenes are hard to differentiate perfectly in a HN comment. However, I think any "Seattle hates AI" sentiment has more to do with the incessant pushing of AI into all the tech spaces.<p>It's being claimed as the next major evolution of computing, while also being cited as the reason for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.<p>It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior, where you must be in favor of AI in your products or else you're considered a Luddite.<p>I think the confusing thing to me is that things that are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive about developments in tech, but the spending and the CEO groupthink around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on podcast ads for "is your workforce using Agentic AI to optimize ..."</p>
]]></description><pubDate>Wed, 03 Dec 2025 21:29:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46140401</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46140401</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46140401</guid></item><item><title><![CDATA[New comment by captainkrtek in "Kohler Can Access Pictures from "End-to-End Encrypted" Toilet Camera"]]></title><description><![CDATA[
<p>I think the obvious things are:<p>- Deviation in consistency/texture/color/etc.<p>- Obvious signs related to the above (e.g., diarrhea, dehydration, blood in stool).<p>Ultimately, though, you can get the same results by just looking down yourself and being curious when things look off...<p>tl;dr: this feels like literal internet-of-shit IoT stuff.</p>
]]></description><pubDate>Wed, 03 Dec 2025 02:46:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46129714</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46129714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46129714</guid></item><item><title><![CDATA[New comment by captainkrtek in "Flight disruption warning as Airbus requests modifications to 6k planes"]]></title><description><![CDATA[
<p>Do you think they're using the guise of "it's solar radiation" as cover to ship a software update that fixes a more problematic "bug"? And perhaps, tangentially, the update includes some changes to improve error-correcting code (e.g., related to detecting spurious bit flips)?</p>
]]></description><pubDate>Fri, 28 Nov 2025 23:11:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46083633</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46083633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46083633</guid></item><item><title><![CDATA[New comment by captainkrtek in "The Math of Why You Can't Focus at Work"]]></title><description><![CDATA[
<p>This has not been my experience, at least at the more remote-friendly places I've worked, though I can see it at companies with a different culture/pace/attitude.<p>In my most recent role, the entire company of ~200 was remote, so there was rarely an expectation of an immediate response. If something was truly urgent, you'd be paged.</p>
]]></description><pubDate>Fri, 28 Nov 2025 23:09:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46083615</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46083615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46083615</guid></item><item><title><![CDATA[New comment by captainkrtek in "The Math of Why You Can't Focus at Work"]]></title><description><![CDATA[
<p>This is excellent and aligns with my own experience.<p>During my day I try to minimize interruptions by batching them. I largely ignore Slack; as notifications come in, I glance and quickly determine whether something is really urgent or can wait. If it can wait, I punt all of those messages to a "remind me later" a few hours out and get back to my task. I think this keeps my "recovery time" small, as I'm not looking too closely at these messages. It's not perfect, but it definitely beats pausing my "real work" to fully dive into each notification or ask.</p>
]]></description><pubDate>Fri, 28 Nov 2025 17:34:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46080770</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46080770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46080770</guid></item><item><title><![CDATA[New comment by captainkrtek in "AI Adoption Rates Starting to Flatten Out"]]></title><description><![CDATA[
<p>No no, we just need to put even more money in.</p>
]]></description><pubDate>Fri, 28 Nov 2025 16:29:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46080079</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46080079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46080079</guid></item><item><title><![CDATA[New comment by captainkrtek in "OpenAI needs to raise at least $207B by 2030"]]></title><description><![CDATA[
<p>It's so sad how much money leaders will effortlessly pump into something like this while we still face the existential threats of climate change, incurable diseases, poverty, housing, and so on.<p>Meanwhile, ungodly amounts of money are being spent so some boomer can generate an AI video of a baby riding a puppy.</p>
]]></description><pubDate>Wed, 26 Nov 2025 16:43:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46059345</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46059345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46059345</guid></item><item><title><![CDATA[New comment by captainkrtek in "We stopped roadmap work for a week and fixed bugs"]]></title><description><![CDATA[
<p>A company I worked at also did this, though there were no limits. Some folks would spend the whole week on a larger refactor; for example, I unified all of our Redis usage onto a single modern library, replacing a mess of three libraries of various ages across our codebase. It was relatively easy but tedious, and required some new tests, etc.<p>Overall, I think this kind of thing is very good for the health of the software, and for morale, since it shows that actually addressing these things is a priority.</p>
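A sketch of the consolidation pattern described: route every caller through one thin wrapper so only a single Redis client library is imported anywhere. The class and method names here are illustrative, not the actual codebase's; the injected client is anything with the `get`/`set`/`delete` shape of `redis.Redis`.

```python
class Cache:
    """Single entry point replacing several ad-hoc Redis client usages."""

    def __init__(self, client):
        # `client` is any object with get/set/delete, e.g. redis.Redis();
        # injecting it keeps this wrapper testable without a live server.
        self._client = client

    def get(self, key, default=None):
        value = self._client.get(key)
        return default if value is None else value

    def set(self, key, value, ttl_seconds=None):
        # redis-py's `ex` argument sets an expiry in seconds (None = no TTL).
        self._client.set(key, value, ex=ttl_seconds)

    def delete(self, key):
        self._client.delete(key)
```

Once everything goes through the wrapper, swapping the underlying library is a one-file change instead of a codebase-wide hunt.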
]]></description><pubDate>Mon, 24 Nov 2025 05:06:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46030579</link><dc:creator>captainkrtek</dc:creator><comments>https://news.ycombinator.com/item?id=46030579</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46030579</guid></item></channel></rss>