<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: zkry</title><link>https://news.ycombinator.com/user?id=zkry</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 09:30:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=zkry" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by zkry in "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"]]></title><description><![CDATA[
<p>At least this Schadenfreude is better than the Schadenfreude AI boosters get when people are made redundant by AI. I can totally see some people getting warm fuzzies, scrolling TikTok, watching people cry over having lost not only their job, but their entire career.<p>I'm not even exaggerating; you can see these types of comments on social media.</p>
]]></description><pubDate>Mon, 16 Feb 2026 09:54:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47033067</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=47033067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47033067</guid></item><item><title><![CDATA[New comment by zkry in "Andrew Ng says bottleneck in AI startups isn't coding – it's product management"]]></title><description><![CDATA[
<p>I don't think this line of reasoning holds. The only things people should look at are peer-reviewed studies, ideally lots of them and with no conflict of interest. Who's getting productivity gains? What kinds of work are they doing? What doesn't work so well? All of these questions should be investigated by studies. People feeling productivity gains doesn't imply the gains exist.<p>Otherwise it sounds like "many people have had their lives changed by {insert philosophical/religious movement}, so if you're not finding it true you should look into what's wrong with you."</p>
]]></description><pubDate>Sat, 30 Aug 2025 16:07:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45075757</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=45075757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45075757</guid></item><item><title><![CDATA[New comment by zkry in "Do I not like Ruby anymore? (2024)"]]></title><description><![CDATA[
<p>And I have to say that the whole trope of "Emacs may be able to do anything, but you have to configure a lot to get it to work" has got to be pure exaggeration at this point with things like eglot. I had the most painless experience setting up LSP for Java (among many other languages).</p>
]]></description><pubDate>Tue, 26 Aug 2025 10:56:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45024850</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=45024850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45024850</guid></item><item><title><![CDATA[New comment by zkry in "Software Rot"]]></title><description><![CDATA[
<p>I was thinking the exact same thing. As long as you're not depending on any external packages, things are very stable. Like, if your package depends on adding advice to some other package's random internal function, then yeah, it could easily break.<p>It's a great feeling knowing any tool I write in Elisp will likely work for the rest of my life as is.</p>
]]></description><pubDate>Wed, 06 Aug 2025 08:37:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44809401</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44809401</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44809401</guid></item><item><title><![CDATA[New comment by zkry in "6 weeks of Claude Code"]]></title><description><![CDATA[
<p>On the other hand, though, automated refactorings like those in IntelliJ scale practically infinitely, are extremely low cost, and are guaranteed to never make any mistakes.<p>Not saying this is more useful per se, just that different approaches have their pros and cons.</p>
]]></description><pubDate>Sat, 02 Aug 2025 14:41:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44767988</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44767988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44767988</guid></item><item><title><![CDATA[New comment by zkry in "Jane Street's sneaky retention tactic"]]></title><description><![CDATA[
<p>From reading this, it sounds like a management problem more than anything else. For example, retention goals should be such that all of a company's experts (at anything, not just a language) don't evaporate overnight, and hiring goals should be such that experts are retrained and re-hired.<p>I think the analogy is also off a bit. It'd be more apt to say a good surgeon should be expected to use electrosurgical units from different manufacturers, which is a completely fair expectation.</p>
]]></description><pubDate>Sun, 29 Jun 2025 05:12:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44410520</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44410520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44410520</guid></item><item><title><![CDATA[New comment by zkry in "The ‘white-collar bloodbath’ is all part of the AI hype machine"]]></title><description><![CDATA[
<p>I find this on point. I've seen a lot of charts on the AI-hype side of things showing exponential growth of AI agent fleets being used for software development (starting in 2026, of course). Take this article for example: <a href="https://sourcegraph.com/blog/revenge-of-the-junior-developer" rel="nofollow">https://sourcegraph.com/blog/revenge-of-the-junior-developer</a><p>OK, so by 2027 we should have fleets of autonomous AI agents swarming around every bug report and solving it x times faster than a human. Cool, so I guess by 2028 buggy software will be a thing of the past (for those companies that fully adopt AI, of course). I'm so excited for a future where IT projects stop going over time and over budget and deliver more value than expected. Can you blame us for thinking this is too good to be true?</p>
]]></description><pubDate>Sat, 31 May 2025 14:33:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44144542</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44144542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44144542</guid></item><item><title><![CDATA[New comment by zkry in "Edamagit: Magit for VSCode"]]></title><description><![CDATA[
<p>Magit is truly a magnificent application, and it's telling how its ideas are ported to other editors.<p>Reference to the previously posted "You Can Choose Tools That Make You Happy":<p>> Emacs is a Gnostic cult. And you know what? That’s fine. In fact, it’s great. It makes you happy, what else is needed? You are allowed to use weird, obscure, inconvenient, obsolescent, undead things if it makes you happy.<p>Juxtaposing Emacs with the adjectives obsolescent and undead is sad to read. Emacs is constantly reinventing and readapting itself, and just as it takes and incorporates the best ideas from other tools, ideas from Emacs find their way to other environments.</p>
]]></description><pubDate>Thu, 29 May 2025 08:49:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44124192</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44124192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44124192</guid></item><item><title><![CDATA[New comment by zkry in "Ask HN: Anyone struggling to get value out of coding LLMs?"]]></title><description><![CDATA[
<p>Use cases like the ones you mentioned are truly amazing. It's a shame that the AI hype machine has left us thinking of these use cases as practically nothing, leaving us disappointed.<p>My belief is that true utility will make itself apparent and won't have to be forced. The usages of LLMs that provide immense utility have already spread across most of the industry.</p>
]]></description><pubDate>Mon, 26 May 2025 16:31:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44098961</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44098961</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44098961</guid></item><item><title><![CDATA[New comment by zkry in "At Amazon, some coders say their jobs have begun to resemble warehouse work"]]></title><description><![CDATA[
<p>I feel like managers are having a heyday with tools like Cursor offering a user-by-user breakdown of AI code generation stats. I feel this is only the beginning, and a whole new world of in-editor workplace monitoring will pop up.</p>
]]></description><pubDate>Mon, 26 May 2025 09:35:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44095629</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44095629</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44095629</guid></item><item><title><![CDATA[New comment by zkry in "At Amazon, some coders say their jobs have begun to resemble warehouse work"]]></title><description><![CDATA[
<p>Even before AI I've always had the perception that writing software felt intellectually more on the level of plumbing. AI just feels like having one of those fancy new tools that tradespersons may use.</p>
]]></description><pubDate>Mon, 26 May 2025 09:09:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44095473</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44095473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44095473</guid></item><item><title><![CDATA[New comment by zkry in "Getting AI to write good SQL"]]></title><description><![CDATA[
<p>I'm curious why there's this sentiment regarding advances in AI. High-level programming languages didn't in the least bit take away the value of the software profession, despite allowing a vastly larger number of people to write software.<p>The amount and complexity of software will expand to its very outer bounds, for which specialists will be required.</p>
]]></description><pubDate>Sat, 17 May 2025 11:24:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44013509</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=44013509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44013509</guid></item><item><title><![CDATA[New comment by zkry in "Ask HN: Cursor or Windsurf?"]]></title><description><![CDATA[
<p>Ironically, LLMs have made Emacs even more relevant. The model LLMs use (text) happens to match up with how Emacs represents everything (text in buffers). This opens up Emacs to becoming the agentic editor par excellence. Just imagine: some macro magic around a defcommand and voila, the agent can do exactly what a user can. If only such a project could have funding like Cursor does...</p>
]]></description><pubDate>Mon, 12 May 2025 09:14:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43961064</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=43961064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43961064</guid></item><item><title><![CDATA[New comment by zkry in "Many of the Pokemon playtest cards were likely printed in 2024"]]></title><description><![CDATA[
<p>Surely, though, limiting the government's positive freedom of ubiquitous surveillance, as in this example of printers, is something that would be resoundingly popular in a democratic society. This seems as clear-cut as limiting the freedom to dump toxic chemicals into water supplies.</p>
]]></description><pubDate>Fri, 31 Jan 2025 07:49:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=42885494</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=42885494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42885494</guid></item><item><title><![CDATA[New comment by zkry in "Byzantine-Sassanian War (602-628 CE): The Last Great War of Antiquity (2023)"]]></title><description><![CDATA[
<p>Wouldn't it be more correct to say that <i>Muslims</i> spread through conventional warfare and <i>Islam</i> spread through proselytization and incentivising conversion? I would imagine Muslim empires could expand without conversion (as they most definitely did in some areas) and Islam could spread without a political presence.<p>Like, I always thought that the Umayyad elites sometimes didn't even want people to convert, lest their privilege become diluted.</p>
]]></description><pubDate>Mon, 20 Jan 2025 15:34:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=42769690</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=42769690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42769690</guid></item><item><title><![CDATA[New comment by zkry in "Israel, Hamas reach ceasefire deal to end 15 months of war in Gaza"]]></title><description><![CDATA[
<p>> Everything in this part of the world is on a rinse and repeat cycle ever since the Assyrians and the Babylonians<p>That's an incredible statement, as if the rest of the world is somehow different. The only thing special about these regions is that they've had complex states for longer, so of course state-based warfare would go back farther.<p>On another level, there absolutely have been periods of stability in regions of the Middle East, for periods of time we would consider long.</p>
]]></description><pubDate>Thu, 16 Jan 2025 08:38:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42722913</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=42722913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42722913</guid></item><item><title><![CDATA[New comment by zkry in "How I program with LLMs"]]></title><description><![CDATA[
<p>Logically this makes sense: every model has a context size and complexity capacity beyond which it will no longer be able to function properly. Any usage of said model accelerates the approach to this limit. Once the limit is reached, the LLM is no longer as helpful as it was.<p>I work on full-blown legacy apps, and needless to say I don't even bother with LLMs when working on these most of the time.</p>
]]></description><pubDate>Tue, 07 Jan 2025 07:26:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42620189</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=42620189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42620189</guid></item><item><title><![CDATA[New comment by zkry in "A messy experiment that changed how I think about AI code analysis"]]></title><description><![CDATA[
<p>> Furthermore, the article is so bereft of detail and gushes so profusely about the success and virtues of their newly minted "senior level" AI that I can't help but wonder if they're selling something...<p>With all the money in the AI space these days, my prior probability that an article extolling the virtues of AI is actually trying to sell something is rather high.<p>I just want a few good unbiased academic studies on the effects of various AI systems on things like delivery time (like, are AI systems preventing IT projects from going over time on a fat-tailed distribution? Is it possible with AI to put an end to the chapter of software engineering projects going disastrously over time and over budget?)</p>
]]></description><pubDate>Sun, 05 Jan 2025 16:54:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=42603052</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=42603052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42603052</guid></item><item><title><![CDATA[New comment by zkry in "The biggest AI flops of 2024"]]></title><description><![CDATA[
<p>> In February, a Canadian small-claims tribunal upheld the customer’s legal complaint, despite the airline’s assertion that the chatbot was a “separate legal entity that is responsible for its own actions.”<p>Wait, companies are claiming that large language models are separate legal entities??</p>
]]></description><pubDate>Thu, 02 Jan 2025 10:53:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42573419</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=42573419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42573419</guid></item><item><title><![CDATA[New comment by zkry in "Things we learned about LLMs in 2024"]]></title><description><![CDATA[
<p>I get how I could be wrong on that front. I guess what I was trying to say is that there needs to be legible, predictable infrastructure for these AI systems to work well. I actually think that an LLM workflow in a constrained, well-understood environment would be amazingly good too.<p>I've been driving a lot in Istanbul lately, and I'm not holding my breath for autonomous vehicles any time soon.</p>
]]></description><pubDate>Thu, 02 Jan 2025 10:41:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=42573353</link><dc:creator>zkry</dc:creator><comments>https://news.ycombinator.com/item?id=42573353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42573353</guid></item></channel></rss>