<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: est31</title><link>https://news.ycombinator.com/user?id=est31</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 09:29:05 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=est31" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by est31 in "German men 18-45 need military permit for extended stays abroad"]]></title><description><![CDATA[
<p>Not a lawyer, but the German constitution, Article 12a, speaks of men above 18, not of citizens, or even residents of Germany.<p><a href="https://www.gesetze-im-internet.de/englisch_gg/englisch_gg.html#p0069" rel="nofollow">https://www.gesetze-im-internet.de/englisch_gg/englisch_gg.h...</a><p>So that article can in theory be used to conscript any man, citizen or not, living in Germany or not.<p>The Wehrpflichtgesetz, an ordinary statute that requires only a simple (50%) Bundestag majority to change, narrows this very broad constitutional power in Article 1 to men above 18 who hold German citizenship.<p><a href="https://www.gesetze-im-internet.de/wehrpflg/BJNR006510956.html#BJNR006510956BJNG000108310" rel="nofollow">https://www.gesetze-im-internet.de/wehrpflg/BJNR006510956.ht...</a><p>Article 3 narrows it even further, to men below 45 or 60, depending on the severity of the situation.<p>But yes, in theory it can be changed to include any non-German man, people aged 80, men who have lived in Germany for a while or who have never been to Germany at all, or just random men who happen to change flights at FRA.</p>
]]></description><pubDate>Sat, 04 Apr 2026 23:46:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47644716</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47644716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47644716</guid></item><item><title><![CDATA[New comment by est31 in "DRAM pricing is killing the hobbyist SBC market"]]></title><description><![CDATA[
<p>This one might last longer. The AI race is on, and the US is trying its best to make it as expensive as possible for China to participate in it. Every dollar China spends on GPUs they get at markup is one not spent on building navy ships.<p>If there is an escalation over Taiwan, that will mean the loss of most of the world's high-grade chip manufacturing capacity. TSMC is busy doing technology transfers into the US, but that is going to take time, those fabs won't have capacity for the whole world, and they still heavily depend on Taiwan-based engineers if something goes wrong, etc.<p>Just like with COVID, you don't know how long this shortage will last.</p>
]]></description><pubDate>Thu, 02 Apr 2026 01:48:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47609093</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47609093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47609093</guid></item><item><title><![CDATA[New comment by est31 in "How the AI Bubble Bursts"]]></title><description><![CDATA[
<p>It's a loss leader, but this is normal. The same happened with Uber, Airbnb, Amazon, etc.: use VC money to buy market share, and once you have it, you can milk it.<p>The question is more about the moats these companies have, and it seems to me that while their models are amazing technology, they don't really have a moat. The open/Chinese models continuously catch up to the American ones.</p>
]]></description><pubDate>Mon, 30 Mar 2026 13:13:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47573877</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47573877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47573877</guid></item><item><title><![CDATA[New comment by est31 in "Coding agents could make free software matter again"]]></title><description><![CDATA[
<p>> And how this end is closer with LLMs?<p>The blog post in this thread argues that now, thanks to LLMs, even average users have the ability to modify GPL'd code. The bigger advantage, though, is that one can use them to break open software monopolies in the first place.<p>A lot of such monopolies are based on proprietary formats.<p>If LLM swarms can build a browser (not from scratch) and a C compiler (from scratch), they can also build an LLVM backend for a bespoke architecture that only has a proprietary C compiler. They can also build replacements for Adobe software and PDF editors, debug and fix Linux driver issues, etc.</p>
]]></description><pubDate>Mon, 30 Mar 2026 13:04:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47573774</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47573774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47573774</guid></item><item><title><![CDATA[New comment by est31 in "Coding agents could make free software matter again"]]></title><description><![CDATA[
<p>If I look around in the FLOSS communities, I see a lot of skepticism towards LLMs. The main concerns are:<p>1. they were trained on FLOSS repositories without the consent of the authors, including GPL and AGPL repos<p>2. the best models are proprietary<p>3. folks making low-effort contribution attempts using AI (PRs, security reports, etc.)<p>I agree those are legitimate problems, but LLMs are the new reality; they are not going to go away. Much more powerful lobbies than the OSS ones (the big copyright holders in media) are losing fights against the LLM companies.<p>But while companies can use LLMs to build replacements for GPL-licensed code (code that is probably in those LLMs' training sets), the reverse can also be done: one can break monopolies open using LLMs, and build a great deal of open source software with them.<p>In the end, the GPL is only a means to an end.</p>
]]></description><pubDate>Mon, 30 Mar 2026 00:06:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47568839</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47568839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47568839</guid></item><item><title><![CDATA[New comment by est31 in "I wanted to build vertical SaaS for pest control, so I took a technician job"]]></title><description><![CDATA[
<p>The end game is a resource-based economy as all sorts of labor become cheap.<p>Think of Saudi Arabia, Iran, Putin's Russia, or Norway. I.e. a risk of highly nepotistic dictatorships, with the potential that it might end up well despite the odds (Norway).<p>Before, if you made a product that improved the lives of everyone, say you invented Google or Heinz ketchup, you could make a lot of money through that; you did a good deed and became rich at the same time. The masses would reward you for delivering the benefits of your invention to them by giving you a piece of their work output.<p>As their work becomes worth less and less, why focus on those humans? I am asking rhetorically, of course.<p>An economy that thrives on innovation enriches the innovators, making them powerful. A brute in power causes the innovators to leave or, in the worst case, mass-executes them outright (think of what Stalin did in Russia). With AI, you can have a brute in power, though, as an oil rig or datacenter can be protected by a bunch of machine guns.<p>An economy with AI everywhere will, after a short and very innovative period, just be about who controls which resource, i.e. water for a datacenter, production lines for robots, mining rights, operational control of robot fleets, etc.<p>The working 95% will probably experience a sharp decrease in purchasing power, making a lot of products unaffordable to them, so consumption-wise we'll have a further shift towards plutonomy. The owning top 10% will probably be affected by this major shift in consumption as well. E.g. a tower full of condos becomes worthless if the tenants can't pay rent because they got laid off, etc.<p>The need for robots and AI will further increase. Eventually most economic activity will revolve around those robots. 
It's a bit like the paperclip optimizer here: whether those robots protect gay luxury space communism from counterrevolutionaries or project the will of the Davos council of the Forbes 400, economically it will be quite similar.<p>There will still be human societies; humans will still talk to other humans. We won't all be exclusively conversing with LLMs, I doubt that. There will still be social mobility, but it will revolve around nepotism, lying, and various escalation steps of war.<p>We might end up in different scenarios depending on the country, but some countries like Germany might lose relevance, as most of their value lies in stuff that is going to be replaced by AI, i.e. they have few natural resources, or those have been depleted already.<p>We might also see companies that automate everything end to end, from mining to producing and running weaponized robot fleets. Shareholders of those companies will do great too, if the leadership of the companies respects minority shareholder rights, that is (why should they, though, when they can outgun any law enforcement?).<p>Do I like this future? I don't think so. We will probably have solved cancer, communicable diseases, and aging in the next 30 years if AI continues its successful trajectory, but I'm not sure it will be accessible to 8 billion humans.</p>
]]></description><pubDate>Wed, 25 Mar 2026 00:49:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47511737</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47511737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47511737</guid></item><item><title><![CDATA[New comment by est31 in "AI coding is gambling"]]></title><description><![CDATA[
<p>You have a lot of control over LLM quality. There are different models available, and even the different effort settings of those models give different outcomes.<p>E.g. look at the "SWE-Bench Pro (public)" heading on this page: <a href="https://openai.com/index/introducing-gpt-5-4/" rel="nofollow">https://openai.com/index/introducing-gpt-5-4/</a> , showing reasoning efforts from none to high.<p>Of course, they don't learn like humans, so you can't do the trick of hiring someone less senior but with great potential and then mentoring them. Instead it's more of an up-front price you have to pay. The top models at the highest settings obviously form a ceiling, though.</p>
]]></description><pubDate>Wed, 18 Mar 2026 19:08:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47430076</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47430076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47430076</guid></item><item><title><![CDATA[New comment by est31 in "Stop Sloppypasta"]]></title><description><![CDATA[
<p>AI etiquette is a great term. AI is useful in general, but some patterns of AI usage are annoying, especially when the other side spent 10 seconds on something and expects you to treat it seriously.<p>Currently it's a bit of a wild west, but eventually we'll need to figure out the correct set of rules for how to use AI.</p>
]]></description><pubDate>Mon, 16 Mar 2026 07:47:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47396136</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47396136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47396136</guid></item><item><title><![CDATA[New comment by est31 in "Launching the Claude Partner Network"]]></title><description><![CDATA[
<p>Yeah, I meant it in the context of the comment I was replying to; to be precise, in the context of the comment that one was replying to, i.e. "10 years of certified Claude Code experience required".<p>The technology is moving so fast that the tricks you learned a year ago might not be relevant any more.</p>
]]></description><pubDate>Sun, 15 Mar 2026 21:30:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47392161</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47392161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47392161</guid></item><item><title><![CDATA[New comment by est31 in "Launching the Claude Partner Network"]]></title><description><![CDATA[
<p>I think the older AI users are even held back, because they might be doing things that are not necessary any more: explaining basic things like "please don't bring in random dependencies, prefer the ones that are there already", or the classic "think really hard and make a plan", or trying to use a prestigious register of the language in an attempt to make it think harder.<p>Nowadays I just paste a test, build, or linter error message into the chat, and the clanker immediately knows what to do and where it originated, and looks into causes. Oftentimes I come back to the chat and see a working explanation together with a fix.<p>Before, I had to actually explain why I wanted it to change some implementation in some direction, otherwise it would refuse: "no, I won't do that because abc". Nowadays I can just pass the raw instruction "please move this into its own function", etc., and it follows.<p>So yeah, a lot of these skills become outdated very quickly, the technology is changing so fast, and one needs to constantly revisit whether what one had to do a couple of months earlier is still required, and whether the limits of the technology are still precisely there or further out.</p>
]]></description><pubDate>Sun, 15 Mar 2026 05:44:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384649</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47384649</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384649</guid></item><item><title><![CDATA[New comment by est31 in "RAM kits are now sold with one fake RAM stick alongside a real one"]]></title><description><![CDATA[
<p>I've been 8 years on this site, and I have 8 favorite comments. This comment just made it into a very exclusive club.</p>
]]></description><pubDate>Sat, 14 Mar 2026 16:00:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377992</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47377992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377992</guid></item><item><title><![CDATA[New comment by est31 in "Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy"]]></title><description><![CDATA[
<p>Yeah, it writing stuff that's way better than mine is not the case for me, at least in areas I'm familiar with. In areas I'm not familiar with, it's way better than what I could have produced.</p>
]]></description><pubDate>Thu, 12 Mar 2026 00:59:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47344867</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47344867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47344867</guid></item><item><title><![CDATA[New comment by est31 in "Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy"]]></title><description><![CDATA[
<p>Have you tried the latest models at their best settings?<p>I've been writing software for 20 years, Rust for 10. I don't consider myself a median coder, but quite above average.<p>For the last 2 years or so, I've been trying out changes with AI models every couple of months, and they were consistently disappointing. Sure, with edits and many prompts I could get something useful out of them, but often I spent the same amount of time or more than I would have spent coding manually.<p>So yes, while I love technology, I'd been an LLM skeptic for a long time, and for good reason: the models just hadn't been good. While many of my colleagues used AI, I didn't see the appeal. It would take more time and I would still have to think just as much, while it made so many mistakes everywhere and I had to constantly ask it to correct things.<p>Then, 5 months or so ago, this changed: the models actually figured it out. The February releases of the models sealed things for me.<p>The models still make mistakes, but their number and severity are lower, and the output fits the specific coding patterns in that file or area. It won't import a random library but use the one that was already imported. If I ask it not to do something, it follows (earlier iterations just ignored me, which was frustrating).<p>At least for the software development areas I'm touching (writing databases in Rust), LLMs turned into a genuinely useful tool, and I can now use the fundamental advantages that the technology offers: writing 500 lines of code in 10 minutes, reducing something that would have taken me two to three days to half a day (as of course I still need to review the output and fix mistakes/wrong choices the tool made).<p>Of course this doesn't mean that I am now 6x faster at all coding tasks, because sometimes I need to figure out the best design or such.<p>I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings, and not about the tab auto-completion or the quick-edit features of the IDEs, but the agentic feature where the IDE can actually spend some effort thinking about what I, the user, meant with my less specific prompt.</p>
]]></description><pubDate>Wed, 11 Mar 2026 02:20:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47331132</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47331132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47331132</guid></item><item><title><![CDATA[New comment by est31 in "Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy"]]></title><description><![CDATA[
<p>I still think the source code is the preferred form for modification, because it is what you point the AI at when you want it to make a change.<p>Sure, there might be md documents that you created and that the AI used to implement the software, but maybe those documents were themselves AI-written from prompts (due to how context works in LLMs, it's better for larger projects to first make an md document about them, even if an LLM is used for that in the first place).<p>As for proprietary software, the Chinese models are not far behind the cutting edge of the US models.</p>
]]></description><pubDate>Wed, 11 Mar 2026 01:20:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330815</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47330815</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330815</guid></item><item><title><![CDATA[New comment by est31 in "Debian decides not to decide on AI-generated contributions"]]></title><description><![CDATA[
<p>I think it's a complicated issue.<p>A lot of low quality AI contributions arrive via the free tiers of these AI models, the output of which is pretty crap. On the other hand, if you max out the model configs, i.e. get "the best money can buy", those models are actually quite useful and powerful.<p>OSS should not miss out on the power LLMs can unleash. I'm talking about the maxed-out versions of the newest models only, i.e. stuff like Claude 4.5+ and Gemini 3, so developments of the last 5 months.<p>But at the same time, maintainers should not have to review code written by a low quality model (and the high quality models, for now, are all closed, although I've heard good things about MiniMax 2.5, which I haven't tried).<p>Given how hard it is to tell which model made a specific output without doing an actual review, I think it would make most sense to have a rule restricting AI-assisted contributions to trusted contributors only, i.e. maintainers as a start, and maybe some trusted group of contributors who you know use the expensive but useful models, and not the cheap but crap models.</p>
]]></description><pubDate>Tue, 10 Mar 2026 15:37:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47324734</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47324734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47324734</guid></item><item><title><![CDATA[New comment by est31 in "AI doesn't replace white collar work"]]></title><description><![CDATA[
<p>That billion dollar figure is being thrown around for Steinberger's exit to OpenAI, but I couldn't find any reputable source claiming it. It might be a wrong number, idk.</p>
]]></description><pubDate>Sun, 08 Mar 2026 20:24:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47301033</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47301033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47301033</guid></item><item><title><![CDATA[New comment by est31 in "Google just gave Sundar Pichai a $692M pay package"]]></title><description><![CDATA[
<p>You can't train LLMs on proprietary data, at least not if you want to make that LLM as accessible as Gemini; otherwise random people could ask it for your home address.<p>So it matters less than one would think. Also, ChatGPT can already do 'internet search' as a tool, so it already has access to, say, Google Maps' POI database of SMBs.<p>And ChatGPT gets a lot of proprietary data of its own as well. People use it as a Google replacement.</p>
]]></description><pubDate>Sun, 08 Mar 2026 20:01:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300788</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47300788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300788</guid></item><item><title><![CDATA[New comment by est31 in "AI doesn't replace white collar work"]]></title><description><![CDATA[
<p>In my opinion, the Block layoffs were a test, to see a) how a software company manages with only half of its employees now that there are powerful LLMs, and b) how the remaining employees react to the imminent threat of being laid off as well.<p>If Block succeeds, we'll see more layoffs of that kind, probably even more extreme ones. You're not a top senior-level employee? Out. You don't single-handedly cause 30% of the AI spend on your 15-person team? Out.<p>People say that in five years there won't be seniors because junior hiring stopped... in five years the seniors won't be needed either. Already today we have single-person billion-dollar exits, and high schoolers making millions from food apps. This is thanks to LLMs.<p>The technology is there to replace most of the white collar work; it's just not applied enough yet. The economic system needs to adapt to labor no longer being such a big redistributor.</p>
]]></description><pubDate>Sun, 08 Mar 2026 19:56:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300734</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47300734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300734</guid></item><item><title><![CDATA[New comment by est31 in "The worst acquisition in history, again"]]></title><description><![CDATA[
<p>They also have their own global CDN, while Disney/HBO et al use various third party CDNs.</p>
]]></description><pubDate>Fri, 06 Mar 2026 22:50:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47282195</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47282195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47282195</guid></item><item><title><![CDATA[New comment by est31 in "Hardening Firefox with Anthropic's Red Team"]]></title><description><![CDATA[
<p>I suppose eventually we'll see something like Google's OSS-Fuzz for core open source projects, maybe replacing bug bounty programs a bit. Anthropic already hands out Claude access for free to OSS maintainers.<p>LLMs have made it harder to run bug bounty programs where anyone can submit stuff, as a lot of people flooded them with seemingly well-written but ultimately wrong reports.<p>On the other hand, the newest generation of these LLMs (in their top configuration) finally understands the problem domain well enough to identify legitimate issues.<p>I think a lot of the judging of LLMs happens on the free and cheaper tiers, and quality on those tiers is indeed bad. If you set up a bug bounty program, you'll necessarily get bad quality reports (as the cost of submission is usually zero).<p>On the other hand, if instead of a bug bounty program you have a "top tier LLM bug searching program", then the quality bar can be ensured, and maintainers will get high quality reports.<p>Maybe one can save bug bounty programs by requiring a fee to be paid, idk, or by using LLMs there, too.</p>
]]></description><pubDate>Fri, 06 Mar 2026 15:26:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47276103</link><dc:creator>est31</dc:creator><comments>https://news.ycombinator.com/item?id=47276103</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47276103</guid></item></channel></rss>