<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bakuninsbart</title><link>https://news.ycombinator.com/user?id=bakuninsbart</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 04:56:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bakuninsbart" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bakuninsbart in "AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'"]]></title><description><![CDATA[
<p>It is tough though. I'd like to think I learnt how to think analytically and critically, but thinking is hard, and oftentimes I catch myself outsourcing my thinking almost subconsciously. I'll read an article on HN and think "Let's go to the comment section and see what opinions are on offer", or one of my first instincts after encountering a problem is to google it, and now to ask an LLM.<p>Most of us are also old enough to have had a chance to develop taste in code and writing. Many of the younger generation lack the experience to distinguish good writing from LLM drivel.</p>
]]></description><pubDate>Thu, 21 Aug 2025 16:57:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44975134</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44975134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44975134</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Ask HN: With all the AI hype, how are software engineers feeling?"]]></title><description><![CDATA[
<p>I think you are making a couple of very good points but getting bogged down in the wrong framework of discussion. Let me rephrase what I think you are saying:<p>Once you are very comfortable in a domain, it is detrimental to have to wrangle a junior dev with low IQ, way too much confidence, but encyclopedic knowledge of everything, instead of just doing it yourself.<p>The dichotomy of Junior vs. Senior is a bit misleading here: every junior is uncomfortable in the domain they are working in, but a senior probably isn't comfortable in all domains either. For example, many people I know with 10+ years of SE experience aren't very good with databases and data engineering, which is becoming an increasingly large part of the job. For someone who has worked 10+ years on Java backends and is now attempting to write Python data pipelines, coding agents might be a useful tool to bridge that gap.<p>The other thing is creation vs. critique. I often have my code, writing and planning reviewed by Claude or Gemini, because once I have created something, I know it very well, and I can very quickly go through 20 points of criticism/recommendations/tips and pick out the relevant ones. And honestly, that has been super helpful. Used that way around, Claude has caught a number of bugs, taught me some new tricks and made me aware of some interesting tech.</p>
]]></description><pubDate>Mon, 11 Aug 2025 17:23:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44866866</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44866866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44866866</guid></item><item><title><![CDATA[New comment by bakuninsbart in "GPT-5"]]></title><description><![CDATA[
<p>I think one thing to look out for is "deliberately" slow models. We currently use basically all models as if we needed them in an instant loop, but many of these applications do not have to run that fast.<p>To tell a made-up anecdote: a colleague told me how his professor friend was running statistical models overnight because the code was extremely unoptimized and needed 6+ hours to compute. He helped streamline the code and took it down to 30 minutes, which meant the professor could run it before breakfast instead.<p>We are completely fine with giving a task to a junior dev for a couple of days to see what happens. We love the quick feedback of running Claude Max for a hundred bucks, but if we could run it for a buck overnight? That would be quite fine for me as well.</p>
]]></description><pubDate>Thu, 07 Aug 2025 19:46:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44829436</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44829436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44829436</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Genie 3: A new frontier for world models"]]></title><description><![CDATA[
<p>Why wouldn't it? I have yet to hear one convincing argument for how our brain isn't working as a function of the most probable next best action. When you look at how amoebas work, then at animals somewhere between them and us in intelligence, and then at us, it is a very similar kind of progression to the one we see with current LLMs: from almost no model of the world to a pretty solid one.</p>
]]></description><pubDate>Tue, 05 Aug 2025 17:19:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44801035</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44801035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44801035</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Genie 3: A new frontier for world models"]]></title><description><![CDATA[
<p>That's not even devil's advocacy: many other animals clearly have consciousness, at least unless we're solipsists. There have been many very dangerous precedents in medicine where people were declared "brain dead" only to wake up and remember.<p>Since consciousness is closely linked to being a moral patient, it is all the more important to err on the side of caution before denying qualia to other beings.</p>
]]></description><pubDate>Tue, 05 Aug 2025 17:12:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44800921</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44800921</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44800921</guid></item><item><title><![CDATA[New comment by bakuninsbart in "AI promised efficiency. Instead, it's making us work harder"]]></title><description><![CDATA[
<p>I've been experimenting with Claude, and feel like it works quite well if I micromanage it. I will ask it: "Ok, but why this way and not the simpler way?" And it will go "You are absolutely right" and implement the changes exactly how I want them. At least I think it does. Repeatedly, I've looked at a PR I created (and reviewed myself, as I'm not using it "on production") and found some pretty useless stuff mixed into an otherwise solid PR. These things are so easily missed.<p>That said, the models, or to be more precise, the tools surrounding them and the craft of interacting with them, are still improving at a pace where I now believe we will get to a point where "hand-crafted" code is the exception within a matter of years.</p>
]]></description><pubDate>Mon, 04 Aug 2025 18:02:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44789378</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44789378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44789378</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Persona vectors: Monitoring and controlling character traits in language models"]]></title><description><![CDATA[
<p>Regarding truth telling, there seems to be some evidence that LLMs at least sometimes "know" when they are lying:<p><a href="https://arxiv.org/abs/2310.06824" rel="nofollow">https://arxiv.org/abs/2310.06824</a></p>
]]></description><pubDate>Sun, 03 Aug 2025 19:21:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44778974</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44778974</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44778974</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Palantir gets $10B contract from U.S. Army"]]></title><description><![CDATA[
<p>The issue is that introducing hyperbole changes the meaning completely. Take the two statements:<p>1. I want peace.<p>2. a) Therefore I need to be strong enough to deter any attack.<p>2. b) Therefore I need to be so strong that all my enemies fear me.<p>2. a) is sound. Nobody attacks if they believe the cost is higher than the benefit. ("Believe" is doing heavy lifting here; most wars start when the two sides' beliefs about cost and benefit are misaligned.)<p>2. b) is incompatible with 1. Either you believe that a stronger party does not necessarily attack weaker parties, in which case peace could also be maintained without supremacy, or you believe supremacy leads to wars, in which case your own pursuit of supremacy cannot be in the name of peace.<p>Unless, of course, you're a race supremacist, who believes you're so much wiser and more moral than anyone else that only you can be trusted with unchecked power. An idiotic and immoral position to take.</p>
]]></description><pubDate>Fri, 01 Aug 2025 12:49:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44756021</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44756021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44756021</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Figma will IPO on July 31"]]></title><description><![CDATA[
<p>On the contrary, Figma's value proposition is increased by LLMs. Current coding assistants are like idiot-savant junior devs: they have relatively low reasoning capabilities, way too much courage, lack taste, and need to be micromanaged to be successful.<p>But they can be successful if you spell out the exact specifications. And what is Figma if not an exact specification of the design you want? Within a couple of years the frontend developer market might crash pretty hard.</p>
]]></description><pubDate>Fri, 01 Aug 2025 09:23:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44754607</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44754607</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44754607</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Programmers aren’t so humble anymore, maybe because nobody codes in Perl"]]></title><description><![CDATA[
<p>There were many daggers that made the Perl community bleed:<p>1. Enterprise development<p>Java et al. led to a generation of developers working further from the kernel and the shell. Professionalization of the field led to increased specialization, and most developers had less to do with the deployment and management of running software.<p>Tools also got much better, requiring less glue and shifting the glue layer to configs or platform-specific languages.<p>Later on, DevOps came for the sysadmins, and there's just much less space for Perl in the cloud.<p>2. The rise of Python<p>I would put this down mostly to universities. Perl is very expressive by design; in Python there's supposedly only "one right way to do it". Imagine you're a TA grading a hundred code submissions: in Python, everyone probably did it in one of three ways; in Perl, the possibilities are endless. Perl is a language for writing, not reading.<p>3. Cybersecurity became a thing<p>Again, this goes back to readability and testability. Security requirements started becoming the norm, and Perl was not designed with them in mind.<p>4. The Web was lost to Rails, PHP, then SPAs<p>I'm less clear on the why of that, but Perl just wasn't able to compete against newer web technologies.</p>
]]></description><pubDate>Fri, 01 Aug 2025 08:06:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44754148</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44754148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44754148</guid></item><item><title><![CDATA[New comment by bakuninsbart in "When we get Komooted"]]></title><description><![CDATA[
<p>Germany has a six-month probation period for new hires, during which both sides can terminate the contract with two weeks' notice. After that, the notice period is one month, rising to two months after 3 years and up to seven months after 20 years.</p>
]]></description><pubDate>Sun, 27 Jul 2025 13:31:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44701204</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44701204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44701204</guid></item><item><title><![CDATA[New comment by bakuninsbart in "What if AI made the world’s economic growth explode?"]]></title><description><![CDATA[
<p>> Granted, you need to have the political structure in place that allows the growth to benefit everyone.<p>Which is the scary part of the AI revolution. Devaluing labor always leads to increased inequality in the short-to-mid term until a new equilibrium is reached. But what if we have machines that can do most jobs for 10-20k a year? Suddenly there is a hard ceiling for everyone below a certain "skill level", where skill includes things like owning capital, going to the right college, and having the right parents.<p>In the past, when inequality became too extreme, (the threat of) violent uprisings usually led to reform; but with autonomous weapon systems, drones and droids, manpower becomes less of a concern. The result might be a permanent underclass.</p>
]]></description><pubDate>Sat, 26 Jul 2025 06:52:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44691968</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44691968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44691968</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Intel CEO Letter to Employees"]]></title><description><![CDATA[
<p>> Except flight simulators. They're great as long as they have realistic physics.<p>I'm quite fascinated by the huge overlap of flight enthusiasts and computer nerds. Any discussion on HN even tangentially involving flight will have at least one thread discussing details of aviation. Why planes, and not cars or coffee machines or urban planning?</p>
]]></description><pubDate>Fri, 25 Jul 2025 08:42:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44681035</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44681035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44681035</guid></item><item><title><![CDATA[New comment by bakuninsbart in "US AI Action Plan"]]></title><description><![CDATA[
<p>Could you provide a prompt where the popular LLMs provide false or biased output based on "wokeness"?</p>
]]></description><pubDate>Thu, 24 Jul 2025 06:18:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44667544</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44667544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44667544</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Supabase MCP can leak your entire SQL database"]]></title><description><![CDATA[
<p>Rewriting the cloud in Lisp.<p>On a more serious note, there should almost certainly be regulation regarding open weights. Either AI companies are responsible for the output of their LLMs or they at least have to give customers the tools to deal with problems themselves.<p>"Behavioral" approaches are the only stop-gap solution available at the moment because most commercial LLMs are black boxes. Even if you have the weights, it is still a super hard problem, but at least then there's a chance.</p>
]]></description><pubDate>Wed, 09 Jul 2025 07:54:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44507338</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44507338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44507338</guid></item><item><title><![CDATA[New comment by bakuninsbart in "A non-anthropomorphized view of LLMs"]]></title><description><![CDATA[
<p>> people are genuinely talking about them thinking and reasoning when they are doing nothing of that sort<p>With such strong wording, it should be rather easy to explain how our thinking differs from what LLMs do. The next step, showing that what LLMs do <i>precludes</i> any kind of sentience, is probably much harder.</p>
]]></description><pubDate>Mon, 07 Jul 2025 18:25:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44493260</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44493260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44493260</guid></item><item><title><![CDATA[New comment by bakuninsbart in "I don't think AGI is right around the corner"]]></title><description><![CDATA[
<p>Cantor talks about countable and uncountable infinities, but both computer chips and human brains are finite. The human brain has roughly 100 billion neurons; even if every pair of neurons shared an edge, and each edge could individually light up to signal a different state of mind, isn't that just `2^(100b choose 2)` possible states? An unimaginably large number, but still roughly as far away from infinity as 1.</p>
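The back-of-the-envelope count above can be sketched in a few lines (treating each pairwise edge as a binary on/off switch, which is an illustrative assumption, not neuroscience). The full count 2^edges is far too large to materialize, so the sketch works with its number of decimal digits instead:

```python
from math import comb, log10

n = 100_000_000_000      # ~100 billion neurons
edges = comb(n, 2)       # one edge per pair of neurons: ~5e21 edges

# Each edge on/off gives 2**edges states; count digits via logs:
# log10(2**edges) = edges * log10(2)
digits = edges * log10(2)

print(f"edges  ≈ {edges:.3e}")
print(f"states ≈ 10^{digits:.3e}")
```

A number with on the order of 10^21 digits — astronomically large, yet still finite, which is the point of the comment.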
]]></description><pubDate>Sun, 06 Jul 2025 22:24:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44484664</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44484664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44484664</guid></item><item><title><![CDATA[New comment by bakuninsbart in "American science to soon face its largest brain drain in history"]]></title><description><![CDATA[
<p>That is so narrow a definition of scientific research that it excludes many major contributions to our base of knowledge. The primary difference between engineering and science is intention: scientists want to understand how things work by using the scientific method; engineers want to make stuff that works, but this still often includes iterating over designs using empirical data.<p>If a team of engineers finds a cool new algorithm that makes computer vision easier, we learnt something new about the world in the process. On the flip side, there is plenty of research in fields you would consider science, e.g. physics, that does not use the scientific method at all, but instead deduces possibilities based on mathematical modelling.</p>
]]></description><pubDate>Fri, 04 Jul 2025 15:24:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44465334</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44465334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44465334</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>Weirdly enough, both can be true. I was tangentially involved in EA in the early days, and have some friends who were more involved. There was lots of interesting, really cool stuff going on, but always a latent insecurity paired with overconfidence and elitism, as is typical in young nerd circles.<p>When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, and hiring exceptional talent, at a time when there was more than enough money going around to give some to us mediocre people as well.<p>In the case of EA in particular, circlejerks lead to idiotic ideas even when paired with rationalist rhetoric: so they bought mansions for team building (how else are you getting exceptional talent?), praised crypto (because they were funding the best and brightest) and started caring a lot about shrimp welfare (no one else does).</p>
]]></description><pubDate>Thu, 19 Jun 2025 16:39:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44320241</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44320241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44320241</guid></item><item><title><![CDATA[New comment by bakuninsbart in "Israel launches strikes against Iran, Defense Minister says"]]></title><description><![CDATA[
<p>Israeli and American intelligence agree that Iran was not aware of the October 7th attack. Hamas did that by themselves. In hindsight we also know that Israel had thoroughly infiltrated the Iranian forces, so if Iran had known, Israel would have known in advance as well.</p>
]]></description><pubDate>Sat, 14 Jun 2025 11:39:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44275748</link><dc:creator>bakuninsbart</dc:creator><comments>https://news.ycombinator.com/item?id=44275748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44275748</guid></item></channel></rss>