<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jascha_eng</title><link>https://news.ycombinator.com/user?id=jascha_eng</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 08:41:57 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jascha_eng" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jascha_eng in "Show HN: Postgres extension for BM25 relevance-ranked full-text search"]]></title><description><![CDATA[
<p>FWIW TJ is not your average vibe coder imo: <a href="https://www.linkedin.com/in/todd-j-green/" rel="nofollow">https://www.linkedin.com/in/todd-j-green/</a><p>In September he did burn through $3,000 in API credits, but I think that was before we finally bought Max plans for everyone who wanted one.</p>
]]></description><pubDate>Tue, 31 Mar 2026 19:36:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47592350</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47592350</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47592350</guid></item><item><title><![CDATA[New comment by jascha_eng in "Claude Code full source code leaked on NPM"]]></title><description><![CDATA[
<p>Doesn't look as bad as I expected tbh.
Sure, some stuff could be better, but I've seen much shittier vibe coded projects (including my own). I'd be more interested in their workflows and testing pipeline though. They ship pretty often, but Boris still says he has 10+ PRs a day. I'd be really curious what triggers a release, since it doesn't seem like every PR is released. I'm also curious how large their PRs really are.<p>There is a big difference between:<p>> Build plugins<p>and:<p>> Add 3px padding in line 5<p>if you claim "No code is written by humans anymore"</p>
]]></description><pubDate>Tue, 31 Mar 2026 10:10:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585093</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47585093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585093</guid></item><item><title><![CDATA[New comment by jascha_eng in "Show HN: Klaus – OpenClaw on a VM, batteries included"]]></title><description><![CDATA[
<p>Yes, and even now, if you tell the LLM any private information inside the sandbox, it can leak that if it gets misdirected/prompt injected.<p>So there isn't really a way to avoid this trade-off: you can either have a useless agent with no info and no access, or a useful agent that is incredibly risky to use because it might go rogue at any moment.<p>Sure, you can choose roughly where on that scale you want to be, but any usefulness inherently means risk if you run LLMs async without supervision.<p>The only absolutely safe way to give access and info to an agent is with manual approvals for everything it does. Which gives you review fatigue in minutes.</p>
]]></description><pubDate>Wed, 11 Mar 2026 19:15:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47339904</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47339904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47339904</guid></item><item><title><![CDATA[New comment by jascha_eng in "We might all be AI engineers now"]]></title><description><![CDATA[
<p>FWIW I reported your post to the mods because it reads completely AI generated to me. My judgement was that it might have been slightly edited but is largely verbatim LLM output.<p>Some tells you might want to look at in your writing, if you truly did write it yourself without any LLM input, are these contrarian/pivoting statements. Your post is full of them, and they are imo the most classic LLM writing tell atm. These are mostly variants of the "It's not X but Y" theme:<p>- "Not whether they've adopted every tool, but whether they're curious"<p>- "I still drive the intuition. The agents just execute at a speed I never could alone."<p>- "The model doesn't save you from bad decisions. It just helps you make them faster."<p>- "That foundation isn't decoration. It's the reason the AI is useful to me in the first place."<p>- "That's not prompting. That's engineering"<p>It is also telling that the reader basically can't take a breather: most sentences try to emphasize harder than the last one. There is no fluff, though, no getting sidetracked. It reads unnaturally; humans do not usually think like this.</p>
]]></description><pubDate>Fri, 06 Mar 2026 19:04:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47279548</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47279548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47279548</guid></item><item><title><![CDATA[New comment by jascha_eng in "GPT-5.4"]]></title><description><![CDATA[
<p>When did they stop putting competitor models in the comparison table, btw?
And yeah, the benchmark improvements are meh. Context window size and the lack of real memory are still an issue.</p>
]]></description><pubDate>Thu, 05 Mar 2026 19:19:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47266004</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47266004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47266004</guid></item><item><title><![CDATA[New comment by jascha_eng in "Agentic Engineering Patterns"]]></title><description><![CDATA[
<p>It definitely feels like everyone is trying to sell you something that is supposed to help you build rather than actually building useful stuff.<p>Which is oddly close to how investment advice is given. If these techniques work so well, why give them up for free?</p>
]]></description><pubDate>Wed, 04 Mar 2026 09:14:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47244981</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47244981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47244981</guid></item><item><title><![CDATA[New comment by jascha_eng in "Don't make me talk to your chatbot"]]></title><description><![CDATA[
<p>This is not really what the article is about</p>
]]></description><pubDate>Tue, 03 Mar 2026 23:48:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47240873</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47240873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47240873</guid></item><item><title><![CDATA[New comment by jascha_eng in "Why No AI Games?"]]></title><description><![CDATA[
<p>Yeah, there is a Skyrim mod that lets you talk to any NPC and basically queries an LLM behind the scenes, even with a screenshot of the scene so it can react to your clothing etc. If you insult an NPC, the LLM understands it dynamically and makes the NPC attack.<p>I mean, that's not a new game concept, but I definitely think it levels up the experience.</p>
]]></description><pubDate>Tue, 03 Mar 2026 17:01:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47235315</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47235315</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47235315</guid></item><item><title><![CDATA[New comment by jascha_eng in "The Xkcd thing, now interactive"]]></title><description><![CDATA[
<p>This is oddly fun to play with. Has that Angry Birds vibe.</p>
]]></description><pubDate>Tue, 03 Mar 2026 13:11:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47231779</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47231779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47231779</guid></item><item><title><![CDATA[New comment by jascha_eng in "Claude Code LSP"]]></title><description><![CDATA[
<p>It's not as secret as they make it sound. Documented here:
<a href="https://code.claude.com/docs/en/discover-plugins#code-intelligence" rel="nofollow">https://code.claude.com/docs/en/discover-plugins#code-intell...</a><p>Also, the post is definitely partially AI written, but still useful I suppose.</p>
]]></description><pubDate>Mon, 02 Mar 2026 12:42:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47217291</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47217291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47217291</guid></item><item><title><![CDATA[New comment by jascha_eng in "AI Made Writing Code Easier. It Made Being an Engineer Harder"]]></title><description><![CDATA[
<p>Because it is an LLM account, or at least someone responding by putting things through an LLM first, I'm pretty sure. I already reported it earlier today; somehow it's not banned. I guess HN is a bit dead, considering how many people are upvoting this slop.</p>
]]></description><pubDate>Sun, 01 Mar 2026 18:32:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47209364</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47209364</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209364</guid></item><item><title><![CDATA[New comment by jascha_eng in "Switch to Claude without starting over"]]></title><description><![CDATA[
<p>Memory in general chat apps is actually more harmful than helpful imo.
It biases the LLM responses toward your background, which has the same effect as filter bubbles. You end up getting your own thoughts spit back at you.<p>Of course, sometimes this is useful, if you only use your chatbot to ask personal things like: "What should I eat today?".<p>But if you use it for anything else, you're much better off having full control over the prompt. I can always say: "Hey btw I am German and heavily anti-surveillance, what should I know about the recent Anthropic DoW situation?", but with memory I lose the option of leaving out that first part.</p>
]]></description><pubDate>Sun, 01 Mar 2026 08:51:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47204944</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47204944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47204944</guid></item><item><title><![CDATA[New comment by jascha_eng in "What Claude Code chooses"]]></title><description><![CDATA[
<p>The fact that this comment had 10 replies and full comment chains that didn't notice it is LLM generated tells me the community is not aware enough. It was also upvoted a lot.<p>I think there is significant value in making people second-guess content and look at it critically, especially in a time where it is so easy to fake expertise. We all need to train that skill anyway these days for all online interactions.<p>10 years ago it was clickbait titles we needed to learn to ignore; today it is LLM generated content. We will get there, but by not calling it out publicly we are making it easier for adversaries to fool everyone.<p>And yes, I don't want to falsely accuse anyone of LLM slop either, but they can defend themselves, and making mistakes is part of the learning process for all of us. Writers and commenters will learn how to not sound like an LLM, and we will attune more finely to the nuance between polished human writing and AI.</p>
]]></description><pubDate>Sun, 01 Mar 2026 08:44:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47204905</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47204905</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47204905</guid></item><item><title><![CDATA[New comment by jascha_eng in "We do not think Anthropic should be designated as a supply chain risk"]]></title><description><![CDATA[
<p>If the writing itself is not enough for you, read the other comments they posted: something like 6 or 7 on topic within 10 minutes. No one reads the content that fast.</p>
]]></description><pubDate>Sun, 01 Mar 2026 08:35:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47204856</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47204856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47204856</guid></item><item><title><![CDATA[New comment by jascha_eng in "We do not think Anthropic should be designated as a supply chain risk"]]></title><description><![CDATA[
<p>This is an LLM bot. Careful what you upvote, folks, especially with new accounts.</p>
]]></description><pubDate>Sun, 01 Mar 2026 07:32:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47204539</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47204539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47204539</guid></item><item><title><![CDATA[New comment by jascha_eng in "Addressing Antigravity Bans and Reinstating Access"]]></title><description><![CDATA[
<p>I still kinda wish the subscriptions would just let you use the tokens however you wish.
I get that they rely on people not using all of their quota. But e.g. with OpenCode it doesn't really matter if I use Antigravity or gemini-cli; the usage should be about the same.<p>What they are actually trying to force you to do is pay for tokens you don't use in their applications, to increase their revenue and/or give their in-house tools an "unfair" advantage. But this is bad for the consumer because it means there is less competition between coding agents, and unless I'm willing to pay per token I have to take one of the model labs' agents.<p>Anticompetitive behaviour imo; they could just ban reselling tokens or something like that instead of locking your subscription in like this.</p>
]]></description><pubDate>Sat, 28 Feb 2026 14:29:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47195796</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47195796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47195796</guid></item><item><title><![CDATA[New comment by jascha_eng in "What Claude Code chooses"]]></title><description><![CDATA[
<p>Okay, fair about the mentions, but I don't think email is a good process:<p>1. It puts more effort on me as a user to report the spam via email, because I have to open my email, compose one by hand, and add the reasoning. The offending user, in comparison, probably spams automatically. Can't we have a button at least?<p>2. It doesn't make the community aware of the ongoing issue. Other community members could be primed that they currently need to read comments more critically. At the moment that seems like the only detection that somewhat works, but if I silently send an email instead of commenting here, it doesn't inform anyone else of my suspicion.</p>
]]></description><pubDate>Fri, 27 Feb 2026 12:41:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47179853</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47179853</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47179853</guid></item><item><title><![CDATA[New comment by jascha_eng in "What Claude Code chooses"]]></title><description><![CDATA[
<p>Probably referring to superpowers or gsd.
But imo these ask way too much stuff and are just annoying. They're useful for real vibe coders, though, who don't have any idea what they are doing. It will ask you "Should I handle rate limiting for the Slack API?" before you have written a single line of code.</p>
]]></description><pubDate>Fri, 27 Feb 2026 10:49:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47179015</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47179015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47179015</guid></item><item><title><![CDATA[New comment by jascha_eng in "What Claude Code chooses"]]></title><description><![CDATA[
<p>Note I might be wrong on this one, but it's just extremely annoying that I even have to consider whether I am being manipulated by an AI while reading HN comments.<p>If I want to read AI stuff I go to Clawdbook or OpenAI's Sora app.</p>
]]></description><pubDate>Fri, 27 Feb 2026 08:52:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47178238</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47178238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47178238</guid></item><item><title><![CDATA[New comment by jascha_eng in "What Claude Code chooses"]]></title><description><![CDATA[
<p>@dang this account's comments smell like LLM slop. They are mostly on topic, and it's more Claude than ChatGPT, but it's slop nonetheless.<p>"is telling"<p>"didn't win... It won ..."<p>Look at their other comments, they are also fishy.<p>I know you guys don't want us to call it out because of negativity. But there needs to be awareness in the community; this is somehow the top comment right now. It feels like it happens every other thread. Please do something more rigorous than manually deleting accounts.</p>
]]></description><pubDate>Fri, 27 Feb 2026 07:59:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47177857</link><dc:creator>jascha_eng</dc:creator><comments>https://news.ycombinator.com/item?id=47177857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47177857</guid></item></channel></rss>