<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ehsanu1</title><link>https://news.ycombinator.com/user?id=ehsanu1</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 00:51:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ehsanu1" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ehsanu1 in "Thoughts on slowing the fuck down"]]></title><description><![CDATA[
<p>Is that a personal website? Prod means different things in different contexts. Even then, I'd be a bit worried about prompt injection unless you control your context closely (no web access, etc.).</p>
]]></description><pubDate>Wed, 25 Mar 2026 16:48:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47519901</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=47519901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47519901</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>I have. In fact, I've been building my own coding agent for two years at this point (i.e. before Claude Code existed). So it's fair to say I get the point you're making and have said all the same things to others. But this experience has taught me that LLMs, in their current form, will always have gaps: it's in the nature of the tech. Every time a new model comes out, even the latest Opus versions, it's always better, yet I always eventually find its limits when I push it hard enough, and often enough, to see the failure modes. Anything sufficiently out of distribution leads to more or less nonsensical results.</p>
]]></description><pubDate>Mon, 23 Mar 2026 05:53:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47485891</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=47485891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47485891</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>That shows it knew this bit of satire more than anything.  Also, the problem as stated isn't actually constrained enough to be unsolvable: <a href="https://youtu.be/B7MIJP90biM" rel="nofollow">https://youtu.be/B7MIJP90biM</a></p>
]]></description><pubDate>Sun, 22 Mar 2026 23:13:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47483351</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=47483351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47483351</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>Business problems are essentially neverending. And humans have a broader kind of intelligence that LLMs lack but that many novel problems require. I wouldn't worry.</p>
]]></description><pubDate>Sun, 22 Mar 2026 23:05:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47483280</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=47483280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47483280</guid></item><item><title><![CDATA[New comment by ehsanu1 in "An FAQ on Reinforcement Learning Environments"]]></title><description><![CDATA[
<p>An honest mistake.</p>
]]></description><pubDate>Sat, 21 Mar 2026 05:08:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47464149</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=47464149</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47464149</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Chrome DevTools MCP"]]></title><description><![CDATA[
<p>What are the numbers? Are there problems other than context usage you refer to?</p>
]]></description><pubDate>Sun, 15 Mar 2026 22:27:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47392657</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=47392657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47392657</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Evaluating AGENTS.md: are they helpful for coding agents?"]]></title><description><![CDATA[
<p>They aren't necessarily "stored", but they are part of the response content. They are referred to as reasoning or thinking blocks. The big 3 model makers all have this in their APIs, typically in an encrypted form.<p>Reconstruction of reasoning from scratch can happen in some legacy APIs like the OpenAI chat completions API, which doesn't support passing reasoning blocks around. They specifically recommend that folks use the newer Responses API to improve both accuracy and latency (by reusing existing reasoning).</p>
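<p>As a sketch of what "passing reasoning blocks around" means, here is a simplified shape loosely following the Anthropic Messages API: assistant turns carry thinking blocks that the client echoes back on the next request, while legacy chat-completions-style history keeps only plain text. Field names here are illustrative, and real APIs return these blocks in encrypted/signed form:</p>

```python
# Simplified sketch of carrying reasoning ("thinking") blocks across turns.
# Field names are illustrative, not an exact API contract.

def next_request_messages(history, new_user_text):
    """Build the next request's message list, preserving any thinking
    blocks from prior assistant turns instead of stripping them."""
    messages = list(history)  # assistant turns keep their thinking blocks
    messages.append({"role": "user", "content": new_user_text})
    return messages

def strip_thinking(history):
    """Legacy chat-completions style: only plain text survives, so the
    model must reconstruct its reasoning from scratch on each turn."""
    out = []
    for msg in history:
        if isinstance(msg["content"], list):
            text = "".join(b["text"] for b in msg["content"] if b["type"] == "text")
            out.append({"role": msg["role"], "content": text})
        else:
            out.append(msg)
    return out

history = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": [
        {"type": "thinking", "thinking": "[encrypted reasoning blob]"},
        {"type": "text", "text": "4"},
    ]},
]
msgs = next_request_messages(history, "And 3+3?")
legacy = strip_thinking(history)
```

<p>In the first form the model can reuse its prior reasoning; in the second, the "[encrypted reasoning blob]" is gone and only "4" remains, which is the accuracy/latency gap the Responses API closes.</p>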
]]></description><pubDate>Tue, 17 Feb 2026 08:51:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47045176</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=47045176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47045176</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Claude is a space to think"]]></title><description><![CDATA[
<p>But those aren't things you can really separate for proprietary models. Keeping inference running also requires staff, not just for the R&D.</p>
]]></description><pubDate>Thu, 05 Feb 2026 01:36:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46894492</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46894492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46894492</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Litestream Writable VFS"]]></title><description><![CDATA[
<p>I wonder how this compares to running sqlite off of an s3-backed ZeroFS <a href="https://github.com/Barre/ZeroFS" rel="nofollow">https://github.com/Barre/ZeroFS</a></p>
]]></description><pubDate>Thu, 05 Feb 2026 01:34:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46894475</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46894475</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46894475</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation"]]></title><description><![CDATA[
<p>To spell it out for myself and others: approaching equivalent calculations for each individual attention block means we also approach equivalent performance for the combination of them. And with an error bar approaching floating point accuracy, the performance should be practically identical to regular attention. Elementwise errors of this magnitude can't lead to any noteworthy changes in the overall result, especially given how robust LLM networks seem to be to small deviations.</p>
]]></description><pubDate>Wed, 04 Feb 2026 20:06:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46890901</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46890901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46890901</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Claude is a space to think"]]></title><description><![CDATA[
<p>Could you substantiate that? Does that take into account training and staffing costs?</p>
]]></description><pubDate>Wed, 04 Feb 2026 19:56:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46890767</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46890767</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46890767</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Claude's new constitution"]]></title><description><![CDATA[
<p><a href="https://www.anthropic.com/news/anthropic-and-the-department-of-defense-to-advance-responsible-ai-in-defense-operations" rel="nofollow">https://www.anthropic.com/news/anthropic-and-the-department-...</a></p>
]]></description><pubDate>Wed, 21 Jan 2026 21:31:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46711848</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46711848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46711848</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Anthropic Explicitly Blocking OpenCode"]]></title><description><![CDATA[
<p>You don't get a simple request/response paradigm with Claude Code: one message from the user kicks off a loop that usually makes many inner LLM requests, among other business logic, producing some user-visible output and a bunch of less visible effects (filesystem changes, etc). You control an input to the outer loop, and you can only do some limited things with hooks to control what happens within it. But a lot happens inside that loop that you have no say over.<p>A simple example: can you arbitrarily manipulate the historical context of a given request to the LLM? It's useful to do that sometimes. Another: can you create a programmatic flow that tries 3 different LLM requests, then uses an LLM judge to contrast and combine them into a best final answer? Sure, you could write a prompt that says to do that, but that won't yield equivalent results.<p>These are just examples; the point is you don't get fine control.</p>
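<p>The judge-and-combine flow I mean can be sketched as a small orchestration loop. <code>call_llm</code> is a hypothetical stand-in for a raw model API call, not a real library function; the point is that you own each request, not just a prompt to an outer agent:</p>

```python
# Hypothetical best-of-n orchestration with an LLM judge.
# call_llm is a stub standing in for a raw model API call.

def call_llm(prompt):
    # Illustration only; a real version would hit a model endpoint.
    return f"answer to: {prompt}"

def best_of_n(prompt, n=3):
    # Fire n independent attempts at the same prompt.
    candidates = [call_llm(prompt) for _ in range(n)]
    # A separate judge request contrasts the attempts and picks/combines
    # the best one; you control its context entirely.
    judge_prompt = "Pick the best answer:\n" + "\n---\n".join(candidates)
    return call_llm(judge_prompt)
```

<p>With Claude Code you can only ask the agent, in prose, to do something like this; you can't make the loop itself deterministic or shape each request's context.</p>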
]]></description><pubDate>Fri, 16 Jan 2026 22:23:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46653015</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46653015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46653015</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Ask HN: How are you doing RAG locally?"]]></title><description><![CDATA[
<p>I've gotten great results applying it to file paths + signatures. Even better if you also fuse those results with BM25.</p>
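<p>The fusion step can be sketched with reciprocal rank fusion (RRF), a common way to merge a vector-search ranking with a BM25 ranking. The file names and the constant <code>k</code> below are illustrative:</p>

```python
# Minimal sketch of reciprocal rank fusion (RRF) over two ranked lists,
# e.g. one from a vector index and one from BM25. Inputs are best-first.

def rrf_fuse(rankings, k=60):
    """Merge ranked lists of doc ids into one fused ranking.
    k dampens the advantage of top ranks in any single list."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["src/auth.py", "src/db.py", "src/api.py"]   # semantic matches
bm25_hits = ["src/api.py", "src/auth.py", "README.md"]     # lexical matches
fused = rrf_fuse([vector_hits, bm25_hits])
```

<p>Documents that score well in both lists (here <code>src/auth.py</code>) float to the top, which is exactly why fusing the two beats either alone.</p>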
]]></description><pubDate>Thu, 15 Jan 2026 06:39:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46628892</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46628892</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46628892</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Ask HN: How are you doing RAG locally?"]]></title><description><![CDATA[
<p>Embedded usearch vector database. <a href="https://github.com/unum-cloud/USearch" rel="nofollow">https://github.com/unum-cloud/USearch</a></p>
]]></description><pubDate>Thu, 15 Jan 2026 06:38:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46628881</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46628881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46628881</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Anthropic Explicitly Blocking OpenCode"]]></title><description><![CDATA[
<p>They have rate limits for this purpose. Many folks run claude code instances in parallel, which has roughly the same characteristics.</p>
]]></description><pubDate>Thu, 15 Jan 2026 01:09:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46626574</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46626574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46626574</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Anthropic Explicitly Blocking OpenCode"]]></title><description><![CDATA[
<p>You can't control it to the level of individual LLM requests and orchestration of those. And that is very valuable, practically required, to build a tool like this. Otherwise, you just have a wrapper over another big program and can barely do anything interesting/useful to make it actually work better.</p>
]]></description><pubDate>Thu, 15 Jan 2026 01:06:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46626548</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46626548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46626548</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Anthropic blocks third-party use of Claude Code subscriptions"]]></title><description><![CDATA[
<p>You can: <a href="https://claude.ai/settings/data-privacy-controls" rel="nofollow">https://claude.ai/settings/data-privacy-controls</a></p>
]]></description><pubDate>Sat, 10 Jan 2026 20:32:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46569642</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46569642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46569642</guid></item><item><title><![CDATA[New comment by ehsanu1 in "Anthropic blocks third-party use of Claude Code subscriptions"]]></title><description><![CDATA[
<p>True. You can turn it off though: <a href="https://claude.ai/settings/data-privacy-controls" rel="nofollow">https://claude.ai/settings/data-privacy-controls</a></p>
]]></description><pubDate>Sat, 10 Jan 2026 20:32:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46569641</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46569641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46569641</guid></item><item><title><![CDATA[New comment by ehsanu1 in "ASCII-Driven Development"]]></title><description><![CDATA[
<p>It has to be suited for human consumption too, though.<p>I wonder if this has any real benefit over just doing very simple HTML wireframing with highly constrained CSS, which renders readily for humans. I guess pure text makes it easier to ignore many stylistic factors, since they are harder, if not impossible, to represent. But I'm sure LLMs have far more training data on HTML/CSS, and I'd expect them to easily follow instructions to produce HTML/CSS for a mockup/wireframe.</p>
]]></description><pubDate>Sat, 10 Jan 2026 20:29:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46569612</link><dc:creator>ehsanu1</dc:creator><comments>https://news.ycombinator.com/item?id=46569612</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46569612</guid></item></channel></rss>