<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: calebkaiser</title><link>https://news.ycombinator.com/user?id=calebkaiser</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 09:37:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=calebkaiser" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Opik – The missing observability layer for OpenClaw]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/comet-ml/opik-openclaw">https://github.com/comet-ml/opik-openclaw</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47398709">https://news.ycombinator.com/item?id=47398709</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 16 Mar 2026 13:24:04 +0000</pubDate><link>https://github.com/comet-ml/opik-openclaw</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=47398709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47398709</guid></item><item><title><![CDATA[New comment by calebkaiser in "Grief and the AI split"]]></title><description><![CDATA[
<p>It's funny how "the real split" is always between the intellectually and morally superior (me) and the inferior (them).</p>
]]></description><pubDate>Fri, 13 Mar 2026 15:27:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47365761</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=47365761</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47365761</guid></item><item><title><![CDATA[Opik – An Observability Layer for OpenClaw]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/comet-ml/opik-openclaw">https://github.com/comet-ml/opik-openclaw</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47268339">https://news.ycombinator.com/item?id=47268339</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 05 Mar 2026 22:49:11 +0000</pubDate><link>https://github.com/comet-ml/opik-openclaw</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=47268339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47268339</guid></item><item><title><![CDATA[New comment by calebkaiser in "Micropayments as a reality check for news sites"]]></title><description><![CDATA[
<p>There is a platform called EthicalAds for developer-focused advertising: <a href="https://www.ethicalads.io/" rel="nofollow">https://www.ethicalads.io/</a></p>
]]></description><pubDate>Thu, 19 Feb 2026 22:38:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47080692</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=47080692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47080692</guid></item><item><title><![CDATA[Agent Optimizer: Self-improving prompts from production data]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/comet-ml/opik/blob/main/sdks/opik_optimizer/README.md">https://github.com/comet-ml/opik/blob/main/sdks/opik_optimizer/README.md</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46981664">https://news.ycombinator.com/item?id=46981664</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 11 Feb 2026 21:53:25 +0000</pubDate><link>https://github.com/comet-ml/opik/blob/main/sdks/opik_optimizer/README.md</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=46981664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46981664</guid></item><item><title><![CDATA[New comment by calebkaiser in "Two kinds of AI users are emerging"]]></title><description><![CDATA[
<p>I work with/am friends with many junior-ish developers who are in the same place as you (got into programming in their late 20s around the 2020 hiring cycle). I'm very sorry for the stress you're dealing with.<p>I don't know if this describes your situation, but I know many people who are dealing with positions where they have no technical mentorship, no real engineering culture to grow in, and a lot of deadlines and work pressure. On top of this, they often don't have a large social group within programming/tech, because they've only been in it for a few years and have been heads-down grinding to get a good job the whole time. They're experiencing a weird mixture of isolation, directionlessness, and intense pressure. The work is joyless for them, and they don't see a future.<p>If I can offer any advice, be selfish for a bit. Outsource as much as you want to LLMs, but use whatever time savings you get out of this to spend time on programming-related things you enjoy. Maybe work the tickets you find mildly interesting without LLMs, even if they aren't mission-critical. Find something interesting to tinker with. Learn a niche language. Or slack off in a Discord group/make friends in programming circles that aren't strictly about career advancement and networking.<p>I think it's basically impossible to get better past a certain level if you can't enjoy programming, LLM-assisted or otherwise. There's such a focus on "up-skilling" and grinding through study materials in the culture right now, and that's all well and good if you're trying to pass an interview in 6 weeks, but all of that stuff is pretty useless when you're burned out and overwhelmed.</p>
]]></description><pubDate>Tue, 03 Feb 2026 16:52:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46873505</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=46873505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46873505</guid></item><item><title><![CDATA[Deep Implicit Layers]]></title><description><![CDATA[
<p>Article URL: <a href="http://implicit-layers-tutorial.org/introduction/">http://implicit-layers-tutorial.org/introduction/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46698348">https://news.ycombinator.com/item?id=46698348</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 20 Jan 2026 22:06:27 +0000</pubDate><link>http://implicit-layers-tutorial.org/introduction/</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=46698348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46698348</guid></item><item><title><![CDATA[New comment by calebkaiser in "We put Claude Code in Rollercoaster Tycoon"]]></title><description><![CDATA[
<p>If anyone is curious, Beads is an agent memory project from the same developer: <a href="https://github.com/steveyegge/beads" rel="nofollow">https://github.com/steveyegge/beads</a></p>
]]></description><pubDate>Sat, 17 Jan 2026 20:05:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46661528</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=46661528</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46661528</guid></item><item><title><![CDATA[Show HN: Opik Optimizer – open-source agents for self-improving LLM applications]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/comet-ml/opik/tree/main/sdks/opik_optimizer">https://github.com/comet-ml/opik/tree/main/sdks/opik_optimizer</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46528743">https://news.ycombinator.com/item?id=46528743</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 07 Jan 2026 16:50:16 +0000</pubDate><link>https://github.com/comet-ml/opik/tree/main/sdks/opik_optimizer</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=46528743</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46528743</guid></item><item><title><![CDATA[Opik Agent Optimizer – Open-Source Prompt Optimization Framework]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/comet-ml/opik/tree/main/sdks/opik_optimizer">https://github.com/comet-ml/opik/tree/main/sdks/opik_optimizer</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46267143">https://news.ycombinator.com/item?id=46267143</a></p>
<p>Points: 6</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 14 Dec 2025 21:29:27 +0000</pubDate><link>https://github.com/comet-ml/opik/tree/main/sdks/opik_optimizer</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=46267143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46267143</guid></item><item><title><![CDATA[New comment by calebkaiser in "Context engineering"]]></title><description><![CDATA[
<p>Hello friend!</p>
]]></description><pubDate>Sun, 02 Nov 2025 19:24:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45792709</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=45792709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45792709</guid></item><item><title><![CDATA[New comment by calebkaiser in "Context engineering"]]></title><description><![CDATA[
<p>I don't really understand this line of criticism in this context.<p>What would "generalizing" the information in this article mean? I think the author does a good job of contextualizing most of the techniques under the general umbrella of in-context learning. What would it mean to generalize further beyond that?</p>
]]></description><pubDate>Sun, 02 Nov 2025 19:24:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45792704</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=45792704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45792704</guid></item><item><title><![CDATA[New comment by calebkaiser in "Context engineering"]]></title><description><![CDATA[
<p>I think it's fair to question the use of the term "engineering" throughout a lot of the software industry. But to be fair to the author, his focus in the piece is on design patterns that require what we'd commonly call software engineering to implement.<p>For example, his first listed design pattern is RAG. To implement such a system from scratch, you'd need to construct a data layer (commonly a vector database), retrieval logic, etc.<p>In fact, I think the author largely agrees with you re: crafting prompts. He has a whole section admonishing "prompt engineering" as magical incantations, which he differentiates from his focus here (software that needs to be built around an LLM).<p>I understand the general uneasiness around using "engineering" when discussing a stochastic model, but I think it's worth pointing out that there is a lot of engineering work required to build the software systems around these models. Writing software to parse context-free grammars into masks applied at inference time, for example, is as much "engineering" as any other common software engineering project.</p>
]]></description><pubDate>Sun, 02 Nov 2025 16:49:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45791612</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=45791612</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45791612</guid></item><item><title><![CDATA[New comment by calebkaiser in "Context engineering"]]></title><description><![CDATA[
<p>Most of the inference techniques (what the author calls context engineering design patterns) listed here originally came from the research community, and there are tons of benchmarks measuring their effectiveness, as well as a great deal of research behind what is happening mechanistically with each.<p>As the author points out, many of the patterns are fundamentally about in-context learning, and this in particular has been subject to a ton of research from the mechanistic interpretability crew. If you're curious, I think this line of research is fascinating: <a href="https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html" rel="nofollow">https://transformer-circuits.pub/2022/in-context-learning-an...</a></p>
]]></description><pubDate>Sun, 02 Nov 2025 16:11:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45791324</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=45791324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45791324</guid></item><item><title><![CDATA[New comment by calebkaiser in "Context engineering"]]></title><description><![CDATA[
<p>Any of the "design patterns" listed in the article will have a ton of popular open source implementations. For structured generation, I think outlines is a particularly cool library, especially if you want to poke around at how constrained decoding works under the hood: <a href="https://github.com/dottxt-ai/outlines" rel="nofollow">https://github.com/dottxt-ai/outlines</a></p>
]]></description><pubDate>Sun, 02 Nov 2025 16:04:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45791276</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=45791276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45791276</guid></item><item><title><![CDATA[New comment by calebkaiser in "Context engineering"]]></title><description><![CDATA[
<p>Based on the comments, I expected this to be slop listing a bunch of random prompt snippets from the author's personal collection.<p>I'm honestly a bit confused at the negativity here. The article is incredibly benign and reasonable. Maybe a bit surface-level, but at a glance, it gives fair and generally accurate summaries of the actual mechanisms behind inference. The examples it gives for "context engineering patterns" are actual systems that you'd need to implement (RAG, structured output, tool calling, etc.), not just random prompts, and they're all subject to pretty thorough investigation from the research community.<p>The article even echoes your sentiments about "prompt engineering," down to the use of the word "incantation". From the piece:<p>> This was the birth of so-called "prompt engineering", though in practice there was often far less "engineering" than trial-and-error guesswork. This could often feel closer to uttering mystical incantations and hoping for magic to happen, rather than the deliberate construction and rigorous application of systems thinking that epitomises true engineering.</p>
]]></description><pubDate>Sun, 02 Nov 2025 15:56:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45791226</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=45791226</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45791226</guid></item><item><title><![CDATA[New comment by calebkaiser in "CompileBench: Can AI Compile 22-year-old Code?"]]></title><description><![CDATA[
<p>There's been a decent chunk of research in this direction over the years. Michael O'Boyle is pretty active as a researcher in the space, if you're looking for stuff to read: <a href="https://www.dcs.ed.ac.uk/home/mob/" rel="nofollow">https://www.dcs.ed.ac.uk/home/mob/</a></p>
]]></description><pubDate>Mon, 22 Sep 2025 18:58:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45337902</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=45337902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45337902</guid></item><item><title><![CDATA[New comment by calebkaiser in "Important machine learning equations"]]></title><description><![CDATA[
<p>LoRA uses singular value decomposition to get the low-rank matrices. In different optimizers, you'll also see eigendecomposition or some approximation of it used (I think Shampoo does something like this, but it's been a while).</p>
]]></description><pubDate>Thu, 28 Aug 2025 13:11:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45051742</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=45051742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45051742</guid></item><item><title><![CDATA[New comment by calebkaiser in "Claude Code IDE integration for Emacs"]]></title><description><![CDATA[
<p>There's a lot of great work both around supporting memory-efficient inference (like on a closer-to-consumer machine) and around open source code-focused models.<p>A lot of people are excited about the Qwen3-Coder family of models: <a href="https://huggingface.co/collections/Qwen/qwen3-coder-687fc861e53c939e52d52d10" rel="nofollow">https://huggingface.co/collections/Qwen/qwen3-coder-687fc861...</a><p>For running locally, there are tools like Ollama and LM Studio. Your hardware needs will fluctuate depending on what size/quantization of model you try to run, but $2k in hardware cost is reasonable for running a lot of models. Some people have good experiences using the M-series Macs, which are probably a good bang for the buck if you're exclusively interested in inference.<p>I'd recommend checking out the LocalLLaMA subreddit for more: <a href="https://www.reddit.com/r/LocalLLaMA/" rel="nofollow">https://www.reddit.com/r/LocalLLaMA/</a><p>Getting results on par with the big labs isn't feasible, but if you prefer to run everything locally, it is a fun and doable project.</p>
]]></description><pubDate>Wed, 06 Aug 2025 18:30:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44815791</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=44815791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44815791</guid></item><item><title><![CDATA[New comment by calebkaiser in "AI promised efficiency. Instead, it's making us work harder"]]></title><description><![CDATA[
<p>I don't think this is an AI-specific thing. I work in the field, so I'm around some of the most enthusiastic adopters of LLMs, and from what I see, engineering cultures surrounding LLM usage typically match the org's previous general engineering culture.<p>So, for example, by and large the orgs I've seen chucking Claude PRs over the wall with little review were previously chucking 100% human-written PRs over the wall with little review.<p>Similarly, the teams I see effectively using test suites to guide their code generation are the same teams that effectively use test suites to guide their general software engineering workflows.</p>
]]></description><pubDate>Mon, 04 Aug 2025 17:06:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44788618</link><dc:creator>calebkaiser</dc:creator><comments>https://news.ycombinator.com/item?id=44788618</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44788618</guid></item></channel></rss>