<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: zby</title><link>https://news.ycombinator.com/user?id=zby</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 17:51:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=zby" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by zby in "Inventing Cyrillic (2024)"]]></title><description><![CDATA[
<p>"In the 890s, having recently converted to Orthodox Christianity, Boris ensured his church would be independent from the Patriarchate of Constantinople." --- I thought Orthodox Christianity was created by the Great Schism in 1054.</p>
]]></description><pubDate>Fri, 08 May 2026 09:13:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=48060621</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=48060621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48060621</guid></item><item><title><![CDATA[New comment by zby in "Nobody Reviews Compiler Output"]]></title><description><![CDATA[
<p>And how are they doing? I think this might be a solid research program - but the blog presented it as some kind of practical approach.</p>
]]></description><pubDate>Fri, 08 May 2026 08:53:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=48060469</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=48060469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48060469</guid></item><item><title><![CDATA[New comment by zby in "AI slop is killing online communities"]]></title><description><![CDATA[
<p>So no hope for <a href="https://xkcd.com/810/" rel="nofollow">https://xkcd.com/810/</a>?</p>
]]></description><pubDate>Thu, 07 May 2026 20:46:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=48054719</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=48054719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48054719</guid></item><item><title><![CDATA[New comment by zby in "Nobody Reviews Compiler Output"]]></title><description><![CDATA[
<p>First you need to write these specifications, and if you say to just tell the llm to write them - then how would that be different from just telling the llm to write the program?<p>I guess you can argue that these are two independent processes, so you can combine them to get something more reliable than either alone - this might be a viable path. But from what I've heard, writing formal specifications is just really hard - I haven't seen anything practical in this area.</p>
]]></description><pubDate>Thu, 07 May 2026 20:37:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=48054620</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=48054620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48054620</guid></item><item><title><![CDATA[New comment by zby in "Agents need control flow, not more prompts"]]></title><description><![CDATA[
<p>I concur - it does not make sense to do in llm prompts what can be done in code. Code is cheaper, faster, and deterministic, and we have lots of experience working with code.<p>In particular, all bookkeeping logic should move into the symbolic layer: <a href="https://zby.github.io/commonplace/notes/scheduler-llm-separation-exploits-an-error-correction-asymmetry/" rel="nofollow">https://zby.github.io/commonplace/notes/scheduler-llm-separa...</a></p>
]]></description><pubDate>Thu, 07 May 2026 20:15:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=48054291</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=48054291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48054291</guid></item><item><title><![CDATA[New comment by zby in "Nobody Reviews Compiler Output"]]></title><description><![CDATA[
<p>"""
we need to build:<p><pre><code>    Formal specification layers that agents execute against, not just prompts</code></pre>
"""<p>It is probably easier to just write that program.</p>
]]></description><pubDate>Thu, 07 May 2026 19:51:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=48054011</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=48054011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48054011</guid></item><item><title><![CDATA[New comment by zby in "Show HN: Auto-Architecture: Karpathy's Loop, pointed at a CPU"]]></title><description><![CDATA[
<p>It is not novel - but with the new models it is just becoming practical.</p>
]]></description><pubDate>Wed, 29 Apr 2026 12:48:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47947576</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47947576</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47947576</guid></item><item><title><![CDATA[New comment by zby in "Simulacrum of Knowledge Work"]]></title><description><![CDATA[
<p>If you have a test that fails 50% of the time - is that test valuable or not? A 50% failure rate alone looks like a coin toss, but by itself it does not tell us whether the test is noise or whether it separates bad states from good ones. For a test to be useful it needs a positive Youden's J statistic (<a href="https://en.wikipedia.org/wiki/Youden%27s_J_statistic" rel="nofollow">https://en.wikipedia.org/wiki/Youden%27s_J_statistic</a>): sensitivity + specificity - 1. A 50% failure rate alone does not let us calculate sensitivity and specificity.<p>I can see a similar problem in this article - the author notices that LLMs produce a lot of errors - then concludes that they are useless and produce only a simulacrum of work. The author has an interesting observation about how LLMs disrupt the way we judge knowledge work. But when he concludes that LLMs produce only a simulacrum of work - that is where his argument fails.</p>
]]></description><pubDate>Sat, 25 Apr 2026 21:32:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47904703</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47904703</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47904703</guid></item><item><title><![CDATA[New comment by zby in "AI agents that argue with each other to improve decisions"]]></title><description><![CDATA[
<p>I don't know - it looks like an interesting idea - but ... I am struggling to put this politely. When I go into the repo and find out that it does things like lip-syncing talking avatars, I start to wonder what percentage of the development effort goes into marketing.</p>
]]></description><pubDate>Sat, 25 Apr 2026 20:29:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47904315</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47904315</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47904315</guid></item><item><title><![CDATA[New comment by zby in "Show HN: Tolaria – Open-source macOS app to manage Markdown knowledge bases"]]></title><description><![CDATA[
<p>The other reviews are based on published articles - but I am not sure I want to continue this, because it is hard to keep them honest.<p>Maybe you could use the instructions from my repo - which are here: <a href="https://github.com/zby/commonplace/blob/main/kb/agent-memory-systems/types/agent-memory-system-review.md" rel="nofollow">https://github.com/zby/commonplace/blob/main/kb/agent-memory...</a> - and run them on your code directory? Then send me the result.</p>
]]></description><pubDate>Sat, 25 Apr 2026 20:04:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47904141</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47904141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47904141</guid></item><item><title><![CDATA[New comment by zby in "What's Missing in the 'Agentic' Story"]]></title><description><![CDATA[
<p>I like how the author notices that it really got a start with cloud computing.</p>
]]></description><pubDate>Sat, 25 Apr 2026 17:38:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47903108</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47903108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47903108</guid></item><item><title><![CDATA[New comment by zby in "Show HN: A Karpathy-style LLM wiki your agents maintain (Markdown and Git)"]]></title><description><![CDATA[
<p>For design notes like <a href="https://zby.github.io/commonplace/notes/designing-agent-memory-systems/" rel="nofollow">https://zby.github.io/commonplace/notes/designing-agent-memo...</a> I iterate over and over to clean them up. This one is also a compilation from many intermediate documents.<p>But the reviews are written automatically - here are the instructions: <a href="https://github.com/zby/commonplace/blob/main/kb/agent-memory-systems/types/agent-memory-system-review.md" rel="nofollow">https://github.com/zby/commonplace/blob/main/kb/agent-memory...</a><p>Overall the knowledge base is a mixture of these. I have this disclaimer on the first page:<p><i>This KB is itself agent-operated: a human directs the inquiry, AI agents draft, connect, and maintain the notes. The framework for building knowledge bases is documented using that framework.</i><p>I hope that is enough - I've seen many people get angry about published LLM-generated work.</p>
]]></description><pubDate>Sat, 25 Apr 2026 15:18:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47902136</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47902136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47902136</guid></item><item><title><![CDATA[New comment by zby in "Open source memory layer so any AI agent can do what Claude.ai and ChatGPT do"]]></title><description><![CDATA[
<p>I have a wishlist for these systems: <a href="https://zby.github.io/commonplace/notes/designing-agent-memory-systems/" rel="nofollow">https://zby.github.io/commonplace/notes/designing-agent-memo...</a></p>
]]></description><pubDate>Sat, 25 Apr 2026 15:02:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47902024</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47902024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47902024</guid></item><item><title><![CDATA[New comment by zby in "Open source memory layer so any AI agent can do what Claude.ai and ChatGPT do"]]></title><description><![CDATA[
<p>Reviewed: <a href="https://zby.github.io/commonplace/agent-memory-systems/reviews/stash/" rel="nofollow">https://zby.github.io/commonplace/agent-memory-systems/revie...</a><p>Together with the other hundred llm memory systems: <a href="https://zby.github.io/commonplace/agent-memory-systems/" rel="nofollow">https://zby.github.io/commonplace/agent-memory-systems/</a><p>I have also written a wishlist for these systems: <a href="https://zby.github.io/commonplace/notes/designing-agent-memory-systems/" rel="nofollow">https://zby.github.io/commonplace/notes/designing-agent-memo...</a></p>
]]></description><pubDate>Sat, 25 Apr 2026 14:42:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47901893</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47901893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47901893</guid></item><item><title><![CDATA[New comment by zby in "Show HN: A Karpathy-style LLM wiki your agents maintain (Markdown and Git)"]]></title><description><![CDATA[
<p>Reviewed: <a href="https://zby.github.io/commonplace/agent-memory-systems/reviews/wuphf/" rel="nofollow">https://zby.github.io/commonplace/agent-memory-systems/revie...</a><p>It is the third llm wiki on the front page in 24 hours!
Obviously it is a hot topic. I have my own horse in that race - so I might not be objective - but I've compiled a wishlist for these systems: <a href="https://zby.github.io/commonplace/notes/designing-agent-memory-systems/" rel="nofollow">https://zby.github.io/commonplace/notes/designing-agent-memo...</a><p>I wish there were a chance for collaboration - everybody coding their own system seems like a lot of duplicated effort.</p>
]]></description><pubDate>Sat, 25 Apr 2026 13:16:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47901360</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47901360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47901360</guid></item><item><title><![CDATA[New comment by zby in "Do I belong in tech anymore?"]]></title><description><![CDATA[
<p>What I meant was how LangChain dominated the llm frameworks scene because it was loaded with VC money. That was just at the beginning - things have normalised now - but I believe it did a lot of damage at that early stage by sucking up all the oxygen.</p>
]]></description><pubDate>Sat, 25 Apr 2026 12:44:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47901039</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47901039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47901039</guid></item><item><title><![CDATA[New comment by zby in "Do I belong in tech anymore?"]]></title><description><![CDATA[
<p>This report lists failures of some AI systems. They look consequential - but the company does not seem to care. This is very strange - how can that be? I really like AI products - they help me all the time - but I know I need to take their failure modes into account and be careful. Lots of organisations don't seem to do that calculation. Will competition root them out? I don't know. I am so enthusiastic about AI - but ever since the LangChain situation I can see that what gets adopted is often something with a lot of flaws. The more careful developers who notice the flaws and try to find real solutions lose out, because it takes time to do the design well. It is not a new thing - there were Betamax mourners for decades - but it seems the hype machine is more and more powerful.</p>
]]></description><pubDate>Sat, 25 Apr 2026 06:18:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47899154</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47899154</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47899154</guid></item><item><title><![CDATA[New comment by zby in "Show HN: Atomic – Local-first, AI-augmented personal knowledge base"]]></title><description><![CDATA[
<p>Thanks!<p>The reviews are done automatically - here are the instructions: <a href="https://github.com/zby/commonplace/blob/main/kb/agent-memory-systems/types/agent-memory-system-review.md" rel="nofollow">https://github.com/zby/commonplace/blob/main/kb/agent-memory...</a><p>I am open to changing these instructions - it cannot be just about making your system look better - but I'll try to incorporate genuine ideas on how to improve these reviews.</p>
]]></description><pubDate>Sat, 25 Apr 2026 05:27:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47898910</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47898910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47898910</guid></item><item><title><![CDATA[New comment by zby in "Show HN: Atomic – Local-first, AI-augmented personal knowledge base"]]></title><description><![CDATA[
<p>Reviewed: <a href="https://zby.github.io/commonplace/agent-memory-systems/reviews/atomic/" rel="nofollow">https://zby.github.io/commonplace/agent-memory-systems/revie...</a><p>It is the second llm wiki on the front page today!<p>I wish the scene was more collaborative - instead of everyone writing their own. But I guess this is the llm curse - it is too easy to start. I am afraid it will all go in the LangChain direction, with VC funding solidifying designs that are not yet ready and locking in choices that would normally be superseded.</p>
]]></description><pubDate>Fri, 24 Apr 2026 16:22:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47892318</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47892318</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47892318</guid></item><item><title><![CDATA[New comment by zby in "Show HN: Tolaria – Open-source macOS app to manage Markdown knowledge bases"]]></title><description><![CDATA[
<p>Added: <a href="https://zby.github.io/commonplace/agent-memory-systems/reviews/signetai/" rel="nofollow">https://zby.github.io/commonplace/agent-memory-systems/revie...</a><p>By the way - here is the prompt for these reviews: <a href="https://github.com/zby/commonplace/blob/main/kb/agent-memory-systems/types/agent-memory-system-review.md" rel="nofollow">https://github.com/zby/commonplace/blob/main/kb/agent-memory...</a></p>
]]></description><pubDate>Fri, 24 Apr 2026 11:52:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47888969</link><dc:creator>zby</dc:creator><comments>https://news.ycombinator.com/item?id=47888969</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47888969</guid></item></channel></rss>