<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jannniii</title><link>https://news.ycombinator.com/user?id=jannniii</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 16:14:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jannniii" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jannniii in "I've been waiting over a month for Anthropic to respond to my billing issue"]]></title><description><![CDATA[
<p>Similar experience here. Suddenly my Max 20 account is just useless…</p>
]]></description><pubDate>Thu, 09 Apr 2026 04:56:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47699424</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47699424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47699424</guid></item><item><title><![CDATA[New comment by jannniii in "Show HN: Ghost Pepper – Local hold-to-talk speech-to-text for macOS"]]></title><description><![CDATA[
<p>github.com/randomm/kuiskaus</p>
]]></description><pubDate>Tue, 07 Apr 2026 04:53:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670911</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47670911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670911</guid></item><item><title><![CDATA[New comment by jannniii in "Show HN: Ghost Pepper – Local hold-to-talk speech-to-text for macOS"]]></title><description><![CDATA[
<p>Oh dear, why does it not use apfel for cleanup? No model download necessary…</p>
]]></description><pubDate>Tue, 07 Apr 2026 04:52:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670903</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47670903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670903</guid></item><item><title><![CDATA[Show HN: Chained apples, or apfels, for more logic]]></title><description><![CDATA[
<p>I wanted to see if I could chain multiple apfel calls to make the Apple LLM more useful, enabling the use of natural language to run many zsh commands or combinations of commands.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47643058">https://news.ycombinator.com/item?id=47643058</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 04 Apr 2026 20:30:48 +0000</pubDate><link>https://github.com/randomm/omppu</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47643058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47643058</guid></item><item><title><![CDATA[New comment by jannniii in "Show HN: Refrax – my Arc Browser replacement I made from scratch"]]></title><description><![CDATA[
<p>Nice one! Looking forward to trying it out. I've had the same grievances about Arc being abandoned. Dia? … nah</p>
]]></description><pubDate>Mon, 23 Mar 2026 13:23:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47489197</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47489197</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47489197</guid></item><item><title><![CDATA[Show HN: Oo – compress output for coding agents (cargo test → "47 passed, 2.1s")]]></title><description><![CDATA[
<p>I've been running coding agents heavily for the past year or so, using frontier model APIs, open-weight model APIs and, most recently, local models (Qwen-family models on a Strix Halo).<p>Running local inference has highlighted something I've been aware of for longer: just running tests dumps shedloads of text into the context window, where it stays for good until compaction or a fresh start. For example, a single `cargo test` can dump 8KB into the agent's context just to communicate "47 tests passed." The agent reads all of it, learns nothing useful, and the context window fills with noise. That makes LLM prefill slower and costs more when using per-token APIs.<p>I created a small program that sits between the command output and the LLM: oo, or double-o ... yes, a sad play on words. Double-o, the agent's best friend :)<p>oo wraps commands and classifies their output:<p><pre><code>  - Small output (<4KB): passes through unchanged
  - Known success pattern: one-line summary
    (oo cargo test →  cargo test (47 passed, 2.1s))
  - Failure: filtered to actionable errors
  - Large unknown output: indexed locally, queryable via oo recall
</code></pre>
It currently ships with 10 built-in patterns (pytest, cargo test, go test, jest, eslint, ruff, cargo build, cargo clippy, go build, tsc), but users can add their own via TOML files, or use oo learn <cmd> to have an LLM generate one from real command output (currently only with Anthropic models).<p>No agent modification is needed: just add "prefix commands with oo" to your system prompt. Single Rust binary, 197 tests, Apache-2.0.<p>The classification engine uses regex-based pattern matching with per-command failure strategies (tail, head, grep, between) and automatic command categorization (status/content/data/unknown), which determines what happens with unrecognized commands. Content commands like git diff always pass through; data commands like git log get indexed when large.<p>The savings are especially noticeable with local models, in both tokens and wall-clock time. It helps with frontier models too ... cleaner context, fewer confused follow-ups.<p><a href="https://github.com/randomm/oo" rel="nofollow">https://github.com/randomm/oo</a><p><a href="https://crates.io/crates/double-o" rel="nofollow">https://crates.io/crates/double-o</a></p>
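<p>The classification flow described above can be sketched roughly like this (not oo's actual code: the patterns, the 4KB threshold, and the error filter are illustrative stand-ins):</p>

```python
import re

PASS_THROUGH_LIMIT = 4096  # assumed "small output" threshold, in bytes

# Illustrative success patterns: (regex over the output) -> summary template.
SUCCESS_PATTERNS = [
    (re.compile(r"test result: ok\. (\d+) passed.*finished in ([\d.]+)s"),
     "cargo test ({0} passed, {1}s)"),
    (re.compile(r"(\d+) passed in ([\d.]+)s"),
     "pytest ({0} passed, {1}s)"),
]

def classify(output: str) -> str:
    # Small output: pass through unchanged.
    if len(output.encode()) < PASS_THROUGH_LIMIT:
        return output
    # Known success pattern: compress to a one-line summary.
    for pattern, template in SUCCESS_PATTERNS:
        match = pattern.search(output)
        if match:
            return template.format(*match.groups())
    # Failure / unknown: keep only lines that look like actionable errors
    # (a crude stand-in for the tail/head/grep/between strategies).
    errors = [line for line in output.splitlines() if "error" in line.lower()]
    return "\n".join(errors[-20:]) if errors else output[-PASS_THROUGH_LIMIT:]
```

<p>The key property is that the agent's context only ever sees the return value, never the raw dump.</p>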
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47488675">https://news.ycombinator.com/item?id=47488675</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 23 Mar 2026 12:41:07 +0000</pubDate><link>https://github.com/randomm/oo</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47488675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47488675</guid></item><item><title><![CDATA[New comment by jannniii in "Double-O, agent's best friend"]]></title><description><![CDATA[
<p>Hey, so I have been tinkering lately with locally running LLMs for programming, on a Strix Halo machine with 128GB of RAM. I quickly realised that to get more speed out of the setup I need to save on stuff that ends up in the model context. I built (with agents!) this simple bash command runner that swallows large output, passing only the relevant stuff to the agent.<p>Think pytest outputting only OK if all tests pass, but, in case of an error, only the necessary error output.<p>It is a CLI; basic use for agents:<p>oo pytest
oo cargo test
etc…<p>Custom patterns are configurable, and there is also a learn flag that uses Anthropic models to have an LLM write the configuration for your custom command.<p>Nothing too complicated or special, but it works in my LLM workflow, saving on token use and speeding up the models!<p>Have a great weekend!</p>
]]></description><pubDate>Sat, 21 Mar 2026 05:45:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47464326</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47464326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47464326</guid></item><item><title><![CDATA[Double-O, agent's best friend]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/randomm/oo">https://github.com/randomm/oo</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47464325">https://news.ycombinator.com/item?id=47464325</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 21 Mar 2026 05:45:39 +0000</pubDate><link>https://github.com/randomm/oo</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47464325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47464325</guid></item><item><title><![CDATA[New comment by jannniii in "Quillx is an open standard for disclosing AI involvement in software projects"]]></title><description><![CDATA[
<p>Nice idea, but the labels are a bit too opinionated for me.<p>Literally all my code has been "ghostwritten" for the past 18 months. That does not sound like something enterprise customers would want to hear, let alone try to understand.</p>
]]></description><pubDate>Mon, 16 Mar 2026 06:08:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47395634</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47395634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47395634</guid></item><item><title><![CDATA[New comment by jannniii in "Show HN: Badge that shows how well your codebase fits in an LLM's context window"]]></title><description><![CDATA[
<p>Interesting concept, but is it going to age well when model context sizes are changing all the time (growing, mostly)?</p>
]]></description><pubDate>Fri, 27 Feb 2026 15:42:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47181819</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47181819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47181819</guid></item><item><title><![CDATA[New comment by jannniii in "Show HN: Clocksimulator.com – A minimalist, distraction-free analog clock"]]></title><description><![CDATA[
<p>Very nice!!</p>
]]></description><pubDate>Wed, 25 Feb 2026 15:10:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47152569</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47152569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47152569</guid></item><item><title><![CDATA[New comment by jannniii in "Pi – A minimal terminal coding harness"]]></title><description><![CDATA[
<p>Oh-my-bloat.<p>I am still an avid user of opencode (my own fork, though, with async tools etc.), but it is cumbersome and tries to do too many things.</p>
]]></description><pubDate>Wed, 25 Feb 2026 08:40:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47148950</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47148950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47148950</guid></item><item><title><![CDATA[New comment by jannniii in "Pi – A minimal terminal coding harness"]]></title><description><![CDATA[
<p>It is an awesome fork! I tried to contribute as well, but the community seems quite close-knit.</p>
]]></description><pubDate>Wed, 25 Feb 2026 08:39:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47148943</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47148943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47148943</guid></item><item><title><![CDATA[Show HN: Vipune – Simple Memory for Agents]]></title><description><![CDATA[
<p>vipune is a local CLI memory store for AI agents. You tell your LLM agents to run vipune add "..." and vipune search "..." — it stores embeddings locally in SQLite and retrieves by semantic meaning (with reranking and recency baked in), not keyword matching.<p>The problem it solves: agents needing historical knowledge of your project, or of tasks that other agents are working on. Or just getting context transferred from session to session in a clean, semantically searchable way. Most solutions require an API, a cloud service, or a running server. vipune is a single binary: no daemon, no keys, works offline.<p>I had a predecessor to vipune in my personal agentic flow for a long time, but it got bloated, so I decided on a rewrite and thought I'd share it; maybe someone else will find it useful too.<p>And yes, this is written 99% by LLM agents, 100% monitored by yours truly.<p>It also does conflict detection — if you try to add something semantically similar to an existing memory, it flags it instead of silently duplicating.<p>Built in Rust, Apache-2.0. Binaries for macOS ARM64 and Linux. Early release — would love feedback on the use cases people try it for.</p>
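<p>The conflict-detection idea can be sketched in a few lines (illustrative only: real embeddings come from a model, and the similarity threshold here is a guess, not vipune's):</p>

```python
import json
import math
import sqlite3

SIMILARITY_THRESHOLD = 0.9  # assumed cutoff for "semantically similar"

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class MemoryStore:
    """Toy local memory store: vectors as JSON in SQLite, no server."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS memories (text TEXT, vec TEXT)")

    def add(self, text, vec):
        # Conflict detection: flag near-duplicates instead of silently inserting.
        for stored_text, stored_vec in self.db.execute("SELECT text, vec FROM memories"):
            if cosine(vec, json.loads(stored_vec)) >= SIMILARITY_THRESHOLD:
                return ("conflict", stored_text)
        self.db.execute("INSERT INTO memories VALUES (?, ?)", (text, json.dumps(vec)))
        return ("added", text)

    def search(self, vec, k=3):
        # Rank every stored memory by similarity to the query vector.
        rows = self.db.execute("SELECT text, vec FROM memories").fetchall()
        ranked = sorted(rows, key=lambda row: cosine(vec, json.loads(row[1])), reverse=True)
        return [text for text, _ in ranked[:k]]
```

<p>A real implementation would add recency weighting and reranking on top of the similarity ranking, as described above.</p>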
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47094213">https://news.ycombinator.com/item?id=47094213</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 20 Feb 2026 21:27:32 +0000</pubDate><link>https://github.com/randomm/vipune</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=47094213</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094213</guid></item><item><title><![CDATA[New comment by jannniii in "GPT‑5.3‑Codex‑Spark"]]></title><description><![CDATA[
<p>This would be interesting if it were an open-weights model.</p>
]]></description><pubDate>Thu, 12 Feb 2026 20:51:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46994979</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=46994979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46994979</guid></item><item><title><![CDATA[New comment by jannniii in "GLM-5: Targeting complex systems engineering and long-horizon agentic tasks"]]></title><description><![CDATA[
<p>Indeed and I got two words for you:<p>Strix Halo</p>
]]></description><pubDate>Wed, 11 Feb 2026 14:42:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46975537</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=46975537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46975537</guid></item><item><title><![CDATA[New comment by jannniii in "Sheldon Brown's Bicycle Technical Info"]]></title><description><![CDATA[
<p>So happy to see this featured here! I had been tinkering with bikes for a long time before finding Sheldon’s site, but when I did I was dumbstruck by the amount of insight. And to top that, what a person he was. RIP</p>
]]></description><pubDate>Fri, 06 Feb 2026 20:00:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46917418</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=46917418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46917418</guid></item><item><title><![CDATA[New comment by jannniii in "Nanobot: Ultra-Lightweight Alternative to OpenClaw"]]></title><description><![CDATA[
<p>Okay, so is this "inspired" by nanoclaw, which was featured here two days ago?</p>
]]></description><pubDate>Thu, 05 Feb 2026 11:11:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46898422</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=46898422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46898422</guid></item><item><title><![CDATA[New comment by jannniii in "Unauthenticated remote code execution in OpenCode"]]></title><description><![CDATA[
<p>They are a small team and the tool has gotten wildly popular. Which is not to say that slowing down and addressing quality and security issues wouldn't be a good idea.<p>I’ve been an active user of opencode for 7-8 months now and really like the tool, but I’m beginning to get the feeling that the core team’s idea of keeping core development to themselves is not going to scale any longer.<p>Really loving opencode though!</p>
]]></description><pubDate>Tue, 13 Jan 2026 17:14:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46604068</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=46604068</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46604068</guid></item><item><title><![CDATA[New comment by jannniii in "Show HN: I replaced Beads with a faster, simpler Markdown-based task tracker"]]></title><description><![CDATA[
<p>Yes, I get it. Personally I'm dancing between commercial work (GH) and personal projects (no need for GH). Because my orchestration setup works for the former, I end up using it for the latter as well.<p>Maybe I should give tickets a go. The gh CLI does add another HTTP layer and slows things down - it feels silly to be paying for Cerebras if one is slowed down by other tooling.</p>
]]></description><pubDate>Tue, 06 Jan 2026 06:33:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46509371</link><dc:creator>jannniii</dc:creator><comments>https://news.ycombinator.com/item?id=46509371</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46509371</guid></item></channel></rss>