<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hsaliak</title><link>https://news.ycombinator.com/user?id=hsaliak</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 11:47:45 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hsaliak" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hsaliak in "We've raised $17M to build what comes after Git"]]></title><description><![CDATA[
<p>I've long had the same idea... this one has legs.</p>
]]></description><pubDate>Fri, 10 Apr 2026 09:16:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47715483</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47715483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47715483</guid></item><item><title><![CDATA[New comment by hsaliak in "Components of a Coding Agent"]]></title><description><![CDATA[
<p>Tool output truncation helps a lot and is one of the best ways to reduce context bloat. In my coding agent the context is assembled from SQLite. I suffix the message ID so the truncated tool call can be rehydrated if it's needed, and it works great.
My exploration of context management is mostly documented here <a href="https://github.com/hsaliak/std_slop/blob/main/docs/CONTEXT_MANAGEMENT.md" rel="nofollow">https://github.com/hsaliak/std_slop/blob/main/docs/CONTEXT_M...</a></p>
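The suffix-and-rehydrate scheme can be sketched like this (a minimal Python illustration, not std::slop's actual schema; the table, column, and function names are all invented):

```python
import sqlite3

LIMIT = 200  # max characters of tool output kept in the live context

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tool_calls (id INTEGER PRIMARY KEY, output TEXT)")

def store(output):
    """Persist the full output; return a truncated view suffixed with its row id."""
    cur = db.execute("INSERT INTO tool_calls (output) VALUES (?)", (output,))
    if len(output) <= LIMIT:
        return output
    return output[:LIMIT] + f"... [truncated, msg_id={cur.lastrowid}]"

def rehydrate(msg_id):
    """Fetch the untruncated output when the agent actually needs it."""
    return db.execute("SELECT output FROM tool_calls WHERE id = ?",
                      (msg_id,)).fetchone()[0]

view = store("x" * 1000)   # what goes into the assembled context
full = rehydrate(1)        # pulled back on demand via the suffixed id
```

The context window only ever carries the short view; the database remains the source of truth.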
]]></description><pubDate>Sat, 04 Apr 2026 22:36:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47644241</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47644241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47644241</guid></item><item><title><![CDATA[New comment by hsaliak in "Nanobrew: The fastest macOS package manager compatible with brew"]]></title><description><![CDATA[
<p>This is most certainly vibed with a few optimization-focused prompts. Yes - performance is a feature, but so is lack of risk.</p>
]]></description><pubDate>Tue, 24 Mar 2026 19:21:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47507737</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47507737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47507737</guid></item><item><title><![CDATA[New comment by hsaliak in "Show HN: Context Gateway – Compress agent context before it hits the LLM"]]></title><description><![CDATA[
<p>Agreed on the need, and this space needs more exploration that is not going to come from big-cos, as they are incentivised to boost spend. I've been exploring the same problem statement, but with a different approach <a href="https://github.com/hsaliak/std_slop/blob/main/docs/CONTEXT_MANAGEMENT.md" rel="nofollow">https://github.com/hsaliak/std_slop/blob/main/docs/CONTEXT_M...</a>.<p>The comment was more about how to make their approach sticky... I feel that local SLMs can replicate what this product does.</p>
]]></description><pubDate>Fri, 20 Mar 2026 15:46:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47456271</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47456271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47456271</guid></item><item><title><![CDATA[New comment by hsaliak in "I'm OK being left behind, thanks"]]></title><description><![CDATA[
<p>My experience so far tells me that the default path with AI tooling is that it lets us create without learning. So the author is right that they can pay for a seat in this revolution whenever they want.<p>A practitioner with more experience may be a few percentage points more productive, but the median path - grab a subscription, get the tool, prompt - will be mostly good enough.</p>
]]></description><pubDate>Fri, 20 Mar 2026 14:45:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47455328</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47455328</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47455328</guid></item><item><title><![CDATA[New comment by hsaliak in "Show HN: Context Gateway – Compress agent context before it hits the LLM"]]></title><description><![CDATA[
<p>I expect tools to start embedding an SLM in the ~1B range locally for something like this. It will become a feature in a rapidly changing landscape, and its need may disappear in the future. How would you turn this into a sticky product?</p>
]]></description><pubDate>Sat, 14 Mar 2026 02:00:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47372537</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47372537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47372537</guid></item><item><title><![CDATA[New comment by hsaliak in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p><a href="https://github.com/hsaliak/std_slop" rel="nofollow">https://github.com/hsaliak/std_slop</a> a SQLite-centric coding agent. It does a few things differently:
1 - context is completely managed in SQLite
2 - it has a "mail model": basically, it uses the git email workflow as the agentic plan => code => review loop. You become "Linus" in this mode, and the patches are guaranteed bisect-safe.
3 - everything is done in a JavaScript control plane; there are no free-form tools like read / write / patch. Those are available, but only within a JavaScript REPL, so the agent works through that. You get other benefits, such as being able to persist codebase-specific JS functions in the database for future use.<p>Give it a try!</p>
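The control-plane idea in point 3 can be approximated in a few lines (a rough Python stand-in - std::slop's real control plane is JavaScript, and every name here is made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE helpers (name TEXT PRIMARY KEY, body TEXT)")

FILES = {}  # stand-in for the workspace

def run_script(source):
    """Single entry point: every agent action is a script run against a small
    API, instead of separate free-form read/write/patch tool calls."""
    env = {
        "read": FILES.get,
        "write": FILES.__setitem__,
        # persist a repo-specific helper for later reuse
        "save_helper": lambda name, body: db.execute(
            "INSERT OR REPLACE INTO helpers VALUES (?, ?)", (name, body)),
        "load_helper": lambda name: db.execute(
            "SELECT body FROM helpers WHERE name = ?", (name,)).fetchone()[0],
    }
    exec(source, env)

run_script('write("a.txt", "hello")')
run_script("""save_helper("greet", 'write("b.txt", read("a.txt"))')""")
run_script('exec(load_helper("greet"))')  # reuse the persisted helper
```

Because the only surface is "run a script", access control and persistence live in one place rather than being scattered across many tools.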
]]></description><pubDate>Mon, 09 Mar 2026 03:44:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304661</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47304661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304661</guid></item><item><title><![CDATA[New comment by hsaliak in "We should revisit literate programming in the agent era"]]></title><description><![CDATA[
<p>I explored this in std::slop (my clanker) <a href="https://github.com/hsaliak/std_slop" rel="nofollow">https://github.com/hsaliak/std_slop</a>. One of this clanker's differentiating features is that it has only a single tool call, run_js.
The LLM produces JS scripts to do its work. Naturally, I tried to teach it to add comments to these scripts and incorporate literate programming elements. This was interesting because every tool call now 'hydrated' some free-form thinking, but it comes at an output token cost.<p>Output tokens are expensive! In GPT-5.4 it's ~$180 per million tokens!
I've settled for brief descriptions that communicate the 'why' as a result. The code is the documentation, after all.</p>
]]></description><pubDate>Mon, 09 Mar 2026 02:42:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304280</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47304280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304280</guid></item><item><title><![CDATA[New comment by hsaliak in "Agent Safehouse – macOS-native sandboxing for local agents"]]></title><description><![CDATA[
<p>This is a very nice and clean implementation. Related to this - I've been exploring injecting landlock and seccomp profiles directly into the ELF binary, so that applications that are backed by some LLM, but want to 'do the right thing', can lock themselves down. This ships a custom process loader that reads the .sandbox section and applies the policies (not unlike bubblewrap, which uses namespaces). The loading can be pushed to a kernel module in the future.<p><a href="https://github.com/hsaliak/sacre_bleu" rel="nofollow">https://github.com/hsaliak/sacre_bleu</a> very rough around the edges, but it works.
In the past, apps either behaved well or had malicious intent, but with these LLM-backed apps you are going to see apps that want to behave well but cannot guarantee it.
We are going to see a lot of experimentation in this space until the UX settles!</p>
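The self-declared-policy idea, reduced to a toy: the real project writes a proper ELF .sandbox section and applies landlock/seccomp rules, but the embed-then-extract round trip looks roughly like this (a Python sketch with invented names and a fake marker, not sacre_bleu's actual format):

```python
import json

MAGIC = b"\x00SANDBOX\x00"  # stand-in for a dedicated ELF .sandbox section

def embed_policy(binary, policy):
    """Attach a self-declared policy blob to the binary (real tooling would
    write an actual ELF section rather than appending a marker)."""
    return binary + MAGIC + json.dumps(policy).encode()

def read_policy(binary):
    """Loader side: extract the policy before handing control to the program;
    this is the point where landlock/seccomp would be applied."""
    _, sep, blob = binary.partition(MAGIC)
    return json.loads(blob.decode()) if sep else None

prog = b"\x7fELF...machine code..."
locked = embed_policy(prog, {"fs_read": ["/app"], "net": False})
policy = read_policy(locked)  # the loader enforces this, the app can't opt out
```

The point of the pattern is that the policy travels with the binary, so a well-intentioned app is sandboxed even when its LLM-generated behavior can't be guaranteed.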
]]></description><pubDate>Mon, 09 Mar 2026 02:29:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304201</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47304201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304201</guid></item><item><title><![CDATA[New comment by hsaliak in "Google Workspace CLI"]]></title><description><![CDATA[
<p>GCP Next is Apr 22-24. Hope this continues to live after that.</p>
]]></description><pubDate>Thu, 05 Mar 2026 02:08:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47256620</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47256620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47256620</guid></item><item><title><![CDATA[New comment by hsaliak in "Agentic Engineering Patterns"]]></title><description><![CDATA[
<p>I'd like to plug <a href="https://github.com/hsaliak/std_slop/blob/main/docs/mail_model.md" rel="nofollow">https://github.com/hsaliak/std_slop/blob/main/docs/mail_mode...</a> my coding harness (std::slop)'s mail model (a poor name, I admit). I believe this solves a fundamental problem of accumulating errors along with code in your project.<p>This brings the Linux Kernel style patch => discuss => merge-by-maintainer workflow to agents. You get bisect-safe patches that you 'review', provide feedback on, and approve.<p>While a SKILL could mimic this, being built in allows me to place access control and 'gate' destructive actions, so the LLM is forced to follow this workflow. Overall, this works really well for me. I am able to get bisect-safe patches, and then review / re-roll them until I get exactly what I want, then I merge them.<p>Sure, this may be the path to software factories, but it scales 'enough' for medium-size projects, and I've been able to build in a way that I maintain a strong understanding of the code that goes in.</p>
]]></description><pubDate>Wed, 04 Mar 2026 16:59:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47250409</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47250409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47250409</guid></item><item><title><![CDATA[New comment by hsaliak in "Qwen3.5 122B and 35B models offer Sonnet 4.5 performance on local computers"]]></title><description><![CDATA[
<p>No it does not. None of these models have the “depth” that the frontier models have across a variety of conversations, tasks and situations. Working with them is like playing snakes and ladders, you never know when it’s going to do something crazy and set you back.</p>
]]></description><pubDate>Sun, 01 Mar 2026 01:32:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47202673</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47202673</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47202673</guid></item><item><title><![CDATA[New comment by hsaliak in "Addressing Antigravity Bans and Reinstating Access"]]></title><description><![CDATA[
<p>It takes your query, computes the complexity of the request, and tries to route it to the appropriate model. There is a /manual command, I think, to pick the right model.<p>They mask the 429s well in Gemini-Cli - if an endpoint is rate limited, they try another, or route to another model, etc., to keep service availability up.<p>Your experience with the 429s is consistent with mine - the 429s are the first thing they need to fix. Fix that and they have a solid model at a good price point.<p>I use my own coding agent (<a href="https://github.com/hsaliak/std_slop" rel="nofollow">https://github.com/hsaliak/std_slop</a>) and not being able to bring my (now cancelled) AI account with Google to it is a bummer.<p>I'd still use it with the Code Assist Standard license if the Google Cloud API subscription allows for it, but I have no clarification on that.</p>
]]></description><pubDate>Sat, 28 Feb 2026 19:47:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47199394</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47199394</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47199394</guid></item><item><title><![CDATA[New comment by hsaliak in "Addressing Antigravity Bans and Reinstating Access"]]></title><description><![CDATA[
<p>The Gemini-CLI situation is poor. They did not broadly communicate earlier that AI Pro or AI Ultra accounts cannot be used with this API. I specifically remember searching for this info. Seeing this made me wonder if I had missed it. Turns out it was added to the TOS 2 days ago - diff:
<a href="https://github.com/google-gemini/gemini-cli/pull/20488/changes" rel="nofollow">https://github.com/google-gemini/gemini-cli/pull/20488/chang...</a>. I'd be happy to stand corrected here.<p>Antigravity I understand - they are subsidizing it to promote a general IDE - but I don't understand constraining the generative AI backend that Gemini CLI hits.<p>Finally, it's unclear what's allowed and what's not if I purchase API access from Google Cloud here <a href="https://developers.google.com/gemini-code-assist/docs/overview#supported-features" rel="nofollow">https://developers.google.com/gemini-code-assist/docs/overvi...</a><p>The Apache License of this product at this point is rich. Just make it closed source and close the API reference. Why have it out there?</p>
]]></description><pubDate>Sat, 28 Feb 2026 18:20:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47198537</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47198537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47198537</guid></item><item><title><![CDATA[New comment by hsaliak in "Ladybird adopts Rust, with help from AI"]]></title><description><![CDATA[
<p>Fair enough. I'll find a way to publish some of this. I try to cover most of the information in the docs/ folder and keep it up to date.
Putting blog posts in release notes is a good idea!</p>
]]></description><pubDate>Tue, 24 Feb 2026 14:35:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47137678</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47137678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47137678</guid></item><item><title><![CDATA[New comment by hsaliak in "Ladybird adopts Rust, with help from AI"]]></title><description><![CDATA[
<p>Thanks for your interest in this work - I do not blog (maybe I should?), but I have posted a bit more on X about this work.<p>- A bit more on mail mode <a href="https://x.com/hsaliak/status/2020022329154420830" rel="nofollow">https://x.com/hsaliak/status/2020022329154420830</a><p>- On the Lua integration <a href="https://x.com/hsaliak/status/2022911468262350976" rel="nofollow">https://x.com/hsaliak/status/2022911468262350976</a> (I've since disabled the recursion; not every code file is long, and it seems simpler not to do it), but the rest of it is still there<p>- Hotwords for skill activation <a href="https://x.com/hsaliak/status/2024322170353037788" rel="nofollow">https://x.com/hsaliak/status/2024322170353037788</a><p>Also /review and /feedback. /feedback (the non-code version) opens the LLM's last response in an editor so you can give line-by-line comments. Inspired by "not top posting" from mailing lists.</p>
]]></description><pubDate>Mon, 23 Feb 2026 20:51:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47128612</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47128612</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47128612</guid></item><item><title><![CDATA[New comment by hsaliak in "Ladybird adopts Rust, with help from AI"]]></title><description><![CDATA[
<p>This is the way. This exact workflow is my sweet spot.<p>In my coding agent std::slop I've optimized for this workflow:
<a href="https://github.com/hsaliak/std_slop/blob/main/docs/mail_model.md" rel="nofollow">https://github.com/hsaliak/std_slop/blob/main/docs/mail_mode...</a> basically, the idea is that you are the 'maintainer' and you get bisect-safe git patches that you review (or ask a code-reviewer skill or another agent to review). Any change re-rolls the whole stack. Git already supports such a flow, and I added it to the agent. A simple markdown skill does not work because it 'forgets'. A GitHub-based PR flow felt too externally dependent. This workflow is enforced by a 'patcher' skill, and once that's active, tools do not work unless they follow the enforced flow.<p>I think a lot of people are going to feel comfortable using agents this way rather than going full blast. I do all my development this way.</p>
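The git mechanics under that maintainer loop can be sketched end to end (a hypothetical Python round trip over a throwaway repo; std::slop's 'patcher' skill gates and automates this, and the file and branch names are invented):

```python
import pathlib, subprocess, tempfile

def git(repo, *args):
    """Run git in `repo` with a throwaway identity; return stdout."""
    return subprocess.run(
        ["git", "-C", str(repo), "-c", "user.name=agent",
         "-c", "user.email=agent@example.com", *args],
        check=True, capture_output=True, text=True).stdout

work = pathlib.Path(tempfile.mkdtemp())
git(work, "init", "-q")
base = git(work, "symbolic-ref", "--short", "HEAD").strip()  # main or master

(work / "a.txt").write_text("v1\n")
git(work, "add", "a.txt")
git(work, "commit", "-q", "-m", "base")

# The agent works on a topic branch; every commit must stand alone (bisect-safe).
git(work, "checkout", "-q", "-b", "topic")
(work / "a.txt").write_text("v2\n")
git(work, "commit", "-q", "-am", "feature: bump a.txt to v2")

# Export the series as mailbox patches for the 'maintainer' to review...
patches = git(work, "format-patch", base, "-o", str(work / "outbox")).splitlines()

# ...and, once approved, apply it onto the mainline with git am.
git(work, "checkout", "-q", base)
git(work, "am", "-q", *patches)
```

A rejected patch is simply regenerated and re-sent, which is the "any change re-rolls the whole stack" behavior git's email workflow has always had.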
]]></description><pubDate>Mon, 23 Feb 2026 16:38:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47124692</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47124692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47124692</guid></item><item><title><![CDATA[New comment by hsaliak in "Google restricting Google AI Pro/Ultra subscribers for using OpenClaw"]]></title><description><![CDATA[
<p>Google's Pro service (no idea about Ultra, and I have no intention to find out) is riddled with 429s. They have generous quotas for sure, but they really give you very low priority. For example, I still don't have access to Gemini 3.1 from that endpoint.
It's completely uncharacteristic of Google.<p>I analyzed 6k HTTP requests on the Pro account; 23% of those were hit with 429s. (Though not from Gemini-CLI, but from my own agent using Code Assist.) The gemini-cli has a default retry backoff of 5s. That's verifiable in code, and it's a lot.<p>I don't touch the Antigravity endpoint; unlike Code Assist, it's clear that they are subsidizing that for user acquisition on that tool. So perhaps it's OK for them to ban users from it.<p>I like their models, but they also degrade. It's quite easy to see when the models are 'smart' and capacity is available, and when they are 'stupid'. They likely clamp thinking when they are capacity-strapped.<p>Yes, the models are smart, but you really can't "build things", despite the marketing, if you actively beat back your users for trying. I spent a decade at Google, and it's sad to see how they are executing here, despite having solid models in gemini-3-flash and gemini-3.1</p>
]]></description><pubDate>Mon, 23 Feb 2026 02:33:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47117345</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47117345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47117345</guid></item><item><title><![CDATA[New comment by hsaliak in "Gemini 3.1 Pro"]]></title><description><![CDATA[
<p>The eventual nerfing gives me pause. Flash is awesome. What we really want is gemini-3.1-flash :)</p>
]]></description><pubDate>Thu, 19 Feb 2026 16:49:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47075825</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47075825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47075825</guid></item><item><title><![CDATA[New comment by hsaliak in "-fbounds-safety: Enforcing bounds safety for C"]]></title><description><![CDATA[
<p><a href="https://github.com/hsaliak/filc-bazel-template" rel="nofollow">https://github.com/hsaliak/filc-bazel-template</a> I created this recently to make it super easy to get started with fil-c projects. If you find it daunting to get started with the setup in the core distribution and want a 3-4 step approach to building a fil-c enabled binary, then try this.</p>
]]></description><pubDate>Thu, 19 Feb 2026 16:44:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47075760</link><dc:creator>hsaliak</dc:creator><comments>https://news.ycombinator.com/item?id=47075760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47075760</guid></item></channel></rss>