<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: brynary</title><link>https://news.ycombinator.com/user?id=brynary</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 05:44:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=brynary" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: Fabro – open-source dark software factory]]></title><description><![CDATA[
<p>Hi — I created Fabro to free myself from supervising a fleet of Claude Code tabs running in a REPL (read-eval-prompt-loop). REPLs are great for exploration, but once I know what I need I want to be able to walk away while the agents get it done.
(Before building Fabro, I looked for something off the shelf but couldn't find anything that was open source, hype-free, full-featured, and ready to use.)<p>Fabro helps experienced engineers evolve towards a “dark” software factory where the average time between human interventions keeps increasing. It’s easy to throw a Ralph shell script around Claude, but as runtime increases the chance of high-quality output declines.<p>Fabro adds the last mile of guardrails to make it actually work: combining deterministic workflows of agents and commands like linters and test suites with strategically applied human steering. (Similar to Stripe's Minions.)<p>Fabro is multi-model and makes it easy to combine Claude, Gemini, and GPT in ensemble reviews — or delegate coding to faster and cheaper models like Kimi.<p>Software factories work best when combined with cloud VMs (like Daytona) so you get infinitely scalable, secure sandboxes that can run 24/7 and are accessible via SSH, VS Code, and preview links as needed. This can be a bit of a pain to set up today, and Fabro tries to make it as easy as Docker.<p>The closest analog to Fabro today would be something like Factory.ai Droids. However, I think it’s critical for engineers to own their own toolchain, so Fabro is open source (MIT) and you can fork it and customize it anytime.<p>The project is highly active and I’d love any feedback or feature requests. I’ll be on here answering questions today.<p>(I posted this a week or so ago, hoping to engage in some conversation.)<p>-Bryan</p>
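<p>The guardrail loop described above can be sketched as follows. This is a minimal illustration, not Fabro's actual API; the function and parameter names are hypothetical.</p>

```typescript
// Hypothetical guardrail loop: alternate agent work with deterministic
// checks until everything passes or attempts run out.
function guardrailLoop(
  agentStep: () => void,        // e.g. prompt the model to implement the spec
  checks: Array<() => boolean>, // e.g. linter, test suite
  maxAttempts: number = 3
): boolean {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    agentStep();
    if (checks.every((check) => check())) {
      return true; // all deterministic checks green: safe to walk away
    }
    // otherwise the check failures would be fed back to the agent and retried
  }
  return false; // out of attempts: time for human steering
}
```

<p>The deterministic checks are what keep output quality from declining as runtime grows: the agent cannot declare victory until the linters and tests agree.</p>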
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47652404">https://news.ycombinator.com/item?id=47652404</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 05 Apr 2026 18:30:21 +0000</pubDate><link>https://github.com/fabro-sh/fabro</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=47652404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47652404</guid></item><item><title><![CDATA[New comment by brynary in "Show HN: Fabro – The open source dark software factory"]]></title><description><![CDATA[
<p>Hi — I created Fabro to free myself from supervising a fleet of Claude Code tabs running in a REPL (read-eval-prompt-loop). REPLs are great for exploration, but once I know what I need I want to be able to walk away while the agents get it done.<p>(Before building Fabro, I looked for something off the shelf but couldn't find anything that was open source, hype-free, full-featured, and ready to use.)<p>Fabro helps experienced engineers evolve towards a “dark” software factory where the average time between human interventions keeps increasing. It’s easy to throw a Ralph shell script around Claude, but as runtime increases the chance of high-quality output declines.<p>Fabro adds the last mile of guardrails to make it actually work: combining deterministic workflows of agents and commands like linters and test suites with strategically applied human steering. (Similar to Stripe's Minions.)<p>Fabro is multi-model and makes it easy to combine Claude, Gemini, and GPT in ensemble reviews — or delegate coding to faster and cheaper models like Kimi.<p>Software factories work best when combined with cloud VMs (like Daytona) so you get infinitely scalable, secure sandboxes that can run 24/7 and are accessible via SSH, VS Code, and preview links as needed. This can be a bit of a pain to set up today, and Fabro tries to make it as easy as Docker.<p>The closest analog to Fabro today would be something like Factory.ai Droids. However, I think it’s critical for engineers to own their own toolchain, so Fabro is open source (MIT) and you can fork it and customize it anytime.<p>The project is highly active and I’d love any feedback or feature requests. I’ll be on here answering questions today.<p>-Bryan</p>
]]></description><pubDate>Tue, 17 Mar 2026 12:46:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47411913</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=47411913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47411913</guid></item><item><title><![CDATA[Show HN: Fabro – The open source dark software factory]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/fabro-sh/fabro">https://github.com/fabro-sh/fabro</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47411909">https://news.ycombinator.com/item?id=47411909</a></p>
<p>Points: 6</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 17 Mar 2026 12:46:47 +0000</pubDate><link>https://github.com/fabro-sh/fabro</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=47411909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47411909</guid></item><item><title><![CDATA[Show HN: Generated implementation of StrongDM Attractor from Markdown specs]]></title><description><![CDATA[
<p>Yesterday, I used Claude Opus 4.6 agent teams to generate a full TypeScript implementation of the StrongDM Attractor software factory from the Markdown specifications they published: <a href="https://github.com/strongdm/attractor" rel="nofollow">https://github.com/strongdm/attractor</a><p>It took a few hours of mostly light prompting like "Implement the spec" and "Fix the gaps relative to the spec".</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46935880">https://news.ycombinator.com/item?id=46935880</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 08 Feb 2026 16:36:15 +0000</pubDate><link>https://github.com/brynary/attractor</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=46935880</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46935880</guid></item><item><title><![CDATA[New comment by brynary in "AI is forcing us to write good code"]]></title><description><![CDATA[
<p>We're at 100k LOC between the tests and code so far, running in about 500-600ms. We have a few CPU-intensive tests (e.g. cryptography) which I recently moved over to the integration test suite.<p>With no contention for shared resources and no async/IO, it's just function calls running on Bun (JavaScriptCore), where function call latency is measured in nanoseconds. I haven't measured this myself, but the internet seems to suggest JavaScriptCore function calls can run in 2 to 5 nanoseconds.<p>On a computer with 10 cores, fully concurrent, that would imply 10 billion nanoseconds of CPU time in one wall clock second. At 5 nanoseconds per function call, that would imply a theoretical maximum of 2 billion function calls per second.<p>Real-world performance is not going to be anywhere close to that, but where is the time going otherwise?</p>
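<p>The back-of-envelope arithmetic above, spelled out (the 5 ns per call figure is the rough number from the comment, not a measurement):</p>

```typescript
// Theoretical upper bound on function calls per wall-clock second.
const cores = 10;
const nsPerSecond = 1e9;  // ns of CPU time per core per wall-clock second
const nsPerCall = 5;      // rough upper figure for a JSC function call

const totalCpuNs = cores * nsPerSecond;           // 10 billion ns available
const maxCallsPerSecond = totalCpuNs / nsPerCall; // 2 billion calls/second
console.log(maxCallsPerSecond);
```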
]]></description><pubDate>Tue, 30 Dec 2025 00:43:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46428073</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=46428073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46428073</guid></item><item><title><![CDATA[New comment by brynary in "AI is forcing us to write good code"]]></title><description><![CDATA[
<p>Strong agreement with everything in this post.<p>At Qlty, we are going so far as to rewrite hundreds of thousands of lines of code to ensure full test coverage and end-to-end type checking (including database-generated types).<p>I’ll add a few more:<p>1. Zero thrown errors. These effectively disable the type checker and act as goto statements. We use neverthrow for Rust-like Result types in TypeScript.<p>2. Fast auto-formatting and linting. An AI code review is not a substitute for a deterministic result in sub-100ms to guarantee consistency. The auto-formatter is set up as a post-tool-use Claude hook.<p>3. Side-effect-free imports and construction. You should be able to load all the code files and construct an instance of every class in your app without spawning a network connection. This is harder than it sounds, and without it you run into all sorts of trouble with the rest.<p>4. Zero mocks and shared global state. By mocks, I mean mocking frameworks which override functions on existing types or globals. These effectively inject lies into the type checker.<p>Shout out to tsgo, which has dramatically lowered our type checking latency. As the tok/sec of models keeps going up, all the time is going to get bottlenecked on tool calls (read: type checking and tests).<p>With this approach we now have near-100% coverage with a test suite that runs in under 1,000ms.</p>
]]></description><pubDate>Mon, 29 Dec 2025 22:59:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46427038</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=46427038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46427038</guid></item><item><title><![CDATA[New comment by brynary in "Claude Code on the web"]]></title><description><![CDATA[
<p>The most interesting parts of this to me are somewhat buried:<p>- Claude Code has been added to iOS<p>- Claude Code on the Web allows for seamless switching to Claude Code CLI<p>- They have open sourced an OS-native sandboxing system which limits file system and network access _without_ needing containers<p>However, I find the emphasis on limiting the outbound network access somewhat puzzling because the allowlists invariably include domains like gist.github.com and dozens of others which act effectively as public CMS’es and would still permit exfiltration with just a bit of extra effort.</p>
]]></description><pubDate>Mon, 20 Oct 2025 18:54:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45647705</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=45647705</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45647705</guid></item><item><title><![CDATA[New comment by brynary in "Show HN: Pyscn – Python code quality analyzer for vibe coders"]]></title><description><![CDATA[
<p>What benefits do you see from having the agent call a CLI like this via MCP as opposed to just executing the CLI as a shell command and taking action on the stdout?</p>
]]></description><pubDate>Sun, 05 Oct 2025 16:21:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=45482803</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=45482803</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45482803</guid></item><item><title><![CDATA[New comment by brynary in "Show HN: Pyscn – Python code quality analyzer for vibe coders"]]></title><description><![CDATA[
<p>This looks great! Duplication and dead code are especially tricky to catch because they are not visible in diffs.<p>Since you mentioned the implementation details, a couple questions come to mind:<p>1. Are there any research papers you found helpful or influential when building this? For example, I need to read up on using tree edit distance for code duplication.<p>2. How hard do you think this would be to generalize to support other programming languages?<p>I see you are using tree-sitter which supports many languages, but I imagine a challenge might be CFGs and dependencies.<p>I’ll add a Qlty plugin for this (<a href="https://github.com/qltysh/qlty" rel="nofollow">https://github.com/qltysh/qlty</a>) so it can be run with other code quality tools and reported back to GitHub as pass/fail commit statuses and comments. That way, the AI coding agents can take action based on the issues that pyscn finds directly in a cloud dev env.</p>
]]></description><pubDate>Sun, 05 Oct 2025 14:59:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45481994</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=45481994</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45481994</guid></item><item><title><![CDATA[New comment by brynary in "Using Claude Code SDK to reduce E2E test time"]]></title><description><![CDATA[
<p>Historically, this kind of test optimization was done either with static analysis to understand dependency graphs and/or runtime data collected from executing the app.<p>However, those methods are tightly bound to programming languages, frameworks, and interpreters so they are difficult to support across technology stacks.<p>This approach substitutes the intelligence of the LLM to make educated guesses about what tests execute, to achieve the same goal of executing all of the tests that could fail and none of the rest (balancing a precision/recall tradeoff). What’s especially interesting about this to me is that the same technique could be applied to any language or stack with minimal modification.<p>Has anyone seen LLMs in other contexts being substituted for traditional analysis to achieve language agnostic results?</p>
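<p>The selection idea can be sketched like this. A trivial keyword heuristic stands in for the LLM here, and all names are hypothetical; <code>guessAffected</code> is the piece an LLM call would replace.</p>

```typescript
// Language-agnostic test selection: given a diff, pick the subset of
// tests that could fail. A naive filename-token match stands in for
// the LLM's educated guess.
function guessAffected(changedFiles: string[], testNames: string[]): string[] {
  return testNames.filter((test) =>
    changedFiles.some((file) => {
      // "src/auth/login.ts" -> "login"
      const stem = file.split("/").pop()!.split(".")[0];
      return test.includes(stem);
    })
  );
}

const changed = ["src/auth/login.ts"];
const allTests = ["login.test.ts", "billing.test.ts", "signup.test.ts"];
const selected = guessAffected(changed, allTests);
// Run `selected` instead of `allTests`. A missed test (a recall error)
// is the dangerous side of the precision/recall tradeoff.
```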
]]></description><pubDate>Sat, 06 Sep 2025 18:49:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45151874</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=45151874</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45151874</guid></item><item><title><![CDATA[New comment by brynary in "Why Greptile just does code reviews and doesn't also generate code"]]></title><description><![CDATA[
<p>This rings similar to a recent post that was on the front page about red team vs. blue team.<p>Before running LLM-generated code through yet more LLMs, you can run it through traditional static analysis (linters, SAST, auto-formatters). They aren’t flashy but they produce the same results 100% of the time.<p>Consistency is critical if you want to pass/fail a build on the results. Nobody wants a flaky code reviewer robot, just like flaky tests are the worst.<p>I imagine code review will evolve into a three tier pyramid:<p>1. Static analysis (instant, consistent) — e.g using Qlty CLI (<a href="https://github.com/qltysh/qlty" rel="nofollow">https://github.com/qltysh/qlty</a>) as a Claude Code or Git hook<p>2. LLMs — Has the advantage of being able to catch semantic issues<p>3. Human<p>We make sure commits pass each level in succession before moving on to the next.</p>
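<p>The tiered gating above can be sketched as a simple pipeline, with placeholder tier functions standing in for the real static analysis, LLM, and human stages:</p>

```typescript
// Each tier runs only if the previous one passed, so cheap deterministic
// checks gate the expensive (and flaky, and human) ones.
type Tier = { name: string; run: () => boolean };

function reviewPyramid(tiers: Tier[]): string[] {
  const passed: string[] = [];
  for (const tier of tiers) {
    if (!tier.run()) return passed; // stop at the first failing tier
    passed.push(tier.name);
  }
  return passed; // commit cleared every level
}
```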
]]></description><pubDate>Mon, 04 Aug 2025 21:22:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44791508</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=44791508</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44791508</guid></item><item><title><![CDATA[New comment by brynary in "June.so Acquired by Amplitude"]]></title><description><![CDATA[
<p>As an early June customer, this is a big disappointment. We specifically selected June over Mixpanel and Amplitude and were happy with it.<p>I wish there was more honesty in the post about what happened. When you boil down the details, it basically just seems to say the founders decided they would rather become (the X-hundredth) engineers at Amplitude.<p>Unless they were running out of money, I don’t see how they’ll have a “bigger impact” doing that instead of building a fresh take on the B2B analytics space.</p>
]]></description><pubDate>Tue, 08 Jul 2025 19:18:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44503127</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=44503127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44503127</guid></item><item><title><![CDATA[New comment by brynary in "Claude Code now supports hooks"]]></title><description><![CDATA[
<p>When using Claude Code cloud, in order to create signed commits, Claude uses the GitHub API instead of the git CLI.</p>
]]></description><pubDate>Tue, 01 Jul 2025 13:48:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44433909</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=44433909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44433909</guid></item><item><title><![CDATA[New comment by brynary in "Claude Code now supports Hooks"]]></title><description><![CDATA[
<p>This can be implemented at the line level if the linter is Git aware.</p>
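<p>A sketch of what Git-aware, line-level filtering means, assuming the changed-line numbers have already been parsed from <code>git diff -U0</code> (hard-coded here):</p>

```typescript
// Keep only lint issues that land on lines the current diff touched,
// so pre-existing issues elsewhere in the file don't block the commit.
type Issue = { file: string; line: number; message: string };

function filterToDiff(
  issues: Issue[],
  changedLines: Map<string, Set<number>> // file -> changed line numbers
): Issue[] {
  return issues.filter(
    (issue) => changedLines.get(issue.file)?.has(issue.line) ?? false
  );
}
```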
]]></description><pubDate>Tue, 01 Jul 2025 02:00:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44429877</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=44429877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44429877</guid></item><item><title><![CDATA[New comment by brynary in "Claude Code now supports hooks"]]></title><description><![CDATA[
<p>This closes a big feature gap. One thing that may not be obvious is that because of the way Claude Code generates commits, regular Git hooks won’t work. (At least, in most configurations.)<p>We’ve been using CLAUDE.md instructions to tell Claude to auto-format code with the Qlty CLI (<a href="https://github.com/qltysh/qlty">https://github.com/qltysh/qlty</a>) but Claude is a bit hit and miss in following them. The determinism here is a win.<p>It looks like the events that can be hooked are somewhat limited to start, and I wonder if they will make it easy to hook Git commit and Git push.</p>
]]></description><pubDate>Tue, 01 Jul 2025 00:40:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44429455</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=44429455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44429455</guid></item><item><title><![CDATA[New comment by brynary in "Show HN: Free local security checks for AI coding in VSCode, Cursor and Windsurf"]]></title><description><![CDATA[
<p>@jaimefjorge — Congrats on the launch!<p>How would you compare this to the Qlty CLI (<a href="https://github.com/qltysh/qlty">https://github.com/qltysh/qlty</a>)?<p>Do you plan to support CLI-based workflows for tools like Claude Code and linting?</p>
]]></description><pubDate>Wed, 18 Jun 2025 18:54:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44312373</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=44312373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44312373</guid></item><item><title><![CDATA[AI Code Is Exploding. Your Verification Needs to Catch Up]]></title><description><![CDATA[
<p>Article URL: <a href="https://qlty.sh/blog/ai-code-is-exploding-your-verification-needs-to-catch-up">https://qlty.sh/blog/ai-code-is-exploding-your-verification-needs-to-catch-up</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43753747">https://news.ycombinator.com/item?id=43753747</a></p>
<p>Points: 1</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 21 Apr 2025 16:27:02 +0000</pubDate><link>https://qlty.sh/blog/ai-code-is-exploding-your-verification-needs-to-catch-up</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=43753747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43753747</guid></item><item><title><![CDATA[New comment by brynary in "Dockerfmt: A Dockerfile Formatter"]]></title><description><![CDATA[
<p>It's great to see auto-formatting continuing to become universal across all languages. As LLMs write more code, full auto-formatting helps keep diffs clean.<p>For anyone looking to try dockerfmt, I just added a plugin to Qlty CLI, which is available in v0.508.0. The plugin took about ten minutes to add: <a href="https://github.com/qltysh/qlty/blob/main/qlty-plugins/plugins/linters/dockerfmt/plugin.toml">https://github.com/qltysh/qlty/blob/main/qlty-plugins/plugin...</a><p>Full disclosure: I'm the founder of Qlty, which produces a universal code linter and formatter, Qlty CLI (<a href="https://github.com/qltysh/qlty">https://github.com/qltysh/qlty</a>). It is completely free and published under a Fair Source license.</p>
]]></description><pubDate>Wed, 09 Apr 2025 03:39:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43628687</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=43628687</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43628687</guid></item><item><title><![CDATA[Show HN: Qlty CLI – Meta-linter and auto-formatter for 20 programming languages]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/qltysh/qlty">https://github.com/qltysh/qlty</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42177020">https://news.ycombinator.com/item?id=42177020</a></p>
<p>Points: 6</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 18 Nov 2024 21:04:28 +0000</pubDate><link>https://github.com/qltysh/qlty</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=42177020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42177020</guid></item><item><title><![CDATA[New comment by brynary in "GitHub Insights"]]></title><description><![CDATA[
<p>This HBR article "Many Strategies Fail Because They’re Not Actually Strategies", while not entirely about metrics, has some great recommendations for how leaders can avoid these pitfalls:<p><a href="https://hbr.org/2017/11/many-strategies-fail-because-theyre-not-actually-strategies" rel="nofollow">https://hbr.org/2017/11/many-strategies-fail-because-theyre-...</a><p>Their top recommendations are: A) Communicate the logic behind what you are trying to achieve; B) Make strategy execution a two-way process, not top-down; C) Let selection happen organically, through systems that cause strong initiatives to rise to the top; D) Find ways to make change the default, to help move beyond the status quo and existing habits</p>
]]></description><pubDate>Wed, 06 May 2020 20:41:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=23096046</link><dc:creator>brynary</dc:creator><comments>https://news.ycombinator.com/item?id=23096046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23096046</guid></item></channel></rss>