<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mackross</title><link>https://news.ycombinator.com/user?id=mackross</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 15:18:52 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mackross" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mackross in "What years of production-grade concurrency teaches us about building AI agents"]]></title><description><![CDATA[
<p>Durable Objects looks interesting! Thanks for the link.</p>
]]></description><pubDate>Mon, 23 Feb 2026 22:40:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47130037</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=47130037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47130037</guid></item><item><title><![CDATA[New comment by mackross in "What years of production-grade concurrency teaches us about building AI agents"]]></title><description><![CDATA[
<p>I’m a huge Elixir fan, but imho it doesn’t solve durable execution out of the box, which is a major problem that often gets swept under the rug by BEAM fanboys. Because ETS and supervision trees don’t play well with deployment via restart, you’ve got to write some level of execution state to a relational database or to files. You can choose persistent ETS, Mnesia, etc. (each with its own tradeoffs and some gnarly data-loss scenarios buried deep in the documentation). But whatever you choose, in my experience you will need to spend a fair amount of time considering how your processes are going to survive restarts. Alternatively, Oban is nice, but it’s a heavy layer that makes control flow more complex to follow. And yes, you can roll your own hot code deploy, run on persistent VMs/bare metal, and be a true BEAM native, but that’s not easy out of the box and comes with its own set of footguns. If I’m missing something, I would love for someone to explain how to do things better, as I find this to be a big pain point whenever I pick up Elixir. I want to use the beautiful primitives, but I feel I’m always fighting durable execution in the event of a server restart. I wish a temporal.io client, or something with similar guarantees, were baked into the language/frameworks.</p>
]]></description><pubDate>Thu, 19 Feb 2026 12:22:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47073031</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=47073031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47073031</guid></item><item><title><![CDATA[New comment by mackross in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>- A Phoenix/Ecto-inspired, batteries-included framework for Go. It uses Datastar for real-time bindings (it can do LiveView-like things, but my personal favorite is real-time form validation out of the box). Hot reload with templ, daisyUI, and Tailwind (no npm required). The routes file provides metadata on each route, so type-safe route helpers are generated for views and handlers.</p>
]]></description><pubDate>Mon, 09 Feb 2026 18:30:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46948938</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=46948938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46948938</guid></item><item><title><![CDATA[New comment by mackross in "Lix – universal version control system for binary files"]]></title><description><![CDATA[
<p>Same name as my Phoenix-inspired framework for Go: <a href="https://codeberg.org/lixgo/lix" rel="nofollow">https://codeberg.org/lixgo/lix</a></p>
]]></description><pubDate>Thu, 22 Jan 2026 18:23:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46723152</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=46723152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46723152</guid></item><item><title><![CDATA[New comment by mackross in "Mistral 3 family of models released"]]></title><description><![CDATA[
<p>Cool app. I couldn’t see a way to report an error in one of the default expressions.</p>
]]></description><pubDate>Wed, 03 Dec 2025 12:32:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46133765</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=46133765</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46133765</guid></item><item><title><![CDATA[New comment by mackross in "Google, Nvidia, and OpenAI"]]></title><description><![CDATA[
<p>Guess my edit didn’t work…</p>
]]></description><pubDate>Mon, 01 Dec 2025 22:48:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46114506</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=46114506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46114506</guid></item><item><title><![CDATA[New comment by mackross in "Google, Nvidia, and OpenAI"]]></title><description><![CDATA[
<p>An often-overlooked extra advantage for Google is their massive existing ad inventory. If LLMs do end up being ad-supported and both products are roughly the same, Google wins. The large supply of ads direct from a diverse set of advertisers means they can fill more ad slots with higher-quality ads, at a higher price, and at a lower cost. They’re also already staffed with an enormous amount of talent for ad optimization. This advantage alone would translate into higher sustained margins (even assuming similar costs), and given TPUs it might be even greater. This, plus the gobs of cash they already spin off and their massive war chest, means they can spend an ungodly amount on user acquisition. It’s their search playbook all over again.</p>
]]></description><pubDate>Mon, 01 Dec 2025 19:10:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46111643</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=46111643</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46111643</guid></item><item><title><![CDATA[New comment by mackross in "Google, Nvidia, and OpenAI"]]></title><description><![CDATA[
<p>An often-overlooked extra advantage for Google is their massive existing ad inventory. If LLMs do end up being ad-supported and both products are roughly the same, Google wins. The large supply of ads direct from a diverse set of advertisers means they can fill more ad slots with higher-quality ads, at a higher price, and at a lower cost. They’re also already staffed with an enormous amount of talent for ad optimization. This advantage alone would translate into higher sustained margins (even assuming similar costs), and given TPUs it might be even greater. This, plus the gobs of cash they already spin off and their massive war chest, means they can spend an ungodly amount on user acquisition. It’s their search playbook all over again.</p>
]]></description><pubDate>Mon, 01 Dec 2025 19:10:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46111633</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=46111633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46111633</guid></item><item><title><![CDATA[New comment by mackross in "Fnox, a secret manager that pairs well with mise"]]></title><description><![CDATA[
<p>Love the thought put into mise and now fnox. They’re a joy to use.</p>
]]></description><pubDate>Mon, 27 Oct 2025 17:56:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45724218</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=45724218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45724218</guid></item><item><title><![CDATA[New comment by mackross in "Reverse engineering iWork"]]></title><description><![CDATA[
<p>Amazing work by the author!</p>
]]></description><pubDate>Wed, 15 Oct 2025 17:38:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45595994</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=45595994</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45595994</guid></item><item><title><![CDATA[New comment by mackross in "Uncertain<T>"]]></title><description><![CDATA[
<p>Always enjoy mattt’s work. Looks like a great library.</p>
]]></description><pubDate>Thu, 28 Aug 2025 18:28:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45055371</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=45055371</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45055371</guid></item><item><title><![CDATA[Show HN: Instruct LLMs to do what you want in Ruby]]></title><description><![CDATA[
<p>I’ve been working on this gem part-time for a few months now. The API is not yet fully stable, so I wouldn’t recommend it for anything other than experimenting. Nevertheless, my company is using it in production (as of today :)), so it seemed like a nice time to share.<p>So why did I write yet another LLM prompting library?<p>I found the existing Ruby ones either too abstract (hiding the LLM’s capabilities behind unseen prompts), too low-level (leaving my classes hard to follow and littered with boilerplate for managing prompts and responses), or built on class-level abstractions (forcing me to create classes when I didn’t want to).<p>After reading an early version of Patterns of Application Development Using AI by Obie Fernandez and using Obie’s library raix, I felt inspired. The book has many great patterns, and raix’s transcript management and tool management were the first I’d used that felt Ruby-ish. At the same time, libraries in the Python community such as guidance, DSPy, LangSmith, and TEXTGRAD had caught my eye. I liked what the cross-platform BAML was doing too, though I didn’t love the code generation and freemium aspects.<p>So, with motivation high, I set out to build an opinionated library of gems that improves my Ruby (and Rails) LLM developer experience.<p>The first gem (this one) is instruct. It is the flexible foundation that the other gems will build on. While the API is similar to guidance, it has a different architecture based on attributed strings and middleware, which enables some unique features (like async guardrails, content filters, self-healing, auto-continuation, and native multi-modal support).<p>I’m currently working on a hopefully elegant API that makes requesting and handling streaming structured output easy (taking inspiration from BAML, but with automatic upgrades to JSON Schema if the API supports it). Along with that, I’ve been working on a conversational-memory middleware that automatically prunes irrelevant historic bits of the conversation transcript. I hope this keeps the model more steerable without losing crucial details.<p>Thanks in advance for taking a look and providing any constructive feedback or ideas. Lastly, if you’re interested in contributing, please message me.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42470719">https://news.ycombinator.com/item?id=42470719</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 20 Dec 2024 12:58:34 +0000</pubDate><link>https://github.com/instruct-rb/instruct</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=42470719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42470719</guid></item><item><title><![CDATA[New comment by mackross in "Show HN: Graph-Based Editor for LLM Workflows"]]></title><description><![CDATA[
<p>Very cool :) Can it do just observability, or do you have to use it for all prompting?</p>
]]></description><pubDate>Tue, 17 Dec 2024 10:06:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42440014</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=42440014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42440014</guid></item><item><title><![CDATA[New comment by mackross in "Inside the university AI cheating crisis"]]></title><description><![CDATA[
<p>Together with a university, we’ve built a solution to this problem that is pretty low-effort to implement, but very few professors can be bothered to even try it out (the apathy and red tape are unreal). Honestly, it has been disheartening that distribution is so tough, as the results have been great for those who are using it.</p>
]]></description><pubDate>Mon, 16 Dec 2024 06:09:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=42428377</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=42428377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42428377</guid></item><item><title><![CDATA[New comment by mackross in "The Problem with Reasoners"]]></title><description><![CDATA[
<p>I love this — it captures what I’ve been struggling to articulate after using o1 a lot.</p>
]]></description><pubDate>Wed, 27 Nov 2024 11:24:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42255170</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=42255170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42255170</guid></item><item><title><![CDATA[New comment by mackross in "A short introduction to Interval Tree Clocks"]]></title><description><![CDATA[
<p>I used these in a distributed sync system around 2013–2014. When Postgres got JSON support we ended up going centralized for that product, and a simple logical clock was all we needed. Nevertheless, I still think they’re very cool.</p>
]]></description><pubDate>Sun, 24 Nov 2024 09:06:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42226856</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=42226856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42226856</guid></item><item><title><![CDATA[New comment by mackross in "What's New in Ruby on Rails 8"]]></title><description><![CDATA[
<p>Still possible: Propshaft works perfectly with the official jsbundling-rails and cssbundling-rails gems, which let you add any JS build pipeline as a build step.</p>
]]></description><pubDate>Tue, 08 Oct 2024 09:49:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=41775440</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=41775440</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41775440</guid></item><item><title><![CDATA[New comment by mackross in "Theres a Tool to Catch Students Cheating with ChatGPT. OpenAI Hasn't Released It"]]></title><description><![CDATA[
<p>We built a simple but novel solution that is far more reliable and works completely differently from GPTZero and OpenAI's methods. I'm not posting a link as we're not ready for the HN hug of death, but please PM me if interested.<p>The saddest thing is that this project has been one of the most demoralizing I've ever taken on. Day to day we see so many students being failed by teachers and school leadership who care more about "adapting to AI" than real student outcomes today.<p>In practice, we've found teachers generally don't want to have the difficult conversations with students even when hard evidence of cheating is handed to them.<p>And generally, school/university/college leadership have no real tactics for implementing their "AI strategy" other than training their own chatbots (wtf) and "adapting assessment to use AI".<p>Unfortunately, a simple non-AI fix to the problem is definitely not as good for their careers.<p>IMHO, without a change we're creating a pretty bleak future for students over the next few years.</p>
]]></description><pubDate>Mon, 05 Aug 2024 10:04:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=41159653</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=41159653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41159653</guid></item><item><title><![CDATA[New comment by mackross in "How to build quickly"]]></title><description><![CDATA[
<p>I find programming outside-in ends with a better design and is generally faster than inside-out. I’ve had a similar experience with the rest of the advice.</p>
]]></description><pubDate>Sun, 04 Aug 2024 19:22:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=41155655</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=41155655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41155655</guid></item><item><title><![CDATA[New comment by mackross in "Magnesium Depletion Score and Metabolic Syndrome in US Adults"]]></title><description><![CDATA[
<p>Exact same thing happened to me</p>
]]></description><pubDate>Sat, 20 Apr 2024 08:26:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=40095761</link><dc:creator>mackross</dc:creator><comments>https://news.ycombinator.com/item?id=40095761</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40095761</guid></item></channel></rss>