<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: davidsainez</title><link>https://news.ycombinator.com/user?id=davidsainez</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 20:58:07 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=davidsainez" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by davidsainez in "Red Hat takes on Docker Desktop with its enterprise Podman Desktop build"]]></title><description><![CDATA[
<p>I put off podman for a while because of claims of compatibility issues, which is unfortunate because I've had an excellent experience since switching over. Can you point to specific issues you've had (not doubting, just curious)?<p>I've also heard a lot of recommendations for OrbStack, but I haven't had problems with speed either. And I could never stomach using a proprietary system for such a core part of my workflow.<p>For context, I use containers for practically everything and I run some decently complex workflows on them: fullstack node codebases, networking, persistent volumes, mounting, watch mode, etc. Red Hat knocked it out of the park with podman!</p>
]]></description><pubDate>Wed, 25 Feb 2026 18:06:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47155223</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=47155223</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47155223</guid></item><item><title><![CDATA[New comment by davidsainez in "Ghostty is now non-profit"]]></title><description><![CDATA[
<p>Ever heard of Debian or Linux?</p>
]]></description><pubDate>Thu, 04 Dec 2025 07:38:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46144822</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46144822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46144822</guid></item><item><title><![CDATA[New comment by davidsainez in "Arcee Trinity Mini: US-Trained Moe Model"]]></title><description><![CDATA[
<p>Excited to put this through its paces. It seems most directly comparable to GPT-OSS-20B. Comparing their numbers on the Together API: Trinity Mini is slightly less expensive ($0.045/$0.15 vs. $0.05/$0.20) and seems to have better latency and throughput numbers.</p>
]]></description><pubDate>Tue, 02 Dec 2025 04:46:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46117657</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46117657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46117657</guid></item><item><title><![CDATA[New comment by davidsainez in "Arcee AI Trinity Mini and Nano – US based open weight models"]]></title><description><![CDATA[
<p>Why would that undermine its integrity? AFAICT there is a selection of "open" US-based LLMs to choose from: Google's Gemma, Microsoft's Phi, Meta's Llama, and OpenAI's GPT-OSS, with Phi licensed under MIT and GPT-OSS under Apache 2.0.</p>
]]></description><pubDate>Tue, 02 Dec 2025 04:45:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46117650</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46117650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46117650</guid></item><item><title><![CDATA[New comment by davidsainez in "How we built the v0 iOS app"]]></title><description><![CDATA[
<p>I find the existence of opennext convincing proof of lock-in: <a href="https://blog.logrocket.com/opennext-next-js-portability/" rel="nofollow">https://blog.logrocket.com/opennext-next-js-portability/</a><p>Personally, I don’t bother with nextjs at all.</p>
]]></description><pubDate>Mon, 01 Dec 2025 21:37:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46113585</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46113585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46113585</guid></item><item><title><![CDATA[New comment by davidsainez in "Migrating the main Zig repository from GitHub to Codeberg"]]></title><description><![CDATA[
<p>But to determine its merit a maintainer must first donate their time and read through the PR.<p>LLMs reduce the effort to create a plausible PR down to virtually zero. Requiring a human to write the code is a good indicator that A. the PR has at least some technical merit and B. the human cares enough about the code to bother writing a PR in the first place.</p>
]]></description><pubDate>Thu, 27 Nov 2025 04:19:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46065591</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46065591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46065591</guid></item><item><title><![CDATA[New comment by davidsainez in "Migrating the main Zig repository from GitHub to Codeberg"]]></title><description><![CDATA[
<p>Not wanting to review and maintain code that someone didn't even bother to write themselves is childish?</p>
]]></description><pubDate>Thu, 27 Nov 2025 03:23:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46065222</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46065222</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46065222</guid></item><item><title><![CDATA[New comment by davidsainez in "Migrating the main Zig repository from GitHub to Codeberg"]]></title><description><![CDATA[
<p>> works flawlessly<p>> intermittent outages<p>Those seem like conflicting statements to me. Last outage was only 13 days ago: <a href="https://news.ycombinator.com/item?id=45915731">https://news.ycombinator.com/item?id=45915731</a>.<p>Also, there have been increasing reports of open source maintainers dealing with LLM-generated PRs: <a href="https://news.ycombinator.com/item?id=46039274">https://news.ycombinator.com/item?id=46039274</a>. GitHub seems perfectly positioned to help manage that issue, but in all likelihood will do nothing about it: '"Either you have to embrace the AI, or you get out of your career," Dohmke wrote, citing one of the developers who GitHub interviewed.'<p>I used to help maintain a popular open source library and I do not envy what open source maintainers are now up against.</p>
]]></description><pubDate>Thu, 27 Nov 2025 03:04:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46065110</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46065110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46065110</guid></item><item><title><![CDATA[New comment by davidsainez in "Claude Opus 4.5"]]></title><description><![CDATA[
<p>AFAICT, Kimi K2 was the first to apply this technique [1]. I wonder if Anthropic came up with it independently or if they trained a model in 5 months after seeing Kimi’s performance.<p>1: <a href="https://www.decodingdiscontinuity.com/p/open-source-inflection-point-kimi2-ai-competitive-dynamics" rel="nofollow">https://www.decodingdiscontinuity.com/p/open-source-inflecti...</a></p>
]]></description><pubDate>Tue, 25 Nov 2025 00:23:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46040995</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46040995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46040995</guid></item><item><title><![CDATA[New comment by davidsainez in "Claude Opus 4.5"]]></title><description><![CDATA[
<p>I never claimed that it was being done in secrecy. Here is another example: <a href="https://groq.com/blog/inside-the-lpu-deconstructing-groq-speed" rel="nofollow">https://groq.com/blog/inside-the-lpu-deconstructing-groq-spe...</a>.<p>I have seen people mention openrouter multiple times here on HN: <a href="https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=openrouter%20quant&sort=byDate&type=comment" rel="nofollow">https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...</a><p>Again, I'm not claiming malicious intent. But model performance depends on a number of factors and the end user just sees benchmarks for a specific configuration. For me to have a high degree of confidence in a provider, I would need to see open and continuous benchmarking of the end-user API.</p>
]]></description><pubDate>Mon, 24 Nov 2025 22:33:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46040192</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46040192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46040192</guid></item><item><title><![CDATA[New comment by davidsainez in "Claude Opus 4.5"]]></title><description><![CDATA[
<p>There are well-documented cases of performance degradation: <a href="https://www.anthropic.com/engineering/a-postmortem-of-three-recent-issues" rel="nofollow">https://www.anthropic.com/engineering/a-postmortem-of-three-...</a>.<p>The real issue is that there is no reliable system currently in place for the end user (other than being willing to burn the cash and run your own benchmarks regularly) to detect changes in performance.<p>It feels to me like a perfect storm. A combination of high cost of inference, extreme competition, and the statistical nature of LLMs makes it very tempting for a provider to tune their infrastructure in order to squeeze more volume from their hardware. I don't mean to imply bad-faith actors: things are moving at breakneck speed and people are trying anything to see what sticks. But the problem persists: people are building on systems that are in constant flux (for better or for worse).</p>
]]></description><pubDate>Mon, 24 Nov 2025 21:42:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46039720</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46039720</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46039720</guid></item><item><title><![CDATA[New comment by davidsainez in "FAWK: LLMs can write a language interpreter"]]></title><description><![CDATA[
<p>Thanks for sharing. I hear people make extraordinary claims about LLMs (not saying that is what you are doing), but it's hard to evaluate exactly what they mean without seeing the results. I've been working on a similar project (a static analysis tool) and I've been using Sonnet 4.5 to help me build it. On cursory review it produces acceptable results, but closer inspection reveals obvious performance or architectural mistakes. In its current state, one-shotted LLM code feels like wood filler: very useful in many cases, but I would not trust it to be load-bearing.</p>
]]></description><pubDate>Fri, 21 Nov 2025 16:53:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46006189</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46006189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46006189</guid></item><item><title><![CDATA[New comment by davidsainez in "Exploring the Fragmentation of Wayland, an xdotool adventure"]]></title><description><![CDATA[
<p>Access to virtually infinite cash had more to do with Android's success than the source being proprietary.</p>
]]></description><pubDate>Fri, 21 Nov 2025 03:12:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46000820</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=46000820</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46000820</guid></item><item><title><![CDATA[New comment by davidsainez in "Why Zig Is Quietly Doing What Rust Couldn't: Staying Simple"]]></title><description><![CDATA[
<p>I think Golang (mostly) successfully resisted this temptation.</p>
]]></description><pubDate>Thu, 20 Nov 2025 09:05:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45990580</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=45990580</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45990580</guid></item><item><title><![CDATA[New comment by davidsainez in "GitHub: Git operation failures"]]></title><description><![CDATA[
<p>Doesn’t have to be an in-house system; just basic redundancy is fine, e.g. a simple hook that pushes to both GitHub and GitLab.</p>
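<p>To make that concrete, here is a minimal sketch of such a hook in Python (the remote names "origin" and "backup" and the post-commit trigger are assumptions for illustration; any two configured remotes would do):</p><pre><code>#!/usr/bin/env python3
# Sketch of a .git/hooks/post-commit script that mirrors each commit to two
# remotes for basic redundancy. Remote names are illustrative: "origin"
# (e.g. GitHub) and "backup" (e.g. GitLab) must already be configured.
import subprocess
import sys

REMOTES = ["origin", "backup"]  # hypothetical remote names

def current_branch() -> str:
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def main() -> int:
    branch = current_branch()
    failed = [r for r in REMOTES if subprocess.run(["git", "push", r, branch]).returncode != 0]
    if failed:
        print("push failed for: " + ", ".join(failed), file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
</code></pre><p>Plain git can also get there without a script: "git remote set-url --add --push" lets one remote carry two push URLs, so a single "git push" fans out to both hosts.</p>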
]]></description><pubDate>Tue, 18 Nov 2025 21:47:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45972646</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=45972646</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45972646</guid></item><item><title><![CDATA[New comment by davidsainez in "AI World Clocks"]]></title><description><![CDATA[
<p>Sure, we are still closer to alchemy than materials science, but it's still early days. Consider this blogpost that was on the front page today: <a href="https://www.levs.fyi/blog/2-years-of-ml-vs-1-month-of-prompting/#footnotes" rel="nofollow">https://www.levs.fyi/blog/2-years-of-ml-vs-1-month-of-prompt...</a>. The table at the bottom shows a generally steady increase in performance just by iterating on prompts. It feels like we are on the path to true engineering.</p>
]]></description><pubDate>Sat, 15 Nov 2025 02:39:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45934593</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=45934593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45934593</guid></item><item><title><![CDATA[New comment by davidsainez in "Honda: 2 years of ml vs 1 month of prompting - heres what we learned"]]></title><description><![CDATA[
<p>Thanks for sharing! I am working on a RAG engine and that document provides great guidance.<p>And, agreed, each individual technique seems marginal, but they really add up. What seems to be missing is some automated layer that determines the best way to chunk documents into embeddings. My use case is mostly normalized, mostly technical documents, so I have a pretty clear idea of how to chunk to preserve semantics. But I imagine that for generalized documents it is a lot trickier.</p>
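<p>To make that concrete, here is roughly the heading-aware chunking I have in mind for normalized technical documents, as a rough Python sketch (the markdown-style heading regex and the 1500-character cap are illustrative assumptions, not anything from the linked document):</p><pre><code>import re

# Sketch of heading-aware chunking for normalized technical documents:
# split on markdown-style headings first, then split any oversized section
# on paragraph boundaries. The heading pattern and size cap are illustrative.
HEADING = re.compile(r"^#{1,6}\s", re.MULTILINE)
MAX_CHARS = 1500

def chunk(text: str) -> list[str]:
    # One candidate section per heading; keep any preamble before the first heading.
    starts = [m.start() for m in HEADING.finditer(text)]
    if not starts or starts[0] != 0:
        starts.insert(0, 0)
    bounds = starts[1:] + [len(text)]
    sections = [text[a:b].strip() for a, b in zip(starts, bounds)]

    chunks = []
    for section in sections:
        if len(section) > MAX_CHARS:
            # Oversized section: pack whole paragraphs up to the cap.
            buf = ""
            for para in section.split("\n\n"):
                if buf and len(buf) + len(para) > MAX_CHARS:
                    chunks.append(buf.strip())
                    buf = ""
                buf += para + "\n\n"
            if buf.strip():
                chunks.append(buf.strip())
        elif section:
            chunks.append(section)
    return chunks
</code></pre>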
]]></description><pubDate>Fri, 14 Nov 2025 18:55:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45930458</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=45930458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45930458</guid></item><item><title><![CDATA[New comment by davidsainez in "Honda: 2 years of ml vs 1 month of prompting - heres what we learned"]]></title><description><![CDATA[
<p>> We tried multiple vectorization and classification approaches. Our data was heavily imbalanced and skewed towards negative cases. We found that TF-IDF with 1-gram features paired with XGBoost consistently emerged as the winner.</p>
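<p>For anyone curious what that winning baseline looks like, here is a minimal sketch with scikit-learn and XGBoost (the file name, column names, and the scale_pos_weight handling of the class imbalance are assumptions for illustration; the article does not give those details):</p><pre><code># Sketch of the TF-IDF (1-gram) + XGBoost baseline described in the quote.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("tickets.csv")  # hypothetical dataset with "text" and "label" columns

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=0
)

vec = TfidfVectorizer(ngram_range=(1, 1))  # 1-gram (unigram) features only
X_train_tfidf = vec.fit_transform(X_train)
X_test_tfidf = vec.transform(X_test)

# Weight the positive class by the negative/positive ratio to counter the skew
# toward negative cases mentioned in the quote.
neg, pos = int((y_train == 0).sum()), int((y_train == 1).sum())
clf = XGBClassifier(scale_pos_weight=neg / pos)
clf.fit(X_train_tfidf, y_train)

print("held-out accuracy:", clf.score(X_test_tfidf, y_test))
</code></pre><p>With heavy imbalance, accuracy alone is misleading, so in practice precision/recall or PR-AUC on the minority class would be the number to watch.</p>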
]]></description><pubDate>Fri, 14 Nov 2025 13:12:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45926398</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=45926398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45926398</guid></item><item><title><![CDATA[New comment by davidsainez in "Claude Code was down"]]></title><description><![CDATA[
<p>The web version of Sonnet is down for me as well. <a href="https://status.claude.com/" rel="nofollow">https://status.claude.com/</a></p>
]]></description><pubDate>Fri, 14 Nov 2025 00:17:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45922413</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=45922413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45922413</guid></item><item><title><![CDATA[New comment by davidsainez in "How I fell in love with Erlang"]]></title><description><![CDATA[
<p>I highly recommend How to Design Programs. I recall being repeatedly mind blown working through the book. It was great fun. The authors start by composing pure functions. IIRC you get quite far before you have to do any mutation. Take a look! <a href="https://htdp.org/2003-09-26/Book/curriculum-Z-H-5.html" rel="nofollow">https://htdp.org/2003-09-26/Book/curriculum-Z-H-5.html</a></p>
]]></description><pubDate>Tue, 11 Nov 2025 12:20:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45886462</link><dc:creator>davidsainez</dc:creator><comments>https://news.ycombinator.com/item?id=45886462</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45886462</guid></item></channel></rss>