<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tommy_axle</title><link>https://news.ycombinator.com/user?id=tommy_axle</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 08:57:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tommy_axle" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tommy_axle in "OpenAI's response to the Axios developer tool compromise"]]></title><description><![CDATA[
<p>I wouldn't go that far. Right tool for the job, as always. Axios offers a lot over fetch for all but the simplest use cases, plus you get to take advantage of the ecosystem: need offline caching, axios-cache-interceptor already exists. Sure, you can do all of those things with fetch, but you need to bolt more on around it, which takes you right back to just using axios. Also, is no one annoyed that you can't replay a fetch request the way you could with XHR? Same with express: it solves a problem reliably.</p>
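On the replay point: a fetch Request body is a one-shot stream, so the usual workaround is to clone() the Request before each send. A minimal sketch of that, assuming Node 18+ or a browser (the fetchWithReplay name and the injectable send parameter are my own, for illustration):

```javascript
// Sketch: XHR-style replay over fetch. A Request body can only be read
// once, so clone() before each attempt and keep the original pristine.
// `send` is injectable (defaults to fetch) purely to make this testable.
async function fetchWithReplay(request, send = fetch, attempts = 2) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await send(request.clone()); // original stays unread
    } catch (err) {
      lastError = err; // e.g. network failure; retry with a fresh clone
    }
  }
  throw lastError;
}
```

This is roughly what axios's interceptor-based retry plugins do for you under the hood.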
]]></description><pubDate>Thu, 23 Apr 2026 04:02:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47872146</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47872146</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47872146</guid></item><item><title><![CDATA[New comment by tommy_axle in "Codex for almost everything"]]></title><description><![CDATA[
<p>OpenClaw acquisition at work.</p>
]]></description><pubDate>Thu, 16 Apr 2026 17:25:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47796677</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47796677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47796677</guid></item><item><title><![CDATA[New comment by tommy_axle in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>Pick a decent quant (Q4_K_M-Q6_K_M), then use llama-fit-params and try it yourself to see if it gives you what you need.</p>
]]></description><pubDate>Thu, 16 Apr 2026 15:34:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47794830</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47794830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47794830</guid></item><item><title><![CDATA[New comment by tommy_axle in "The Oxford Comma – Why and Why Not (2024)"]]></title><description><![CDATA[
<p>Nah, prepending will lead to a messier diff than the parent example.</p>
]]></description><pubDate>Thu, 26 Mar 2026 20:54:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47535592</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47535592</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47535592</guid></item><item><title><![CDATA[New comment by tommy_axle in "Can I run AI locally?"]]></title><description><![CDATA[
<p>I'm guessing this also calculates based on the full context size the model supports, which can be misleading depending on your use case. Even on a small consumer card with Qwen 3 30B-A3B you probably don't need the full 128K context, so a smaller context and some tensor overrides will help. llama.cpp's llama-fit-params is helpful in those cases.</p>
]]></description><pubDate>Fri, 13 Mar 2026 18:36:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47367961</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47367961</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47367961</guid></item><item><title><![CDATA[New comment by tommy_axle in "What Claude Code chooses"]]></title><description><![CDATA[
<p>More like redux vs zustand. Picking zustand was one of the good standout picks for me.</p>
]]></description><pubDate>Thu, 26 Feb 2026 21:26:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47172197</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47172197</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47172197</guid></item><item><title><![CDATA[New comment by tommy_axle in "Vim 9.2"]]></title><description><![CDATA[
<p>With all the buzz about orchestrating in the age of CLI agents, there doesn't seem to be much talk about vim + tmux with send-keys (a blessing). You can run as many windows and panes as you want, doing different things across multiple projects.</p>
]]></description><pubDate>Sat, 14 Feb 2026 20:46:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47018226</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47018226</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47018226</guid></item><item><title><![CDATA[AI Video of Tom Cruise Fighting Brad Pitt Has Top Writer Warning]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.hollywoodreporter.com/movies/movie-news/ai-video-tom-cruise-brad-pitt-writer-warning-1236504200/">https://www.hollywoodreporter.com/movies/movie-news/ai-video-tom-cruise-brad-pitt-writer-warning-1236504200/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47007083">https://news.ycombinator.com/item?id=47007083</a></p>
<p>Points: 4</p>
<p># Comments: 2</p>
]]></description><pubDate>Fri, 13 Feb 2026 20:02:30 +0000</pubDate><link>https://www.hollywoodreporter.com/movies/movie-news/ai-video-tom-cruise-brad-pitt-writer-warning-1236504200/</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47007083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47007083</guid></item><item><title><![CDATA[New comment by tommy_axle in "I asked Claude Code to remove jQuery. It failed miserably"]]></title><description><![CDATA[
<p>If doing it directly fails (not surprising), wouldn't the next step (maybe the first step) be to have the AI write a codemod that does what's needed, then apply the codemod? Then all you have to get right is the codemod itself, and you can apply it to as many files as you need. That seems much more predictable and context-efficient.</p>
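To make the idea concrete, here's a toy string-level pass. The removeJqueryHide name and the single pattern it handles are my own illustration; a real codemod would operate on the AST (e.g. with jscodeshift) rather than regexes:

```javascript
// Toy codemod sketch: one reviewable transform applied to every file,
// instead of asking the model to edit each file freehand.
// Regex rewriting is only safe for the most mechanical patterns;
// real codemods (jscodeshift etc.) work on the AST.
function removeJqueryHide(source) {
  return source.replace(
    /\$\((['"])([^'"]+)\1\)\.hide\(\)/g,
    "document.querySelector('$2').style.display = 'none'"
  );
}
```

The point is that you review one transform once, instead of reviewing the model's hand edits to hundreds of files.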
]]></description><pubDate>Fri, 13 Feb 2026 15:01:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47003480</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=47003480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47003480</guid></item><item><title><![CDATA[New comment by tommy_axle in "Ship Types, Not Docs"]]></title><description><![CDATA[
<p>so like GraphQL?</p>
]]></description><pubDate>Wed, 11 Feb 2026 03:04:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46970250</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=46970250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46970250</guid></item><item><title><![CDATA[New comment by tommy_axle in "Influencers and OnlyFans models are dominating U.S. O-1 visa requests"]]></title><description><![CDATA[
<p>My guess is business (if they are doing well on the platform)</p>
]]></description><pubDate>Tue, 13 Jan 2026 17:19:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46604183</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=46604183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46604183</guid></item><item><title><![CDATA[New comment by tommy_axle in "Show HN: No more writing shitty regexes to police usernames"]]></title><description><![CDATA[
<p>I see a service like this as being in the same category as IP-lookup APIs (like ipinfo.io), but I wanted to mention that for this (and IP lookup, captchas, etc.) I would expect that if the service is down you allow the registrations and review them later, rather than blocking all registrations.</p>
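The fallback described above is just fail-open with a review queue. A minimal sketch (screenUsername and the checkUsername dependency are hypothetical names, not this service's actual API):

```javascript
// Fail-open sketch: if the username-screening service errors out or
// times out, accept the signup and flag it for manual review later,
// rather than turning away every registration while the service is down.
async function screenUsername(name, checkUsername) {
  try {
    const verdict = await checkUsername(name); // assumed to resolve { allowed: boolean }
    return { allowed: verdict.allowed, needsReview: false };
  } catch {
    return { allowed: true, needsReview: true }; // service down: let it in, review later
  }
}
```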
]]></description><pubDate>Wed, 24 Dec 2025 19:18:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46378343</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=46378343</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46378343</guid></item><item><title><![CDATA[New comment by tommy_axle in "Show HN: No more writing shitty regexes to police usernames"]]></title><description><![CDATA[
<p>Ok so taylorswift is reserved but taylor_swift and realtaylorswift can be used? It seems like impersonation would still be a problem.</p>
]]></description><pubDate>Wed, 24 Dec 2025 17:56:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46377670</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=46377670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46377670</guid></item><item><title><![CDATA[New comment by tommy_axle in "Build vs. Buy: What This Week's Outages Should Teach You"]]></title><description><![CDATA[
<p>An aside: it looks like there is a certificate error for <a href="https://certkit.com/" rel="nofollow">https://certkit.com/</a> as it's for *.mscertkit.com (this was on Chromium + Linux)</p>
]]></description><pubDate>Wed, 19 Nov 2025 17:12:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45982040</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=45982040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45982040</guid></item><item><title><![CDATA[New comment by tommy_axle in "Spec-Driven Development: The Waterfall Strikes Back"]]></title><description><![CDATA[
<p>You want it to be as close to deterministic as possible to reduce the risk of the LLM doing something crazy like deleting a feature or functionality. Sure, the idea is for reviews to catch that, but it's easy to miss when there's a lot of noise. I agree that it's very similar to an offshore team focused on cranking out code rather than caring about what it does.</p>
]]></description><pubDate>Sat, 15 Nov 2025 13:49:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45937457</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=45937457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45937457</guid></item><item><title><![CDATA[New comment by tommy_axle in "Michael Burry a.k.a. "Big Short",discloses $1.1B bet against Nvidia&Palantir"]]></title><description><![CDATA[
<p>Technically writing calls is also taking the downside.</p>
]]></description><pubDate>Tue, 04 Nov 2025 18:59:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45814640</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=45814640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45814640</guid></item><item><title><![CDATA[New comment by tommy_axle in "Ask HN: Who uses open LLMs and coding assistants locally? Share setup and laptop"]]></title><description><![CDATA[
<p>Not the OP, but yes, you can definitely go with a bigger quant like Q6 if it makes a difference, or with a bigger-parameter model like gpt-oss 120B. A 70B would probably be great for a 128GB machine, though I don't think Qwen has one. You can search Hugging Face for the model you're interested in, often with "gguf", to get one ready to go (e.g. <a href="https://huggingface.co/ggml-org/gpt-oss-120b-GGUF/tree/main" rel="nofollow">https://huggingface.co/ggml-org/gpt-oss-120b-GGUF/tree/main</a>). Otherwise it's not a big deal to quantize it yourself using llama.cpp.</p>
]]></description><pubDate>Fri, 31 Oct 2025 18:24:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45775085</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=45775085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45775085</guid></item><item><title><![CDATA[New comment by tommy_axle in "Evaluating the impact of AI on the labor market: Current state of affairs"]]></title><description><![CDATA[
<p>There was also more going on in that time frame: several interest rate hikes, and no fix for the Section 174 changes by the end of 2022. Maybe a detailed study will pinpoint which had the largest impact.</p>
]]></description><pubDate>Wed, 01 Oct 2025 21:17:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45443675</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=45443675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45443675</guid></item><item><title><![CDATA[New comment by tommy_axle in "Ask HN: Have any successful startups been made by 'vibe coding'?"]]></title><description><![CDATA[
<p>Not sure; most were probably created just for this hackathon, so I'd expect few, if any. It's still a good way to see how far one can take vibe coding. It's easy to crank out smaller apps these days, so marketing and distribution will matter more going forward.</p>
]]></description><pubDate>Tue, 19 Aug 2025 17:36:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44954102</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=44954102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44954102</guid></item><item><title><![CDATA[New comment by tommy_axle in "Ask HN: Have any successful startups been made by 'vibe coding'?"]]></title><description><![CDATA[
<p>One way to gauge is by taking a look at some of the projects submitted for Bolt's contest at <a href="https://worldslargesthackathon.devpost.com/project-gallery" rel="nofollow">https://worldslargesthackathon.devpost.com/project-gallery</a></p>
]]></description><pubDate>Tue, 19 Aug 2025 16:17:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44953274</link><dc:creator>tommy_axle</dc:creator><comments>https://news.ycombinator.com/item?id=44953274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44953274</guid></item></channel></rss>