<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: drodgers</title><link>https://news.ycombinator.com/user?id=drodgers</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 19:15:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=drodgers" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by drodgers in "The Dilbert Afterlife"]]></title><description><![CDATA[
<p>Don’t read long-form content on mobile then? IDK what else to say.</p>
]]></description><pubDate>Sat, 17 Jan 2026 19:46:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46661374</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=46661374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46661374</guid></item><item><title><![CDATA[New comment by drodgers in "Nvidia DGX Spark and Apple Mac Studio = 4x Faster LLM Inference with EXO 1.0"]]></title><description><![CDATA[
<p>This is really cool!<p>Now I'm trying to stop myself from finding an excuse to spend upwards of $30k on compute hardware...</p>
]]></description><pubDate>Fri, 17 Oct 2025 01:15:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45612441</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=45612441</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45612441</guid></item><item><title><![CDATA[New comment by drodgers in "Game launcher installs Root CA certificate on your machine (2024)"]]></title><description><![CDATA[
<p>macOS has been moving more and more in this direction, and it’s good.</p>
]]></description><pubDate>Sun, 07 Sep 2025 02:13:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45154762</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=45154762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45154762</guid></item><item><title><![CDATA[New comment by drodgers in "Ask HN: Who is hiring? (September 2025)"]]></title><description><![CDATA[
<p>Stile Education | Melbourne, Australia | Hybrid/Onsite | Full-Time <a href="https://stileeducation.com/au/who-we-are/engineering-at-stile" rel="nofollow">https://stileeducation.com/au/who-we-are/engineering-at-stil...</a><p>We're a high-performing, diverse, tight-knit team with a mission to radically improve mainstream science and maths education at schools. By creating the best lessons in the world, coupled with intuitive tools that allow teachers to take advantage of the latest pedagogies, we’ve already helped millions of students in Australia get excited about science and maths.<p>45% of Australian science students in years 7-10 use Stile. Help us scale from 500k to more than 5 million students across Australia and the US over the next two years!<p>We now have offices in Melbourne, Boston, Portland and more! We're primarily hiring in Melbourne (relocation assistance and visa sponsorship available), but there will be lots of travel opportunities.<p>We're hiring for a bunch of new roles right now (not all on our jobs site yet; please reach out if you're interested even if you don't seem to fit a specific role!):<p><pre><code>  - Engineering manager
  - Full Stack Staff Engineer
  - Platform engineer
  - ML/AI engineer
  - Automation engineer
  - Principal+ engineer
  - Frontend engineer</code></pre></p>
]]></description><pubDate>Tue, 02 Sep 2025 09:52:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45100923</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=45100923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45100923</guid></item><item><title><![CDATA[New comment by drodgers in "Claude Sonnet will ship in Xcode"]]></title><description><![CDATA[
<p>> What am I doing wrong<p>Trying two things and giving up. It's like opening a REPL for a new language, typing some common commands you're familiar with, getting some syntax errors, then giving up.<p>You need to learn how to use your tools to get the best out of them!<p>Start by thinking about what you'd need to tell a new junior human dev you'd never met before about the task if you could only send a single email to spec it out. There are shortcuts, but that's a good starting place.<p>In this case, I'd specifically suggest:<p>1. Write a CLAUDE.md listing the toolchains you want to work with, giving context for your projects, and listing the specific build, test etc. commands you work with on your system (including any helpful scripts/aliases you use). Start simple; you can have claude add to it as you find new things that you need to tell it or that it spends time working out (so that you don't need to do that every time).<p>2. In your initial command, include a pointer to an example project using similar tech in a directory that claude can read<p>3. Ask it to come up with a plan and ask for your approval before starting</p>
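A minimal sketch of what step 1 could look like (the project layout, commands, and conventions below are purely illustrative, not from any real setup):

```markdown
# CLAUDE.md (example)

## Project context
- Rust workspace: web API in `api/`, CLI tools in `tools/`

## Commands
- Build: `cargo build --workspace`
- Test: `cargo test --workspace`
- Lint: `cargo clippy --workspace -- -D warnings`

## Conventions
- Run the lint and test commands before declaring a task done
- Ask before adding new dependencies
```

Keeping it this short to start with makes it cheap to let the agent append its own discoveries over time.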
]]></description><pubDate>Fri, 29 Aug 2025 05:41:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45060624</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=45060624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45060624</guid></item><item><title><![CDATA[New comment by drodgers in "The Emperor's New LLM"]]></title><description><![CDATA[
<p>Eh. This is true for humans too and doesn’t make humans useless at evaluating business plans or other things.<p>You just want the signal from the object-level question to drown out irrelevant bias (which plan was proposed first, which of the plan proposers are more attractive, which plan seems cooler etc.)</p>
]]></description><pubDate>Sat, 14 Jun 2025 04:45:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44274244</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44274244</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44274244</guid></item><item><title><![CDATA[New comment by drodgers in "Launch HN: Vassar Robotics (YC X25) – $219 robot arm that learns new skills"]]></title><description><![CDATA[
<p>Love it! I've been looking for an excuse to dive into AI planning for robotics, and this looks like it will make it easy to get started.<p>Just one question: does the power supply have a 220/240v option (I'm in Australia)?</p>
]]></description><pubDate>Tue, 10 Jun 2025 23:06:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44242501</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44242501</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44242501</guid></item><item><title><![CDATA[New comment by drodgers in "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]]></title><description><![CDATA[
<p>Also corporations, governments etc. - they're capable of things that none of the individuals could do alone.</p>
]]></description><pubDate>Sat, 07 Jun 2025 10:45:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44208779</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44208779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44208779</guid></item><item><title><![CDATA[New comment by drodgers in "I read all of Cloudflare's Claude-generated commits"]]></title><description><![CDATA[
<p>> Prompts as Source Code<p>Another way to phrase this is LLM-as-compiler and Python (or whatever) as an intermediate compiler artefact.<p>Finally, a true 6th generation programming language!<p>I've considered building a toy of this with really aggressive modularisation of the output code (eg. python) and a query-based caching system so that each module of code output only changes when the relevant part of the prompt or upstream modules change (the generated code would be committed to source control like a lockfile).<p>I think that (+ some sort of WASM encapsulated execution environment) would be one of the best ways to write one-off things like scripts which <i>don't</i> need to incrementally get better and more robust over time in the way that ordinary code does.</p>
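The query-based caching idea can be sketched in a few lines of Python. Everything here is hypothetical: `fake_llm_compile` is a stub standing in for a real model call, and `PromptCompiler` just shows the invalidation logic, where each module's cache key hashes its own prompt section plus its upstream modules' keys, so a change anywhere upstream regenerates exactly the dependents.

```python
import hashlib


def fake_llm_compile(prompt: str) -> str:
    # Stub for the LLM "compiler" step; a real system would call a model here.
    return f"# code generated for: {prompt}"


class PromptCompiler:
    """Query-based cache: a module is regenerated only when its own prompt
    section or any upstream module's key changes."""

    def __init__(self):
        self.cache = {}         # cache key -> generated code ("lockfile" contents)
        self.compile_count = 0  # how many model calls we actually made

    def key(self, name, prompts, deps):
        # Hash this module's prompt together with its dependencies' keys,
        # so upstream changes invalidate exactly the dependent modules.
        h = hashlib.sha256(prompts[name].encode())
        for dep in sorted(deps.get(name, [])):
            h.update(self.key(dep, prompts, deps).encode())
        return h.hexdigest()

    def build(self, name, prompts, deps):
        k = self.key(name, prompts, deps)
        if k not in self.cache:
            self.compile_count += 1
            self.cache[k] = fake_llm_compile(prompts[name])
        return self.cache[k]
```

Building twice with unchanged prompts costs no extra model calls; editing one module's prompt regenerates only that module and the modules that depend on it.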
]]></description><pubDate>Sat, 07 Jun 2025 02:52:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44207036</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44207036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44207036</guid></item><item><title><![CDATA[New comment by drodgers in "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]]></title><description><![CDATA[
<p>I think that commenter was disagreeing with this line:<p>> because omniscient-yet-dim-witted models terminate at "superhumanly assistive"<p>It might be that with dim wits + enough brute force (knowledge, parallelism, trial-and-error, specialisation, speed) models could still substitute for humans and transform the economy in short order.</p>
]]></description><pubDate>Sat, 07 Jun 2025 02:33:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44206959</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44206959</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44206959</guid></item><item><title><![CDATA[New comment by drodgers in "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]]></title><description><![CDATA[
<p>> I think AI maximalists will continue to think that the models are in fact getting less dim-witted<p>I'm bullish (and scared) about AI progress precisely because I think they've only gotten a little less dim-witted in the last few years, but their practical capabilities have improved a <i>lot</i> thanks to better knowledge, taste, context, tooling etc.<p>What scares me is that I think there's a reasoning/agency capabilities overhang. ie. we're only one or two breakthroughs away from something which is both kinda omniscient (where we are today), and able to out-think you very quickly (if only by dint of applying parallelism to actually competent outcome-modelling and strategic decision making).<p>That combination is terrifying. I don't think enough people have really imagined what it would mean for an AI to be able to out-strategise humans in the same way that they can now — say — out-poetry humans (by being both decent in terms of quality and <i>super</i> fast). It's like when you're speaking to someone way smarter than you and you realise that they're 6 steps ahead, and actively shaping your thought process to guide you where they want you to end up. At scale. For everything.<p>This exact thing (better reasoning + agency) is also the top priority for all of the frontier researchers right now (because it's super useful), so I think a breakthrough might not be far away.<p>Another way to phrase it: I think today's LLMs are about as good at snap judgements in most areas as the best humans (probably much better at everything that rhymes with inferring vibes from text), but they kinda suck at:<p>1. Reasoning/strategising step-by-step for very long periods<p>2. 
Snap judgements about reasoning or taking strategic actions (in the way that expert strategic humans don't actually need to think through their actions step-by-step very often - they've built intuition which gets them straight to the best answer 90% of the time)<p>Getting good at the long range thinking might require more substantial architectural changes (eg. some sort of separate 'system 2' reasoning architecture to complement the already pretty great 'system 1' transformer models we have). OTOH, it might just require better training data and algorithms so that the models develop good enough strategic taste and agentic intuitions to get to a near-optimal solution quickly before they fall off a long-range reasoning performance cliff.<p>Of course, maybe the problem is really hard and there's no easy breakthrough (or it requires 100,000x more computing power than we have access to right now). There's no certainty to be found, but a scary breakthrough definitely seems possible to me.</p>
]]></description><pubDate>Sat, 07 Jun 2025 02:22:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44206901</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44206901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44206901</guid></item><item><title><![CDATA[New comment by drodgers in "My AI skeptic friends are all nuts"]]></title><description><![CDATA[
<p>> "LLMs can’t write Rust"<p>This really doesn't accord with my own experience. Using claude-code (esp. with opus 4) and codex (with o3) I've written lots of good Rust code. I've actually found Rust helps the AI-pair-programming experience because the agent gets such good, detailed feedback from the compiler that it can iterate very quickly and effectively.<p>Can it set up great architecture for a large, complex project from scratch? No, not yet. It can't do that in Ruby or Typescript either (though it might trick you by quickly getting something that kinda works in those languages). It think that will be a higher bar because of how Rust front-loads a lot of hard work, but I expect continuing improvement.</p>
]]></description><pubDate>Tue, 03 Jun 2025 03:03:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44165870</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44165870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44165870</guid></item><item><title><![CDATA[New comment by drodgers in "Codex CLI is going native"]]></title><description><![CDATA[
<p>Making changes to huge Rust projects is quite easy. For a substantial alteration, you make your change, the compiler tells you the 100 problems it caused, and you fix them all (~50% auto fix, 30% Claude/Codex, 20% manual), then the program probably does the thing.<p>Architecting the original 100kloc program well requires skill, but that effort is heavily front-loaded.</p>
]]></description><pubDate>Mon, 02 Jun 2025 09:00:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44156940</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44156940</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44156940</guid></item><item><title><![CDATA[New comment by drodgers in "The ‘white-collar bloodbath’ is all part of the AI hype machine"]]></title><description><![CDATA[
<p>No doubt from me that it’s a sigmoid, but how high is the plateau? That’s also hard to know from early in the process, but it would be surprising if there’s not a fair bit of progress left to go.<p>Human brains seem like an existence proof for what’s possible, but it would be surprising if humans also represent the farthest physical limits of what’s technologically possible without the constraints of biology (hip size, energy budget etc).</p>
]]></description><pubDate>Sat, 31 May 2025 05:41:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44142092</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44142092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44142092</guid></item><item><title><![CDATA[New comment by drodgers in "Human coders are still better than LLMs"]]></title><description><![CDATA[
<p>I think that's generally fair, but this point goes too far:<p>> improve benchmarks one by one<p>If you're right about that in the strong sense — that each task needs to be optimised in total isolation — then it would be a longer, slower road to a really powerful humanlike system.<p>What I think is really happening, though, is that each specific task (eg. coding) is having large spillover effects on other areas (eg. helping them to be better at extended verbal reasoning even when not writing any code). The AI labs can't do everything at once, so they're focusing where:<p>- It's easy to generate more data and measure results (coding, maths etc.)
 - There's a relative lack of good data in the existing training corpus (eg. good agentic reasoning logic - the kinds of internal monologues that humans rarely write down)
 - Areas where it would be immediately useful for the models to get better in a targeted way (eg. agentic tool-use; developing great hypothesis generation instincts in scientific fields like algorithm design, drug discovery and ML research)<p>By the time those tasks are optimised, I suspect the spillover effects will be substantial and the models will generally be much more capable.<p>Beyond that, the labs are all pretty open about the fact that they want to use the resulting AI coding, reasoning and research skills to accelerate their own research. If that works (definitely not obvious yet) then finding ways to train a much broader array of skills could be much faster because that process itself would be increasingly automated.</p>
]]></description><pubDate>Fri, 30 May 2025 10:08:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44134612</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44134612</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44134612</guid></item><item><title><![CDATA[New comment by drodgers in "Human coders are still better than LLMs"]]></title><description><![CDATA[
<p>I don't know how someone could be following the technical progress in detail and hold this view. The progress is astonishing, and the benchmarks are becoming saturated so fast that it's hard to keep track.<p>Are there plenty of gaps left between here and most definitions of AGI? Absolutely. Nevertheless, how can you be sure that those gaps will remain given how many faculties these models have already been able to excel at (translation, maths, writing, code, chess, algorithm design etc.)?<p>It seems to me like we're down to a relatively sparse list of tasks and skills where the models aren't getting enough training data, or are missing tools and sub-components required to excel. Beyond that, it's just a matter of iterative improvement until 80th percentile coder becomes 99th percentile coder becomes superhuman coder, and ditto for maths, persuasion and everything else.<p>Maybe we hit some hard roadblocks, but room for those challenges to be hiding seems to be dwindling day by day.</p>
]]></description><pubDate>Fri, 30 May 2025 02:48:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44132356</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44132356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44132356</guid></item><item><title><![CDATA[New comment by drodgers in "Human coders are still better than LLMs"]]></title><description><![CDATA[
<p>Yes. If you judge only from the hype, then you can't distinguish LLMs from crypto, or Nuclear Weapons from Nuclear Automobiles.<p>If you always say that every new fad is just hype, then you'll even be right 99.9% of the time. But if you want to be more valuable than a rock (<a href="https://www.astralcodexten.com/p/heuristics-that-almost-always-work" rel="nofollow">https://www.astralcodexten.com/p/heuristics-that-almost-alwa...</a>), then you need to dig into the object-level facts and form an opinion.<p>In my opinion, AI has a much higher likelihood of changing everything <i>very</i> quickly than crypto or similar technologies ever did.</p>
]]></description><pubDate>Fri, 30 May 2025 02:35:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44132275</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=44132275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44132275</guid></item><item><title><![CDATA[New comment by drodgers in "A flat pricing subscription for Claude Code"]]></title><description><![CDATA[
<p>Yes, this product mostly only targets the top 20% of US earners. That's a lot of people, and a lot of HN readers especially.</p>
]]></description><pubDate>Fri, 09 May 2025 03:13:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43933495</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=43933495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43933495</guid></item><item><title><![CDATA[New comment by drodgers in "Samsung is paying $350M for audio brands B&W, Denon, Marantz and Polk"]]></title><description><![CDATA[
<p>Unfortunately the market is quite small and shrinking. I wish more people wanted great sound rather than phone/tv speakers (or soundbars).</p>
]]></description><pubDate>Thu, 08 May 2025 03:37:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43922857</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=43922857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43922857</guid></item><item><title><![CDATA[New comment by drodgers in "Samsung is paying $350M for audio brands B&W, Denon, Marantz and Polk"]]></title><description><![CDATA[
<p>They don't do any of the Dolby decoding and multi-channel mixing. Their closest product is the miniDSP Flex HT which is really about applying EQ to a bunch of channels (only 8, aka 7.1 or 5.2.1) after they've been decoded by an upstream receiver. It's pretty niche.</p>
]]></description><pubDate>Thu, 08 May 2025 03:33:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43922842</link><dc:creator>drodgers</dc:creator><comments>https://news.ycombinator.com/item?id=43922842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43922842</guid></item></channel></rss>