<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: KallDrexx</title><link>https://news.ycombinator.com/user?id=KallDrexx</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 22:26:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=KallDrexx" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by KallDrexx in "How the AI Bubble Bursts"]]></title><description><![CDATA[
<p>The problem I have with this analysis is that it's missing the multi-dimensional aspect of "is this profitable".<p>It's fair to say that if all these operators are competing for tokens, the OpenRouter model operators (not sure of the exact phrase, but the people running the models) are accounting for some level of margin.<p>However, how many of these are running their own data centers and GPUs?<p>If they are running their own infrastructure, then it's not a simple question of whether each specific token set is profitable, since it needs to account for the cost of running the data center. It could be that they <i>believe</i> it is profitable in the long term by utilizing the long tail of asset depreciation, but that isn't guaranteed.<p>If they aren't running their own infrastructure, then it's much easier to claim that it's profitable and has a margin (outside of running their servers to manage the rented infrastructure).<p>However, a lot of data centers offer some crazily low GPU prices because they may be vying for user base and revenue over profitability. In those cases, if data center buildout starts slowing, it's very likely GPU prices go up and inference stops being profitable for the OpenRouter operators.<p>So long term, it's not clear how profitable even these open models are.<p>OpenAI and Anthropic definitely fall into the latter category too. Their infrastructure requirements are much higher than the open models', and they are being given huge discounts so Microsoft/Amazon/Google can all claim revenue (since they have profitability coming from other parts). It's not clear whether OpenAI and Anthropic models would be profitable at inference if they were paying rates that cloud hosts could profit from.<p>There are just way too many dimensions to this scenario to flatly state that OpenRouter proves inference is profitable at scale.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:20:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576272</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=47576272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576272</guid></item><item><title><![CDATA[New comment by KallDrexx in "Vibe-Coded Ext4 for OpenBSD"]]></title><description><![CDATA[
<p>The copyright office is pretty clear on this if you read: <a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf" rel="nofollow">https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...</a>.<p>There is case law establishing that commissioning a work from another entity doesn't give you co-authorship; the entity doing the work and making the creative decisions is the entity that gets copyright.<p>In order to have co-authorship of the commissioned work, you have to be involved and giving pretty much instruction-level detail to the real author. The opinion cites many cases showing that this is not how LLM prompting works.<p>The monkey selfie case is also relevant because it solidifies that non-persons cannot claim copyright. That means the LLM cannot claim copyright, and therefore there is no copyright it can pass on to the LLM operator.</p>
]]></description><pubDate>Fri, 27 Mar 2026 20:16:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47547672</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=47547672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47547672</guid></item><item><title><![CDATA[New comment by KallDrexx in "Building an FPGA 3dfx Voodoo with Modern RTL Tools"]]></title><description><![CDATA[
<p>Which actual FPGA is this running on? I've been extremely curious about this space and would love to know what it took to actually get this running.</p>
]]></description><pubDate>Mon, 23 Mar 2026 14:16:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47489884</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=47489884</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47489884</guid></item><item><title><![CDATA[New comment by KallDrexx in "Investigating the Downstream Effect of AI Assistants on Software Maintainability"]]></title><description><![CDATA[
<p>This study came up in my feed but I'm not seeing much chatter about it. I'm interested in perspectives from people who are better at judging whether studies are high quality.<p>> Abstract
> 
> AI assistants, like GitHub Copilot and Cursor, are transforming software engineering. While several studies highlight productivity improvements, their impact on maintainability requires further investigation. [Objective] This study investigates whether co-development with AI assistants affects software maintainability, specifically how easily other developers can evolve the resulting source code. [Method] We conducted a two-phase controlled experiment involving 151 participants, 95% of whom were professional developers. In Phase 1, participants added a new feature to a Java web application, with or without AI assistance. In Phase 2, a randomized controlled trial, new participants evolved these solutions without AI assistance. [Results] Phase 2 revealed no significant differences in subsequent evolution with respect to completion time or code quality. Bayesian analysis suggests that any speed or quality improvements from AI use were at most small and highly uncertain. Observational results from Phase 1 corroborate prior research: using an AI assistant yielded a 30.7% median reduction in completion time, and habitual AI users showed an estimated 55.9% speedup. [Conclusions] Overall, we did not detect systematic maintainability advantages or disadvantages when other developers evolved code co-developed with AI assistants. Within the scope of our tasks and measures, we observed no consistent warning signs of degraded code-level maintainability. Future work should examine risks such as code bloat from excessive code generation and cognitive debt as developers offload more mental effort to assistants.</p>
]]></description><pubDate>Wed, 18 Feb 2026 15:02:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47061685</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=47061685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47061685</guid></item><item><title><![CDATA[Investigating the Downstream Effect of AI Assistants on Software Maintainability]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2507.00788">https://arxiv.org/abs/2507.00788</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47061684">https://news.ycombinator.com/item?id=47061684</a></p>
<p>Points: 2</p>
<p># Comments: 2</p>
]]></description><pubDate>Wed, 18 Feb 2026 15:02:13 +0000</pubDate><link>https://arxiv.org/abs/2507.00788</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=47061684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47061684</guid></item><item><title><![CDATA[Commodore 64 JIT compilation into MSIL]]></title><description><![CDATA[
<p>Article URL: <a href="https://old.reddit.com/r/dotnet/comments/1qsl99h/commodore_64_jit_compilation_into_msil/">https://old.reddit.com/r/dotnet/comments/1qsl99h/commodore_64_jit_compilation_into_msil/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46846666">https://news.ycombinator.com/item?id=46846666</a></p>
<p>Points: 7</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 01 Feb 2026 15:05:01 +0000</pubDate><link>https://old.reddit.com/r/dotnet/comments/1qsl99h/commodore_64_jit_compilation_into_msil/</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=46846666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46846666</guid></item><item><title><![CDATA[New comment by KallDrexx in "Creators of Tailwind laid off 75% of their engineering team"]]></title><description><![CDATA[
<p>Yeah I don't disagree that selling components is going to be hard business in the age of AI. Just mostly pointing out that it was a good business previously.</p>
]]></description><pubDate>Wed, 07 Jan 2026 23:29:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46534755</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=46534755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46534755</guid></item><item><title><![CDATA[New comment by KallDrexx in "Creators of Tailwind laid off 75% of their engineering team"]]></title><description><![CDATA[
<p>Telerik, DevExpress, and a lot of other companies have made profitable businesses that have lasted well over a decade on that business premise.  Selling solid and easy to integrate pre-made components has been a pretty good business for a while.</p>
]]></description><pubDate>Wed, 07 Jan 2026 17:43:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46529600</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=46529600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46529600</guid></item><item><title><![CDATA[New comment by KallDrexx in "Has the cost of building software dropped 90%?"]]></title><description><![CDATA[
<p>FWIW I only mentioned staff engineers because the survey found staff+ engineers reported the highest time savings. The survey itself reported average weekly time savings (in hours) for junior (3.9), mid-level (4.3), senior (4.1), and staff (4.4) engineers.</p>
]]></description><pubDate>Wed, 10 Dec 2025 14:42:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46218235</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=46218235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46218235</guid></item><item><title><![CDATA[New comment by KallDrexx in "Has the cost of building software dropped 90%?"]]></title><description><![CDATA[
<p>The latest survey is here apparently: <a href="https://getdx.com/report/ai-assisted-engineering-q4-impact-report/" rel="nofollow">https://getdx.com/report/ai-assisted-engineering-q4-impact-r...</a>.<p>No idea if signing up gives the full survey for free. We get it through our company paying for DX's services</p>
]]></description><pubDate>Tue, 09 Dec 2025 17:11:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46207512</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=46207512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46207512</guid></item><item><title><![CDATA[New comment by KallDrexx in "Has the cost of building software dropped 90%?"]]></title><description><![CDATA[
<p>Unfortunately I do not.<p>Past survey results are hidden in some presentations I've seen, and I only have full access to the latest survey because my company pays for it. So I'm not sure it's legal for me to reproduce it.</p>
]]></description><pubDate>Tue, 09 Dec 2025 16:45:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46207132</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=46207132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46207132</guid></item><item><title><![CDATA[New comment by KallDrexx in "Has the cost of building software dropped 90%?"]]></title><description><![CDATA[
<p><i>EVERY</i> DX survey that comes out (surveying over 20k developers) says the same thing.<p>Staff engineers get the most time savings out of AI tools, and their weekly time savings is 4.4 hours for heavy AI users. That's a little more than 10% productivity, nowhere close to 10x.<p>What's more telling about the survey results is that they are also consistent between heavy and light users of AI. Staff engineers who are heavy AI users save 4.4 hours a week, while staff engineers who are light users save 3.3 hours a week. To put it another way, the DX surveys are pretty clear that the gap in time savings between heavy and light AI users is minimal.<p>Yes, surveys are all flawed in different ways, but an N of 20k is nothing to sneeze at. Every study with actual data points shows that code generation is not a significant source of time savings. All the productivity gains DX reports come from debugging and investigation/code-base spelunking help.</p>
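<p>The percentages above can be sanity-checked with some quick arithmetic. This is just my own back-of-the-envelope math using the survey numbers quoted above; the 40-hour work week baseline is my assumption, not something the survey states:</p>

```python
# Back-of-the-envelope check on the DX survey figures quoted above.
# The 40-hour work week baseline is an assumption for illustration.
WEEK_HOURS = 40

heavy_user_savings = 4.4  # hours/week saved, staff engineers, heavy AI users
light_user_savings = 3.3  # hours/week saved, staff engineers, light AI users

heavy_pct = heavy_user_savings / WEEK_HOURS * 100  # productivity gain, heavy users
light_pct = light_user_savings / WEEK_HOURS * 100  # productivity gain, light users
gap_pct = heavy_pct - light_pct                    # gap between the two groups

print(f"heavy: {heavy_pct:.1f}%, light: {light_pct:.2f}%, gap: {gap_pct:.2f} points")
# -> heavy: 11.0%, light: 8.25%, gap: 2.75 points
```

<p>So even the heaviest users land around an 11% gain, and the heavy/light gap is under 3 percentage points: consistent with "a little more than 10%, nowhere near 10x".</p>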
]]></description><pubDate>Tue, 09 Dec 2025 15:14:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46205781</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=46205781</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46205781</guid></item><item><title><![CDATA[New comment by KallDrexx in "Zig's New Async I/O"]]></title><description><![CDATA[
<p>This is my main frustration with the pushback against visibility modifiers. It's treated as an all-or-nothing approach, as if any support for visibility modifiers locks everyone out from touching those fields.<p>It could just be a compiler error/warning that you have to explicitly opt past in order to touch those fields. That lets you say "I know it's normally a footgun to modify these fields, and I might be violating an invariant, but I know what I'm doing".</p>
]]></description><pubDate>Thu, 30 Oct 2025 19:07:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45763990</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=45763990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45763990</guid></item><item><title><![CDATA[New comment by KallDrexx in "How much Anthropic and Cursor spend on Amazon Web Services"]]></title><description><![CDATA[
<p>Fwiw that's not necessarily true, because if Sonnet ends up using reasoning, then you are using more tokens than GPT-4 would have used for the same task.  Same with GPT-5 since it will decide (using an LLM) if it should use the thinking model for it (and you don't have as much control over it).</p>
]]></description><pubDate>Mon, 20 Oct 2025 16:15:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45645594</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=45645594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45645594</guid></item><item><title><![CDATA[Show HN: JIT compilation of NES ROMs / 6502 programs to .NET MSIL]]></title><description><![CDATA[
<p>Video of it in action here: <a href="https://youtu.be/3v0oKuXkYlA" rel="nofollow">https://youtu.be/3v0oKuXkYlA</a><p>This all started with a "simple" premise: can you use the .NET runtime as a just-in-time compiler for any language? Two months later, I have a fully working code base that can compile most 6502 functions into MSIL and execute them on demand.<p>It achieves this by instantiating a memory bus with whatever memory-mapped I/O devices you need and the memory regions they map to. For the NES, this includes the CPU RAM (and its mirrored regions), the PPU device, the cartridge space, etc...<p>Then the JIT compiler is told to run the function at a specific address. The JIT compiler then:<p>1. Traces out the boundaries of the function, using the passed-in address as the entry point.<p>2. After all instructions and their ordering are determined, the instructions are disassembled.<p>3. The disassembled 6502 instructions are converted to one or more intermediary representation (IR) instructions.<p>4. A JitCustomization process runs that allows different emulators/hardware setups to modify how the IR instructions are set up. This also allows for analysis and optimization passes.<p>5. The final set of IR instructions is passed one by one into an MSIL generation class, and the MSIL is written to the `ILGenerator`.<p>6. This IL is then added into an assembly builder and compiled on the fly, providing a static .NET method containing that function's code.<p>7. The JIT compiler then turns that function into an Executable method delegate and executes it.<p>8. The function runs until a cancellation token signals cancellation, or until it hits a return statement with the address of a new function to call. The JIT compiler then repeats this process with the function at that address.<p>This allows the above video, where NES games are running inside the .NET runtime via MSIL. Since it is just-in-time compilation, in theory even arbitrary-code-execution exploits would work. The main bugs visible in SMB are due to my inaccurate PPU emulation, not the 6502 code itself.<p># Why An Intermediary Representation?<p>Creating MSIL at runtime is pretty error-prone and hard to debug. If you make one simple mistake (such as passing a byte into an `ldc_i4` emit call) you get a generic "This is not a valid program" exception with no debugging information. So limiting how much MSIL I had to generate ended up pretty beneficial.<p>One significant benefit is simplicity. The 6502 has 56 official instructions, each with significant side effects. Creating MSIL for each of these, with all the different memory addressing modes they could contain, would spiral out of control.<p>However, it turns out you can create any 6502 instruction by composing about 12 smaller instructions. This made it much simpler to write the MSIL for each IR instruction, and made it much easier to get test coverage ensuring they actually compile and work.<p># Assembly Creation<p>There are (currently disabled) code paths that can create real DLL files for each generated function. In theory this means that if you run an application for long enough, you could collect all the DLLs and piece them together into a precompiled MSIL build.<p># NES Limitations<p>The NES emulator side isn't complete. It can run games as long as they use up to 32K of ROM with 16K of character data. This is just because I didn't bother adding support for all the different bank/memory switchers that cartridges implement.<p># What's Next?<p>Not sure. I'm tempted to add some other 6502 emulations. The Atari 2600 would work but may not be interesting. Using this to fully JIT the Commodore 64 is interesting, though I'm not totally sure how much of a rabbit hole emulating the video and other I/O devices would be.</p>
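<p>To make the IR decomposition in step 3 concrete, here is a rough Python sketch of the idea of lowering each 6502 opcode into a short list of micro-ops. This is purely illustrative: the micro-op names and the instruction table are invented for this sketch, not the project's real IR:</p>

```python
# Hypothetical sketch of lowering 6502 instructions into a small set of
# reusable micro-IR operations, in the spirit of step 3 above. The micro-op
# names and this table are illustrative, not the project's actual IR.

# Each 6502 opcode/addressing-mode pair maps to a few smaller IR operations.
IR_TABLE = {
    # LDA absolute: read memory, store to accumulator, update N/Z flags
    "LDA_abs": ["read_mem", "write_reg_a", "set_nz_flags"],
    # ADC immediate: read operand, add with carry, write A, update flags
    "ADC_imm": ["read_operand", "add_with_carry", "write_reg_a", "set_nzcv_flags"],
    # JMP absolute: just a control-flow transfer
    "JMP_abs": ["jump"],
}

def lower(instructions):
    """Lower a disassembled 6502 function to a flat list of IR micro-ops."""
    ir = []
    for inst in instructions:
        ir.extend(IR_TABLE[inst])
    return ir

print(lower(["LDA_abs", "ADC_imm", "JMP_abs"]))
```

<p>The payoff of this shape is that you only write (and test) MSIL emission for the dozen-or-so micro-ops, instead of for every opcode times every addressing mode.</p>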
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45645096">https://news.ycombinator.com/item?id=45645096</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 20 Oct 2025 15:32:34 +0000</pubDate><link>https://github.com/KallDrexx/Dotnet6502</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=45645096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45645096</guid></item><item><title><![CDATA[New comment by KallDrexx in "Zed's Pricing Has Changed: LLM Usage Is Now Token-Based"]]></title><description><![CDATA[
<p>I don't know about people using CC on a regular basis, but according to `ccusage`, I can trivially go over $20 of API credits in a few days of hobby use. I'd presume if you are paying for a $200 plan then you know you have heavy usage and can easily exceed that.</p>
]]></description><pubDate>Wed, 24 Sep 2025 19:09:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45364702</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=45364702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45364702</guid></item><item><title><![CDATA[New comment by KallDrexx in "DeepMind and OpenAI win gold at ICPC"]]></title><description><![CDATA[
<p>It's important to look closely at the details of how these models actually do these things.<p>If you look at the details of how Google got gold at the IMO, you'll see that AlphaGeometry only relies on LLMs for a very specific part of the whole system, and the LLM wasn't the core problem-solving component.<p>Most of AlphaGeometry is standard algorithms solving geometry problems using known constraints. When the algorithmic system gets stuck, it reaches out to LLMs that were fine-tuned specifically for creating new geometric constraints. The LLM creates new constraints and passes them back to the algorithmic parts to get them unstuck, and the process repeats.<p>Without more details, it's not clear whether this win used the GPT-5 and Gemini models we use, or specially fine-tuned models integrated with other non-LLM, non-ML systems.<p>Not being solved purely by an LLM isn't a knock on it, but in the current conversations around LLMs, these results are heavily marketed as "LLMs did this all by themselves", which doesn't match a lot of the evidence I've personally seen.</p>
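<p>The solver/LLM interplay described above can be caricatured as a propose-and-verify loop. The following toy Python sketch is purely illustrative (every function and rule here is invented; this is not DeepMind's code or algorithm), but it shows the division of labor: a symbolic engine does all the deduction, and the LLM is only consulted to propose new auxiliary constructions when the engine is stuck:</p>

```python
# Illustrative toy version of the hybrid loop described above: a symbolic
# solver does the deduction, and a stand-in "LLM" only proposes new
# constructions when the solver is stuck. All names here are invented.

def symbolic_solve(facts, goal):
    """Forward-chain over fixed deduction rules until nothing new is derived."""
    rules = {("a", "b"): "c", ("c", "d"): "goal"}  # toy deduction rules
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules.items():
            if all(p in facts for p in premises) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return goal in facts

def llm_propose(facts):
    """Stand-in for the fine-tuned LLM: propose one new auxiliary construct."""
    return "d"  # e.g. a new point or line the symbolic engine cannot invent

def solve(facts, goal, max_rounds=5):
    """Alternate symbolic deduction with LLM proposals until the goal is proven."""
    for _ in range(max_rounds):
        if symbolic_solve(facts, goal):
            return True
        facts.add(llm_propose(facts))  # only consulted when deduction stalls
    return False

print(solve({"a", "b"}, "goal"))  # the toy problem needs one LLM proposal
```

<p>Note how the LLM's output is never trusted directly: it only feeds the symbolic engine, which is what actually proves the result.</p>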
]]></description><pubDate>Wed, 17 Sep 2025 19:07:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45280051</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=45280051</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45280051</guid></item><item><title><![CDATA[New comment by KallDrexx in "Jujutsu for everyone"]]></title><description><![CDATA[
<p>Your last comment makes me wonder if this is just us talking past each other.<p>I try very hard to keep my PRs focused on one complete unit of work at a time. So when the squash happens, that single commit represents one type of change being made to the system.<p>So when going through history to pinpoint the cause of a bug, I can still see which logical change and unit of work caused it. I don't see the intermediary commits of that unit of work, but I have not personally gotten value out of that level of granularity (especially on team projects, where each person's commit practices are different).<p>If I start working on one PR that grows a refactor or change that makes sense to isolate, I'll make that its own PR, which will also be squashed.</p>
]]></description><pubDate>Mon, 01 Sep 2025 14:46:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45093055</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=45093055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45093055</guid></item><item><title><![CDATA[New comment by KallDrexx in "Jujutsu for everyone"]]></title><description><![CDATA[
<p>Yeah, I don't have much experience outside of GitHub for team projects, so maybe GitLab works better. GitHub just gives up and claims it can't give you a diff since the last review.</p>
]]></description><pubDate>Mon, 01 Sep 2025 14:37:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45092976</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=45092976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45092976</guid></item><item><title><![CDATA[New comment by KallDrexx in "Jujutsu for everyone"]]></title><description><![CDATA[
<p>Not the OP, but for me, no I don't.<p>In a PR branch, I usually have a bunch of WIP commits, especially if I've worked on the PR across day boundaries. For more complex PRs, it's common that I started down one path and then changed to another, in which case a lot of the work in earlier commits is no longer relevant to the picture as a whole.<p>Once a PR has been submitted for review, I <i>NEVER</i> want to change previous commits and force push, because that breaks common tooling that teammates rely on to see what changed since their last review. When you force push, they now have to review the <i>full</i> PR because they can't be guaranteed exactly which lines changed, and your commit messages for the old PR are now muddled.<p>Once the PR has been merged, I prefer it merged as a single squashed commit so it reflects the single atomic PR (because most of the intermediary commits have never actually mattered when debugging a bug caused by a PR).<p>And if I've already merged a commit to main, then I 100% don't want to rewrite that history either.<p>So personally, I have never found the commit history of a PR branch useful enough that rewriting past commits was beneficial. The commit history of main is immensely useful, enough that you never want to rewrite that either.</p>
]]></description><pubDate>Sun, 31 Aug 2025 18:05:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45085388</link><dc:creator>KallDrexx</dc:creator><comments>https://news.ycombinator.com/item?id=45085388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45085388</guid></item></channel></rss>