<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jballanc</title><link>https://news.ycombinator.com/user?id=jballanc</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 09:44:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jballanc" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jballanc in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>I've been working on an ML model capable of robust continuous learning: resistant to catastrophic forgetting without relying on replay, an external memory system, or unbounded parameter growth. Last week I confirmed that the first non-toy, 580M-parameter version soundly beat LoRA, EWC, and full fine-tuning. This week I'm scaling up to 4.4B parameters...</p>
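For readers unfamiliar with the baselines named above: EWC (elastic weight consolidation) fights forgetting by penalizing movement of parameters that were important for earlier tasks. Below is a dependency-free toy sketch of that penalty only; it is the baseline being compared against, not the model described above, and the quadratic "tasks" and Fisher values are invented for illustration:

```python
# Toy EWC (elastic weight consolidation): after task A we anchor the weights
# and penalize task-B updates in proportion to each weight's importance.

def train(theta, target, steps=200, lr=0.1, penalty=None):
    """Gradient descent on 0.5*||theta - target||^2, plus an optional
    EWC penalty 0.5*lam*sum(F_i * (theta_i - anchor_i)^2)."""
    for _ in range(steps):
        grad = [t - g for t, g in zip(theta, target)]
        if penalty:
            anchor, fisher, lam = penalty
            grad = [g + lam * f * (t - a)
                    for g, f, t, a in zip(grad, fisher, theta, anchor)]
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta

theta_a = train([0.0, 0.0], target=[1.0, 0.0])    # learn task A
fisher = [10.0, 0.1]   # made-up importances: param 0 mattered for task A
theta_b = train(theta_a, target=[5.0, 5.0],
                penalty=(theta_a, fisher, 1.0))   # learn task B with EWC
print(theta_b)  # param 0 stays near its task-A value; param 1 moves toward 5
```

With the penalty, each weight settles at a curvature-weighted compromise between the two tasks, which is the whole trick: important weights barely move, unimportant ones are free to learn.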
]]></description><pubDate>Mon, 13 Apr 2026 00:24:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47746053</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47746053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47746053</guid></item><item><title><![CDATA[New comment by jballanc in "We're running out of benchmarks to upper bound AI capabilities"]]></title><description><![CDATA[
<p>We need benchmarks that can distinguish between continuous learning and long-context extrapolation.</p>
]]></description><pubDate>Fri, 10 Apr 2026 22:09:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47724258</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47724258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47724258</guid></item><item><title><![CDATA[Ask HN: What would you do with an AI model capable of continuous learning?]]></title><description><![CDATA[
<p>Let's say, hypothetically, that you had a model that did not need to re-train in order to incorporate new information in its weights. What would you do with such a model?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47711381">https://news.ycombinator.com/item?id=47711381</a></p>
<p>Points: 4</p>
<p># Comments: 6</p>
]]></description><pubDate>Thu, 09 Apr 2026 22:57:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711381</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47711381</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711381</guid></item><item><title><![CDATA[Ask HN: Why don't frontier AI model providers continuously improve their models?]]></title><description><![CDATA[
<p>Just what the title says: why, years after ChatGPT, are we still waiting weeks or months for "the next version" of a model when so much else in the software world has moved toward continuous improvement?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47698187">https://news.ycombinator.com/item?id=47698187</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 09 Apr 2026 01:11:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698187</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47698187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698187</guid></item><item><title><![CDATA[New comment by jballanc in "Music for Programming"]]></title><description><![CDATA[
<p>Based on what you've already mentioned, there's a good chance you're familiar, but on the off chance you're not: "Funkungfusion" (or, really, anything off the Ninja Tune label) might be right up your alley.</p>
]]></description><pubDate>Mon, 06 Apr 2026 17:42:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47664276</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47664276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47664276</guid></item><item><title><![CDATA[New comment by jballanc in "Arm AGI CPU"]]></title><description><![CDATA[
<p>Eh, I'm not so sure it'll be that big a deal. The whole supply chain is so twisted and tangled all the way up and down. Shuffling out one piece doesn't seem like it will, on its own, be so major. Samsung made the chips for the iPhone, then made their own phone, then Apple designed their own chips made by TSMC, now Apple is exploring the possibility of having Samsung make those chips again.<p>Also, it takes a willful ignorance of history for ARM to claim this is the first time they've manufactured hardware. I mean, maaaaybe, teeeeechnically that's true, but ARM was the Acorn RISC Machine, and Acorn was in the hardware business...at least as much as Apple was for the first iPhone.</p>
]]></description><pubDate>Tue, 24 Mar 2026 20:31:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47508758</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47508758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47508758</guid></item><item><title><![CDATA[New comment by jballanc in "Theodosian Land Walls of Constantinople (2025)"]]></title><description><![CDATA[
<p>Fun fact about that cannon: it took so long for the cannon to cool off between shots that the Byzantines were able to patch each hole it caused before the next shot.</p>
]]></description><pubDate>Sun, 22 Mar 2026 23:27:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47483456</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47483456</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47483456</guid></item><item><title><![CDATA[The Singularity Is Coming]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.manhattanmetric.com/blog/2026/03/the-singularity-is-coming">https://www.manhattanmetric.com/blog/2026/03/the-singularity-is-coming</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47467223">https://news.ycombinator.com/item?id=47467223</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 21 Mar 2026 14:14:06 +0000</pubDate><link>https://www.manhattanmetric.com/blog/2026/03/the-singularity-is-coming</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47467223</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47467223</guid></item><item><title><![CDATA[New comment by jballanc in "Ask HN: Where should an independent researcher publish work on ML?"]]></title><description><![CDATA[
<p>Thanks for the pointer! Definitely looks interesting...</p>
]]></description><pubDate>Fri, 20 Mar 2026 15:47:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47456287</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47456287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47456287</guid></item><item><title><![CDATA[Ask HN: Where should an independent researcher publish work on ML?]]></title><description><![CDATA[
<p>A quick bit of background: I have a Ph.D. in Evolutionary Biology and published a peer-reviewed paper on my thesis topic when I was in grad school. Then, I went to work as a software engineer.<p>Now, since leaving academia, I have by no means lost interest in science. If anything, I've followed the world of research with as much interest and attention as ever and, because I no longer have to play the perpetual game of one-upmanship that pervades academic departments, I have been free to spread my interest around to more diverse topics. It's been rather freeing.<p>At the same time, my paycheck depends on delivering code, and so I've delivered code and not academic papers. Then, something interesting happened: I splurged on a Claude subscription and suddenly I have the most attentive research assistant I could have imagined, all without the need for an academic department or a drawn-out grant proposal process.<p>The only hurdle remaining is: where should I publish? Unfortunately, as I don't have an academic affiliation, I cannot get automatic access to publish to arXiv (and anyone I know who could endorse me is focused on the biological sciences from my time in grad school, not cs.LG). I've thought about simply posting to GitHub and linking from my personal site, but I worry whether that's enough to establish priority and/or garner real critique and feedback.<p>I'm contemplating PLOS One, but I don't know the ML community well enough to know if that's an appropriate venue. Any help on how to re-enter the world of scientific publishing (in a new area) would be much appreciated!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47456005">https://news.ycombinator.com/item?id=47456005</a></p>
<p>Points: 2</p>
<p># Comments: 2</p>
]]></description><pubDate>Fri, 20 Mar 2026 15:29:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47456005</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47456005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47456005</guid></item><item><title><![CDATA[New comment by jballanc in "ArXiv declares independence from Cornell"]]></title><description><![CDATA[
<p>I exited academia for industry 15 years ago, and since then I haven't had nearly as much time to read review papers as I would like. My view may therefore be a bit outdated, but one thing I remember finding incredibly useful about review papers is that they provided a venue for speculation.<p>In the typical "experimental report" sort of paper, the focus is narrowed to a knife's edge around the hypothesis, the methods, the results, and the analysis. Yes, there is an "Introduction" and a "Discussion", but increasingly I saw Introductions become a venue for citation bartering (I'll cite your paper in the intro to my next paper if you cite that paper in the intro to your next paper) and Discussions turn into a place to float your next grant proposal before formal scoring.<p>Review papers, on the other hand, were more open to speculation. I remember reading a number that were framed as "here's what has been reported, here's what that likely means...and here's where I think the field could push forward in meaningful ways". Since the veracity of a review is generally judged on how well it covers and summarizes what's already been reported, and since no one is getting their next grant from a review, there's more space for the author to bring in their own thoughts and opinions.<p>I agree that LLMs have largely removed the need for review papers as a reference for the current state of a field...but I'll miss the forward-looking speculation.<p>Science is staring down the barrel of a looming crisis that looks like an echo chamber of epic proportions, and the only way out is to figure out how to motivate reporting negative results and sharing speculative outsider thinking.</p>
]]></description><pubDate>Fri, 20 Mar 2026 14:14:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47454847</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47454847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47454847</guid></item><item><title><![CDATA[Show HN: Vibe – a language for humans and AI to reason about programs together]]></title><description><![CDATA[
<p>For the past 15 years I have wanted to write a Scheme implementation that was, effectively, macros to LLVM. When I first got my hands on Cursor and was looking for some random, non-critical project to "vibe code" on, I remembered this idea from long ago. After a heady weekend of truly vibing with Cursor, having it ask all the right questions and make amazing suggestions, I had a complete project that...crashed immediately on any attempt to compile.<p>The code was complete garbage, but the feeling was real, and the Vibe programming language was born. It's taken a year of nights-and-weekends, but I finally have a self-hosting Scheme variant built on top of LLVM.<p>Along the way, I have learned so much about what it means to code with an LLM as your partner. I have tried, true to its name, to only vibe-code Vibe...though I have had to get my hands dirty and touch the code on a few occasions. (Cursor's Composer 1.5 has a real problem with balancing parens...which makes me think that implementing runtime structured editing might start moving up on the list of desired features.)<p>It should go without saying: Vibe is <i>not</i> even on the horizon of "production ready"...but I have a sense that it could get there faster than you might expect.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47426415">https://news.ycombinator.com/item?id=47426415</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 18 Mar 2026 14:41:29 +0000</pubDate><link>http://vibe-lang.org</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47426415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47426415</guid></item><item><title><![CDATA[LLMs – How did they get so good?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.manhattanmetric.com/blog/2026/03/how-did-llms-get-so-good">https://www.manhattanmetric.com/blog/2026/03/how-did-llms-get-so-good</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47413724">https://news.ycombinator.com/item?id=47413724</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 17 Mar 2026 15:06:19 +0000</pubDate><link>https://www.manhattanmetric.com/blog/2026/03/how-did-llms-get-so-good</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47413724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47413724</guid></item><item><title><![CDATA[LLMs – What aren't they good for?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.manhattanmetric.com/blog/2026/02/what-are-llms-bad-at">https://www.manhattanmetric.com/blog/2026/02/what-are-llms-bad-at</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47335418">https://news.ycombinator.com/item?id=47335418</a></p>
<p>Points: 5</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 11 Mar 2026 13:37:26 +0000</pubDate><link>https://www.manhattanmetric.com/blog/2026/02/what-are-llms-bad-at</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47335418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47335418</guid></item><item><title><![CDATA[New comment by jballanc in "Tony Hoare has died"]]></title><description><![CDATA[
<p>You can always check his entry on the Mathematics Genealogy Project: <a href="https://mathgenealogy.org/id.php?id=45760" rel="nofollow">https://mathgenealogy.org/id.php?id=45760</a></p>
]]></description><pubDate>Tue, 10 Mar 2026 00:21:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47317640</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47317640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47317640</guid></item><item><title><![CDATA[ChatGPT Told Me to Go Work for Anthropic]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.manhattanmetric.com/blog/2026/03/chatgpt-told-me-to-work-for-anthropic">https://www.manhattanmetric.com/blog/2026/03/chatgpt-told-me-to-work-for-anthropic</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47308842">https://news.ycombinator.com/item?id=47308842</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 09 Mar 2026 13:31:24 +0000</pubDate><link>https://www.manhattanmetric.com/blog/2026/03/chatgpt-told-me-to-work-for-anthropic</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47308842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47308842</guid></item><item><title><![CDATA[New comment by jballanc in "What does " 2>&1 " mean?"]]></title><description><![CDATA[
<p>Wait until you find out where "tty" comes from!</p>
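For anyone landing here from the feed, the `2>&1` in the thread's title tells the shell to duplicate file descriptor 2 (stderr) onto wherever descriptor 1 (stdout) currently points. A small Python sketch of the same merge (the echoed strings are arbitrary placeholders):

```python
# subprocess.STDOUT is Python's analogue of the shell's "2>&1": it routes
# the child's stderr into the same stream as its stdout.
import subprocess

res = subprocess.run(
    "echo out; echo err 1>&2",   # one line to stdout, one to stderr
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,    # merge stderr into stdout
    text=True,
)
print(res.stdout)  # both lines appear, since stderr followed stdout
```

In the shell itself, order matters: `cmd > log 2>&1` captures both streams in `log`, while `cmd 2>&1 > log` duplicates fd 2 before fd 1 is redirected, so stderr still goes to the terminal.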
]]></description><pubDate>Fri, 27 Feb 2026 01:44:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47175241</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47175241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47175241</guid></item><item><title><![CDATA[New comment by jballanc in "Study shows two child household must earn $400k/year to afford childcare"]]></title><description><![CDATA[
<p>When I was a young kid, my mother was a “stay-at-home mom”, which meant that she babysat the kids of 5 or 6 of the other families in our neighborhood where both parents worked. For me, it was a wonderful experience growing up having a ready-made group of close friends and my mother close at hand. It did mean that my mother effectively sacrificed her career (though she eventually went to work for my father as his office manager and was instrumental to his success), but I’m certain she was not charging $20k/yr/kid (or whatever the equivalent in 1980s dollars would be).<p>What Americans seem to only just now be waking up to is that lack of work/life balance, lack of family leave accommodations, and loss of community have a very real, very tangible dollar cost. I’m very, very tired of the knee-jerk response to every “socialist” proposal being, “yeah, that’s great, but how are you going to pay for it?”<p>How are you going to pay for not having family leave? How are you going to pay for not having universal healthcare? How are you going to pay for not having tuition-free college for all? These choices have a cost, and Americans are paying that cost every day!</p>
]]></description><pubDate>Tue, 24 Feb 2026 05:36:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47133246</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47133246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47133246</guid></item><item><title><![CDATA[New comment by jballanc in "Facebook is cooked"]]></title><description><![CDATA[
<p>Reporting blatant criminal violations is not the same thing as moderating otherwise-protected speech that could be construed as misleading, offensive, or objectionable in some other way.</p>
]]></description><pubDate>Sun, 22 Feb 2026 23:43:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47116089</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47116089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47116089</guid></item><item><title><![CDATA[New comment by jballanc in "Facebook is cooked"]]></title><description><![CDATA[
<p>The problem with this is that section 230 was specifically created to <i>promote</i> editorializing. Before section 230, online platforms were loath to engage in <i>any</i> moderation because they feared that a hint of moderation would jump them over into the realm of "publisher", where they could be held liable for the veracity of the content they published. Given the choice between no moderation at all or full editorial responsibility, many of the early internet platforms would have chosen no moderation (as full editorial responsibility would have been cost prohibitive).<p>In other words, that filter that keeps Nazis, child predators, doxing, etc. off your favorite platform only exists because of section 230.<p>Now, one could argue that the biggest platforms (Meta, YouTube, etc.) can, at this point, afford the cost of full editorial responsibility, but repealing section 230 under this logic only serves to put up a barrier to entry to any smaller competitor that might dislodge these platforms from their high, and lucrative, perch. I used to believe that the better fix would be to amend section 230 to shield filtering/removal but not selective promotion; TikTok, however, has shown (rather cleverly) that selective filtering/removal can be just as effective as selective promotion of content.</p>
]]></description><pubDate>Sat, 21 Feb 2026 02:38:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47096925</link><dc:creator>jballanc</dc:creator><comments>https://news.ycombinator.com/item?id=47096925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47096925</guid></item></channel></rss>