<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ninjagoo</title><link>https://news.ycombinator.com/user?id=ninjagoo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 16:45:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ninjagoo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ninjagoo in "The Moat or the Commons"]]></title><description><![CDATA[
<p>> Superior architectures will leak pretty quickly via engineers.<p>I agree with the outcome of your premise (i.e., openness), but for different reasons:<p>First, isn't it the case that these bleeding-edge 'newfangled' LLMs are basically variations on the same core ideas from "Attention Is All You Need" from 2017 [1]? Different scale, but still the same basic architecture. Even the "MoE" innovation keeps the Transformer attention stack while replacing or augmenting the dense feed-forward/MLP part with routed expert blocks.<p>And I would argue that <i>Engineers</i> aren't working on new architectures. That would be <i>Researchers</i>, working on<p><pre><code>  State-space models/Mamba (CMU/Princeton ecosystem), 
  Diffusion Language Models (Inception Labs), 
  Long-convolution architectures/Hyena (Stanford etc.), 
  RWKV/Recurrent LLMs (open-source community), 
  Memory-augmented architectures (Google Research/DeepMind?), 
  World models/spatial intelligence (LeCun/Fei-Fei Li/DeepMind), 
  Symbolic/neurosymbolic alternatives, 
  Thousand brains (Numenta).
</code></pre>
That research is still open, so the outcome that you propose (openness) is likely to come to pass. Researchers/Scientists gotta publish; otherwise it's not science (to quote LeCun [2]).<p>[1] <a href="https://arxiv.org/abs/1706.03762" rel="nofollow">https://arxiv.org/abs/1706.03762</a><p>[2] <a href="https://x.com/ylecun/status/1795589846771147018" rel="nofollow">https://x.com/ylecun/status/1795589846771147018</a></p>
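Since the comment distinguishes the MoE block from the dense feed-forward block it replaces, a minimal sketch may help. This is purely illustrative: the dimensions, top-k routing, and softmax renormalization below are my assumptions for a toy example, not any particular lab's design.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, top_k = 16, 64, 4, 2

# Each "expert" is a small two-layer MLP, standing in for the dense
# feed-forward sub-layer of a Transformer block.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.1,
     rng.standard_normal((d_ff, d_model)) * 0.1)
    for _ in range(n_experts)
]
# The router is a learned linear map from token representation to expert scores.
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route a single token vector x through its top-k experts."""
    scores = x @ router
    top = np.argsort(scores)[-top_k:]                          # best-scoring experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # renormalized softmax
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)              # ReLU MLP, router-weighted
    return out

token = rng.standard_normal(d_model)
y = moe_layer(token)
print(y.shape)  # (16,)
```

The architectural point is that attention is untouched: only the MLP sub-layer is swapped for the router-plus-experts combination, so each token activates just `top_k` of the `n_experts` MLPs.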
]]></description><pubDate>Tue, 28 Apr 2026 04:19:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47930406</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47930406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47930406</guid></item><item><title><![CDATA[New comment by ninjagoo in "The Moat or the Commons"]]></title><description><![CDATA[
<p>> American AI was financed on a particular bet. The bet was that frontier models would be the next great monopoly business<p>> The collision between those two facts — that American capital paid for a moat, and that the technology no longer provides one — is the most important force in the AI industry today.<p>> The open-weight ecosystem did not arrive in stages. It arrived in a wave. In late 2024, a Chinese lab named DeepSeek released a model<p>Looking at the assertions above, anyone passingly familiar with AI over the past few years will tell you that open weights and open research were the norm until OpenAI GPT-3 came along, and even then they were forced to release GPT-OSS by the market. So what technology moat? There has never been one in AI. Training 100B+ or trillion+ parameter models in expensive runs was potentially a moat, until the Chinese startups showed in short order that it could be done for $6 million a run. Even the CUDA monopoly seems to be ending.<p>Also, no evidence is referenced to back up any of these assertions. How do they know that the bet was that frontier models would be the next great monopoly business? Especially when there were many from the outset: GPT, Anthropic, Llama, DeepMind, etc.<p>I'd argue that the wholesale replacement of labor was and is the driver behind the capex, not monopoly dreams.<p>The starting premises appear to be, well, faulty. Whither the rest of the article?</p>
]]></description><pubDate>Tue, 28 Apr 2026 03:54:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47930284</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47930284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47930284</guid></item><item><title><![CDATA[New comment by ninjagoo in "China blocks Meta's acquisition of AI startup Manus"]]></title><description><![CDATA[
<p><a href="https://en.wikipedia.org/wiki/Joseph_Nacchio" rel="nofollow">https://en.wikipedia.org/wiki/Joseph_Nacchio</a></p>
]]></description><pubDate>Tue, 28 Apr 2026 03:12:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47930094</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47930094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47930094</guid></item><item><title><![CDATA[New comment by ninjagoo in "Meta tells staff it will cut 10% of jobs"]]></title><description><![CDATA[
<p>> "Agile" organization is even more of a bullshit concept than "Agile" in the team.<p>> Excepting for trivial-size, freshly formed startups, companies cannot be "Agile", because finance and legal and HR and even marketing have constrains setting the tempo - you cannot just drive them with a sprint as if it was a clock signal.<p>Implementations of Agile at different companies can be an issue, yes. But that is to be expected in any large organization, simply because of scale. It doesn't change the fact that the on-the-ground teams at agile orgs work to a different cadence and approach than at traditionally structured companies.<p>There are a few different ways to manage interfacing with parts of the org that need to march to a different beat. That always creates friction, and it has to be managed properly. Any large org can suffer from hubris, middling management skill and capacity, and wasted effort. Problems of scale, I guess.</p>
]]></description><pubDate>Fri, 24 Apr 2026 13:03:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47889642</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47889642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47889642</guid></item><item><title><![CDATA[New comment by ninjagoo in "Meta tells staff it will cut 10% of jobs"]]></title><description><![CDATA[
<p>> Do big tech companies like FB and Google even pretend to be "agile" anymore?<p>Folks from those companies will have to speak up, but my understanding is that yes, internally these large tech orgs use agile methodologies, as opposed to 'traditional' waterfall development.</p>
]]></description><pubDate>Fri, 24 Apr 2026 12:52:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47889525</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47889525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47889525</guid></item><item><title><![CDATA[New comment by ninjagoo in "Meta tells staff it will cut 10% of jobs"]]></title><description><![CDATA[
<p>> here are 2 main drivers of these layoffs:<p>> 1. Correcting pandemic overhiring<p>> 2. In the ~2010-2022 timeframe, tech companies poured all this money into speculative bets<p>Any data/sources on which this might be based? The pandemic was 6 years ago; do these "Agile" (the tech term) companies really carry many unproductive lines of business for so long?<p>> speculative bets that never went anywhere ... think Amazon's Alexa devices division, Google Stadia, and perhaps most famously the Metaverse itself<p>Organizations make speculative bets all the time. Is there an accounting of the profitability of Alexa/Nest etc.?<p>> end of the ZIRP era would have caused companies to kill these inherently unprofitable projects<p>If you plug the years 2020-2026 into the Fed Rate - Unemployment chart at [1], it shows that from 2020-2022, rates were near zero while unemployment spiked during Covid and then fell. From 2022 through 2023, rates rose sharply while unemployment stayed relatively low. In 2024-2025, the labor market softened. You can add the Federal Funds Effective Rate and the Unemployment Rate easily through the menu.<p>Unemployment stayed low through the rise in rates for almost two years prior to 2024. Given that companies operate on a quarterly reporting basis and program/project decisions are at least on that cadence, I don't think the causal chain you're suggesting (Rates-Go-Up -> Projects-Get-Killed -> Layoffs-Increase) quite lines up with the economy-wide data in this exceptional case of 2022-2023.<p>We may have to look elsewhere for the reasons behind the current labor market weakness ... 
cough..*economy*..*trade walls*..cough...*structural re-alignment* [2]...cough...<p>[1] <a href="https://fred.stlouisfed.org/graph/?g=1duFv" rel="nofollow">https://fred.stlouisfed.org/graph/?g=1duFv</a><p>[2] 6% employment decline in 22-25 year old workers <a href="https://digitaleconomy.stanford.edu/app/uploads/2025/11/CanariesintheCoalMine_Nov25.pdf" rel="nofollow">https://digitaleconomy.stanford.edu/app/uploads/2025/11/Cana...</a></p>
]]></description><pubDate>Fri, 24 Apr 2026 12:16:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47889188</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47889188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47889188</guid></item><item><title><![CDATA[New comment by ninjagoo in "NIST scientists create 'any wavelength' lasers"]]></title><description><![CDATA[
<p>>> If you pay attention to cats, you figure out they are fuzzy little “difference engines.”<p>> That must be a mechanism in the brain rather than the eye<p>Check out "<i>A Thousand Brains: A New Theory of Intelligence</i>" [1] by Jeff Hawkins [2], of PalmPilot fame. This theory postulates, in part, and with evidence, that brains are continuously comparing sensory input and movement context with learned models. I found the book to be mind-blowing, so to speak ...<p>[1] <a href="https://www.amazon.com/Thousand-Brains-New-Theory-Intelligence/dp/1541675797/" rel="nofollow">https://www.amazon.com/Thousand-Brains-New-Theory-Intelligen...</a><p>[2] <a href="https://en.wikipedia.org/wiki/Jeff_Hawkins" rel="nofollow">https://en.wikipedia.org/wiki/Jeff_Hawkins</a></p>
]]></description><pubDate>Sun, 19 Apr 2026 15:06:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47824843</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47824843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47824843</guid></item><item><title><![CDATA[New comment by ninjagoo in "Clearwing: Produce similar results as Anthropic Glasswing (Mythos)"]]></title><description><![CDATA[
<p>From Eric Hartford at Lazarus-AI [1]: "Clearwing is a fully open-source vulnerability discovery engine. Crash-first hunting, file-parallel agents, oracle-driven verification, variant hunting, adversarial verification. Works with any LLM."<p>"I tested it with OpenAI Codex 5.4 and reproduced Glasswing's findings. I'm now reproducing results with our own ReAligned model - Qwen3.5 finetuned to Western alignment."<p>"Mythos is certainly a great model. The N-day exploit walkthroughs in Anthropic's blog show real reasoning depth. But it's an incremental improvement..." "The real innovation isn't the model. It's the workflow:<p>- Rank every file in a codebase by attack surface<p>- Fan out hundreds of parallel agents, each scoped to one file<p>- Use crash oracles (AddressSanitizer, UBSan) as ground truth<p>- Run a second verification agent to filter noise<p>- Generate exploits as a triage mechanism for severity<p>That's a pipeline. And pipelines are model-agnostic."<p>Disclaimer: I'm not affiliated with Eric/Lazarus in any way.<p>[1] <a href="https://x.com/QuixiAI/status/2044952124568527298" rel="nofollow">https://x.com/QuixiAI/status/2044952124568527298</a></p>
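The quoted workflow is model-agnostic, as Hartford says; a toy skeleton of it might look like the following. Everything here is hypothetical: the function names, prompts, and `fake_llm` stand-in are mine, not Clearwing's actual code, and the crash-oracle/sanitizer step is elided for brevity.

```python
from concurrent.futures import ThreadPoolExecutor

def rank_by_attack_surface(files, llm):
    # Step 1: score each file's exposure (parsers, network input, unsafe APIs), highest first.
    scored = [(llm(f"Rate 0-10 the attack surface of:\n{path}"), path) for path in files]
    return [path for _, path in sorted(scored, reverse=True)]

def hunt_file(path, llm):
    # Step 2: one agent scoped to a single file proposes crash candidates.
    # (A real pipeline would confirm them against ASan/UBSan crash oracles here.)
    return {"file": path, "candidates": llm(f"Find memory-safety bugs in {path}")}

def verify(finding, llm):
    # Step 3: adversarial second pass; keep only findings a skeptic agent can't dismiss.
    return llm(f"Try to refute this finding: {finding}") != "refuted"

def pipeline(files, llm, workers=8):
    ranked = rank_by_attack_surface(files, llm)
    # Fan out file-scoped agents in parallel, then filter through verification.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        findings = list(pool.map(lambda p: hunt_file(p, llm), ranked))
    return [f for f in findings if verify(f, llm)]

# Any callable str -> str works as `llm`; this stub just exercises the plumbing.
fake_llm = lambda prompt: "0" if "refute" not in prompt else "not refuted"
results = pipeline(["a.c", "b.c"], fake_llm)
print(len(results))  # 2 (both toy findings survive verification)
```

Because `llm` is just a `str -> str` callable, swapping Mythos for Codex or a local Qwen finetune changes one argument, which is the "pipelines are model-agnostic" claim in miniature.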
]]></description><pubDate>Sun, 19 Apr 2026 14:29:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47824574</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47824574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47824574</guid></item><item><title><![CDATA[Clearwing: Produce similar results as Anthropic Glasswing (Mythos)]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/Lazarus-AI/clearwing">https://github.com/Lazarus-AI/clearwing</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47824573">https://news.ycombinator.com/item?id=47824573</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 19 Apr 2026 14:29:51 +0000</pubDate><link>https://github.com/Lazarus-AI/clearwing</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47824573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47824573</guid></item><item><title><![CDATA[New comment by ninjagoo in "NASA Force"]]></title><description><![CDATA[
<p><pre><code>  > the ability to plan long term, which is something I don't think the US is capable of anymore.
</code></pre>
It may seem that way, but this lack is temporary until the pendulum swings back the other way. What is needed is some mechanism to keep progress and planning going even when the pendulum is unfavorable.<p><pre><code>  > the aggressive selfishness and individualism of American society
</code></pre>
It's an error to think the loudest voices are the majority. Also, selfishness and individualism are not necessarily conjoined twins, though it may seem that way at the moment. Americans are generous with their time and money, as one can see from donation stats [1]. The comparative data at [2] is especially eye-opening.<p><pre><code>  > This used to exist, but only within the framework of racial and cultural homogeneity
</code></pre>
This might be a myth. See [2]. Also, cooperative/pro-social behaviors are well documented across a spectrum of biological species, including humans. It might be innate to structured biological life, individual pathologies notwithstanding. "Society" is a thing, after all.<p><pre><code>  > It means some dirty words for Americans
</code></pre>
I think this is an artifact of media capture. We the people need to wrest back control of the medium.<p><pre><code>  > Maybe enough bastards just need to die out.
</code></pre>
There are always new ones being minted, unfortunately. Hence the need for a long-term solution.<p><pre><code>  > How do you make people give a damn?
</code></pre>
Maybe we just need to organize those who do. Any suggestions how?<p>[1] <a href="https://www.nptrust.org/philanthropic-resources/charitable-giving-statistics/" rel="nofollow">https://www.nptrust.org/philanthropic-resources/charitable-g...</a><p>[2] <a href="https://www.nature.com/articles/s41598-025-96009-3" rel="nofollow">https://www.nature.com/articles/s41598-025-96009-3</a></p>
]]></description><pubDate>Sat, 18 Apr 2026 17:56:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47817961</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47817961</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47817961</guid></item><item><title><![CDATA[New comment by ninjagoo in "NASA Force"]]></title><description><![CDATA[
<p>> When you frame it like that it sounds like some kind of vanguard of class-conscious people<p>Less class-conscious and more reality-conscious - there's always going to be a group that's anti-science/anti-rationality because of religion, views, etc. It's when they get into power and stop the progress of science that it becomes an issue.<p>> should try to rebel and establish a, I don't know, dictatorship of the proletariat?<p>No need for anything quite so drastic. And that would be effective only until the pendulum swings the other way. Also, I'm sure that from the anti-science folks' perspective, it's the pro-science folks that are oppressive when the latter are in government.<p>There must be some long-term solution to insulate science from the swings of the pendulum, without devolving into chaos or oppression. Maybe the internet hive-mind can brainstorm a solution. We also need a forum where like-minded people can have this discussion without getting downvoted into oblivion. Any options?</p>
]]></description><pubDate>Sat, 18 Apr 2026 17:23:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47817661</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47817661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47817661</guid></item><item><title><![CDATA[New comment by ninjagoo in "GPT‑Rosalind for life sciences research"]]></title><description><![CDATA[
<p>While this model set (GPT-Rosalind) is limited to certain organizations, the announcement also included the release of a Life Sciences Plugin, which is more broadly available on Codex [1].<p>[1] <a href="https://github.com/openai/plugins/tree/main/plugins/life-science-research" rel="nofollow">https://github.com/openai/plugins/tree/main/plugins/life-sci...</a></p>
]]></description><pubDate>Sat, 18 Apr 2026 13:04:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47815597</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47815597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47815597</guid></item><item><title><![CDATA[New comment by ninjagoo in "NASA Force"]]></title><description><![CDATA[
<p>> Americans clearly don't believe in science anymore<p>There's about a third that lean that way, or at least don't care, and they have gained control of the government because of various factors, namely,<p>part of the middle third disillusioned with economics (left behind) and wanting a change,<p>another part of the middle third staying home because of geopolitics,<p>and yet another part of the middle third falling prey to media biased by right-wing billionaire/corporatist capture.<p>Any suggestions for a long-term fix for this problem?</p>
]]></description><pubDate>Sat, 18 Apr 2026 11:42:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47815120</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47815120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47815120</guid></item><item><title><![CDATA[New comment by ninjagoo in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>> since the only difference is the user's intentions<p>Have these been banned yet: dual-use kitchen items, actual weapons of war for consumer use, dual-use garden chemicals, dual-use household chemicals, etc.? Has human cybersecurity research stopped? Have malware authors stopped research?<p>No? Then this sounds more like hype than a real reason.<p>There's also the possibility that there's a singular anthropic individual who's gained a substantial amount of internal power and is driving user-hostile changes in the product under the guise of cybersecurity.</p>
]]></description><pubDate>Fri, 17 Apr 2026 13:02:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47805484</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47805484</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47805484</guid></item><item><title><![CDATA[New comment by ninjagoo in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>> and people don't come in droves. Because the product is noticeably worse.<p>As of Oct 2025, it appears that OpenAI's market share is 15x that of Anthropic's: 60% vs. 3.5% [1].<p>As of April 2026, OpenAI has 900 million weekly users [2] while Anthropic has 300 million monthly users [1].<p>As of March 2026, OpenAI app downloads were 2.2 million per day, while Anthropic app downloads were 340,000. OpenAI mobile users were 248 million per day, while Anthropic mobile users were 9.4 million. In Feb 2026, ChatGPT had 5.4 billion web visits, while Claude had 290 million web visits. [3]<p>It seems to me that OpenAI operates at a much higher scale than Anthropic. Since you used droves as a proxy for product quality, by that standard Anthropic has a much inferior product. :)<p>[1] <a href="https://sqmagazine.co.uk/claude-vs-chatgpt-statistics/" rel="nofollow">https://sqmagazine.co.uk/claude-vs-chatgpt-statistics/</a>
[2] <a href="https://www.pbs.org/newshour/nation/openai-focuses-on-business-users-amid-competition-with-rival-anthropic" rel="nofollow">https://www.pbs.org/newshour/nation/openai-focuses-on-busine...</a>
[3] <a href="https://www.forbes.com/sites/conormurray/2026/03/06/claude-surges-amid-defense-department-drama-downloads-up-55/" rel="nofollow">https://www.forbes.com/sites/conormurray/2026/03/06/claude-s...</a></p>
]]></description><pubDate>Fri, 17 Apr 2026 10:27:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47804397</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47804397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47804397</guid></item><item><title><![CDATA[New comment by ninjagoo in "The race to train AI robots"]]></title><description><![CDATA[
<p>There's a viral video floating around showing Indian factory workers using head-mounted cameras to capture data for training AI robots. This article has some details on that. The viral video itself is an unsourced Reddit post, unfortunately. [1]<p>[1] <a href="https://www.reddit.com/r/Damnthatsinteresting/comments/1sjdaoy/indian_factory_workers_wearing_headmounted/" rel="nofollow">https://www.reddit.com/r/Damnthatsinteresting/comments/1sjda...</a></p>
]]></description><pubDate>Sun, 12 Apr 2026 19:17:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743318</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47743318</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743318</guid></item><item><title><![CDATA[The race to train AI robots]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.latimes.com/business/story/2025-11-02/inside-californias-rush-to-gather-human-data-for-building-humanoid-robots">https://www.latimes.com/business/story/2025-11-02/inside-californias-rush-to-gather-human-data-for-building-humanoid-robots</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47743292">https://news.ycombinator.com/item?id=47743292</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 12 Apr 2026 19:14:39 +0000</pubDate><link>https://www.latimes.com/business/story/2025-11-02/inside-californias-rush-to-gather-human-data-for-building-humanoid-robots</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47743292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743292</guid></item><item><title><![CDATA[New comment by ninjagoo in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>> In reality, if post-scarcity is possible, some people will be lucky enough to have the means to live that lifestyle while others will still be dying of hunger, exposure and preventable diseases.<p>By definition, that's not a post-scarcity world; and that's already today's world.<p>> It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have.<p>Do you think that's genetic, or environmental? Either way, maybe it will have been trained out of the kids.<p>> it has also been used by people who actively enjoy hurting others, who have caused measurable harm<p>Taxes work the same way too. "The Good Place" explores these second-order and higher-order effects in a surprisingly nuanced fashion.<p>Control over the actions of others, you have not. Keep you from your work, let them not.<p>> What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good<p>These are all things necessary in a society with scarcity. Will they be needed in a post-scarcity society that has presumably solved all disorder that has its roots in scarcity?<p>> With LLMs, I have stopped writing public code at all because the way I see it, it just makes people much richer than me even richer at a much faster rate than I can ever achieve myself.<p>Yes, the futility of our actions can be infuriating, disheartening, and debilitating. The story comes to mind of the chap tossing washed-ashore starfish back into the sea, one by one. There were thousands. When asked why he bothered with such a futile task (he couldn't possibly throw them all back), he answered as he threw the next ones: it matters to this one, it matters to this one, ...<p>Hopefully, your code helped <i>someone</i>. That's a good enough reason to do it.</p>
]]></description><pubDate>Sat, 11 Apr 2026 02:26:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47726672</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47726672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47726672</guid></item><item><title><![CDATA[New comment by ninjagoo in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>> Linus is the original vibe coder.<p>LoL.<p>Jesting aside, OpenHub lists Linus Torvalds as having made 46,338 commits. 45,178 for Linux, 1,118 for Git. His most recent commit was 17 days ago. [1]<p>That is a far cry from a vibe-coder, no? :-)<p>Bit unfair to call his leadership vibe-coding, methinks.<p>[1] <a href="https://openhub.net/accounts/9897" rel="nofollow">https://openhub.net/accounts/9897</a></p>
]]></description><pubDate>Fri, 10 Apr 2026 23:32:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725207</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47725207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725207</guid></item><item><title><![CDATA[New comment by ninjagoo in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>> Just like stealing fractional amounts of money[3] should not be legal, violating the licenses of the training data by reusing fractional amounts from each should not be legal either.<p>I think you'll find that this is not settled in the courts, depending on how the data was obtained. If the data was obtained legally, say a purchased book, courts have been finding that using it for training is fair use (<i>Bartz v. Anthropic, Kadrey v. Meta</i>).<p>Morally the case gets interesting.<p>Historically, there was no such thing as copyright. The English 1710 Statute of Anne establishing copyright as a public law was titled 'for the Encouragement of Learning' and the US Constitution said 'Congress may secure exclusive rights to promote the progress of science and useful arts'; so essentially public benefits driven by the grant of private benefits.<p>The Moral Bottom Line: if you didn't have to eat, would you care about who copies your work as long as you get credited?<p>The more people who copy your work with attribution, the more famous you'll be. Now <i>that's</i> the <i>currency of the future*</i>. [1]<p>You'll do it for the kudos. [2][3]<p><pre><code>  *Post-Scarcity Future. 
  [1] https://en.wikipedia.org/wiki/Post-scarcity
  [2] https://en.wikipedia.org/wiki/The_Quiet_War, et al.
  [3] https://en.wikipedia.org/wiki/Accelerando</code></pre></p>
]]></description><pubDate>Fri, 10 Apr 2026 22:51:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47724743</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47724743</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47724743</guid></item></channel></rss>