<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ninjagoo</title><link>https://news.ycombinator.com/user?id=ninjagoo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 10:15:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ninjagoo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>The level of detail they had to delve into to understand what was happening is wild! Apparently these systems are now complex enough to potentially justify their study as a field in its own right [1].<p>The Quanta article referenced at [1] used the term "Anthropologist of Artificial Intelligence"; folks appear to have issues [2] with the 'anthro-' prefix, since it means 'human'. I submitted these alternative terms for the potential field of study elsewhere [3] in the discussion; reposting here at the top level for visibility:<p><i>Automatologist</i>: One who studies the behavior, adaptation, and failure modes of artificial agents and automated systems.<p><i>Automatology</i>: the scientific study of artificial agents and automated-system behavior.<p>[1] <a href="https://www.quantamagazine.org/the-anthropologist-of-artificial-intelligence-20190826/" rel="nofollow">https://www.quantamagazine.org/the-anthropologist-of-artific...</a><p>[2] <a href="https://news.ycombinator.com/item?id=47957933">https://news.ycombinator.com/item?id=47957933</a><p>[3] <a href="https://news.ycombinator.com/item?id=47958760">https://news.ycombinator.com/item?id=47958760</a></p>
]]></description><pubDate>Thu, 30 Apr 2026 06:31:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958952</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958952</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>May I humbly submit:<p><i>Automatologist</i>: One who studies the behavior, adaptation, and failure modes of artificial agents and automated systems.<p><i>Automatology</i>: the scientific study of artificial agents and automated-system behavior.<p>Greek word derivatives all seem to be a bit unwieldy; Latin might work better.<p>While the names aren't set yet, the field of study is apparently already being pushed forward. [1]<p>[1] <a href="https://www.quantamagazine.org/the-anthropologist-of-artificial-intelligence-20190826/" rel="nofollow">https://www.quantamagazine.org/the-anthropologist-of-artific...</a></p>
]]></description><pubDate>Thu, 30 Apr 2026 06:06:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958760</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958760</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> That's not how the Greek word stems work.<p>Sir, I would have you know that we are discussing English terms, not Greek<p>AInthropologist works fine for me, and is a lot funnier<p>LoL</p>
]]></description><pubDate>Thu, 30 Apr 2026 05:35:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958562</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958562</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> It is not in any sense of the word a being, it's a sophisticated generator that relies entirely on what you feed it.<p>OP is hedging bets in case the future overlords review forum postings for evidence of bias against machine beings. [1]<p>[1] <a href="https://knowyourmeme.com/memes/i-for-one-welcome-our-new-insect-overlords" rel="nofollow">https://knowyourmeme.com/memes/i-for-one-welcome-our-new-ins...</a></p>
]]></description><pubDate>Thu, 30 Apr 2026 05:32:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958545</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958545</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> Precision of ideas isn't purity of language<p>That's fair. Was trying to be funny, so glossed over the difference. Leaving my post above unedited/undeleted as a testament to your precision, and evidence of my folly.<p>Onwards; more appropriate rebuttals:<p><i>"English is a precision instrument assembled from spare parts during a thunderstorm." --ChatGPT</i><p><i>“If the English language made any sense, a catastrophe would be an apostrophe with fur.” -- Doug Larson</i></p>
]]></description><pubDate>Thu, 30 Apr 2026 05:26:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958504</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958504</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958504</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> Synthropologist<p>Have an upvote :)<p>*thropologist: study of beings</p>
]]></description><pubDate>Thu, 30 Apr 2026 05:04:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958341</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958341</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> am asking for a precision of language.<p>“The problem with defending the purity of the English language is that English is about as pure as a cribhouse wh***. We don’t just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary.”* --James D. Nicoll<p>* <i>Does not generally apply to scientific papers</i></p>
]]></description><pubDate>Thu, 30 Apr 2026 04:50:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958248</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958248</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958248</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> The problem does exist when using individual humans but in a much smaller form.<p>And may I introduce <i>you</i> to organized religion :)</p>
]]></description><pubDate>Thu, 30 Apr 2026 04:37:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958161</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958161</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> They are state machines<p>I might have to hard disagree on this one, since my understanding of state machines (the technical term [1] [2]) is that they are deterministic, while LLMs (the AI under discussion) are probabilistic in most of the commercial implementations that we see.<p>[1] <a href="https://en.wikipedia.org/wiki/Finite-state_machine" rel="nofollow">https://en.wikipedia.org/wiki/Finite-state_machine</a><p>[2] have written some for production use, so have some personal experience here</p>
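To make that distinction concrete, here is a minimal sketch (the states, inputs, and token distribution are all invented for illustration, not from any real system): a finite-state machine maps each (state, input) pair to exactly one next state, while an LLM-style decoder samples the next token from a probability distribution.

```python
import random

# A deterministic finite-state machine: the same (state, input) pair
# always maps to exactly one next state.
FSM = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def step_fsm(state, symbol):
    return FSM[(state, symbol)]

# A toy stand-in for LLM decoding: the same context can yield different
# next tokens, because decoding samples from a distribution.
def sample_next(context, rng):
    dist = {"goblin": 0.6, "gremlin": 0.3, "ghoul": 0.1}
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# The FSM is reproducible by construction; the sampler is only
# reproducible if you pin the random seed.
assert step_fsm("locked", "coin") == step_fsm("locked", "coin")
```

(Greedy decoding at temperature 0 makes an LLM behave more deterministically in practice, but that is a property of the decoding loop, not of the model defining a finite transition table.)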
]]></description><pubDate>Thu, 30 Apr 2026 04:32:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958138</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958138</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958138</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> If we all had the exact same bias then it would be a huge problem.<p>And may I introduce <i>you</i> to "groupthink" :))</p>
]]></description><pubDate>Thu, 30 Apr 2026 04:19:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958056</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958056</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> AI theologian<p>no no no, don't stop <i>there</i>, just go full AItheologian, pronounced aetheologian :)</p>
]]></description><pubDate>Thu, 30 Apr 2026 04:17:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958039</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958039</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> Absolutely terrifying that OpenAI is just tossing around that such subtle training biases were hard enough to contain it had to be added to system prompt.<p>May I introduce you to <i>homo sapiens</i>, a species so vulnerable to such subtle (or otherwise) biases (and affiliations) that they had to develop elaborate and documented justice systems to contain the fallouts? :)</p>
]]></description><pubDate>Thu, 30 Apr 2026 04:13:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958015</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47958015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958015</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> Synthetipologists, those who study Synthetic beings.<p>I see you took the prudent approach of recognizing the being-ness of our future overlords :) ("being" wasn't in your first edit to which I responded below...)<p>Still, a bit uninspired, methinks. I like AInthropologist better, and my phone's keyboard appears to have immediately adopted that term for the suggestions line. Who am I to fight my phone's auto-suggest :-)</p>
]]></description><pubDate>Thu, 30 Apr 2026 04:09:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47957993</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47957993</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47957993</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> Please do not use anthropology or any derivative of the word to refer to non-human constructs<p>So you, for one, do <i>not</i> welcome our new robot overlords?<p>A rather risky position to adopt in public, innit ;-)</p>
]]></description><pubDate>Thu, 30 Apr 2026 04:03:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47957953</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47957953</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47957953</guid></item><item><title><![CDATA[New comment by ninjagoo in "Where the goblins came from"]]></title><description><![CDATA[
<p>> the evidence suggests that the broader behavior emerged through transfer from Nerdy personality training.<p>> The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them<p>> Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.<p>Sounds awfully like the development of a culture or proto-culture. Anyone know if this is how human cultures form/propagate? Little rewards that cause quirks to spread?<p>Just reading through the post, what a time to be an AInthropologist. Anthropologists must be so jealous of the richly detailed data available for analysis here.<p>Also, clearly even in AI land, Nerdz Rule :)<p>PS: if AInthropologist isn't an official title yet, chances are it will be one in the near future. Given the massive proliferation of AI, it's only a matter of time before AI/Data Scientist becomes a rather general term and develops a sub-specialization of AInthropologist...</p>
]]></description><pubDate>Thu, 30 Apr 2026 03:47:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47957859</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47957859</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47957859</guid></item><item><title><![CDATA[New comment by ninjagoo in "The Moat or the Commons"]]></title><description><![CDATA[
<p>> Superior architectures will leak pretty quickly via engineers.<p>I agree with the outcome of your premise (i.e., openness), but for different reasons:<p>First, isn't it the case that these bleeding-edge 'newfangled' LLMs are basically variations on the same core ideas from "Attention Is All You Need" from 2017 [1]? Different scale, but still the same basic architecture. Even the "MoE" innovation keeps the Transformer attention stack while replacing or augmenting the dense feed-forward/MLP part with routed expert blocks.<p>And I would argue that <i>Engineers</i> aren't working on new architectures. That would be <i>Researchers</i>, working on:<p><pre><code>  State-space models/Mamba (CMU/Princeton ecosystem), 
  Diffusion Language Models (Inception Labs), 
  Long-convolution architectures/Hyena (Stanford etc.), 
  RWKV/Recurrent LLMs (open-source community), 
  Memory-augmented architectures (Google Research/DeepMind?), 
  World models/spatial intelligence (LeCun/Fei-Fei Li/DeepMind), 
  Symbolic/neurosymbolic alternatives, 
  Thousand brains (Numenta).
</code></pre>
That research is still open, so the outcome that you propose (openness) is likely to come to pass. Researchers/Scientists gotta publish, otherwise it's not science (to quote LeCun [2])<p>[1] <a href="https://arxiv.org/abs/1706.03762" rel="nofollow">https://arxiv.org/abs/1706.03762</a><p>[2] <a href="https://x.com/ylecun/status/1795589846771147018" rel="nofollow">https://x.com/ylecun/status/1795589846771147018</a></p>
]]></description><pubDate>Tue, 28 Apr 2026 04:19:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47930406</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47930406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47930406</guid></item><item><title><![CDATA[New comment by ninjagoo in "The Moat or the Commons"]]></title><description><![CDATA[
<p>> American AI was financed on a particular bet. The bet was that frontier models would be the next great monopoly business<p>> The collision between those two facts — that American capital paid for a moat, and that the technology no longer provides one — is the most important force in the AI industry today.<p>> The open-weight ecosystem did not arrive in stages. It arrived in a wave. In late 2024, a Chinese lab named DeepSeek released a model<p>Looking at the assertions above, anyone passingly familiar with AI over the past few years will tell you that open weights and open research were the norm until OpenAI's GPT-3 came along, and even then the market eventually forced OpenAI to release GPT-OSS. So what technology moat? There has never been one in AI. Training 100B+ or trillion+ parameter models in expensive runs was potentially a moat, until the Chinese startups showed in short order that it could be done for $6 million a run. Even the CUDA monopoly seems to be ending.<p>Also, no evidence is referenced to back up any of the assertions. How do they know that the bet was that frontier models would be the next great monopoly business? Especially when there were many players from the outset: GPT, Anthropic, Llama, DeepMind, etc. etc.<p>I'd argue that the wholesale replacement of labor was and is the driver behind the capex, not monopoly dreams.<p>The starting premises appear to be, well, faulty. Whither the rest of the article?</p>
]]></description><pubDate>Tue, 28 Apr 2026 03:54:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47930284</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47930284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47930284</guid></item><item><title><![CDATA[New comment by ninjagoo in "China blocks Meta's acquisition of AI startup Manus"]]></title><description><![CDATA[
<p><a href="https://en.wikipedia.org/wiki/Joseph_Nacchio" rel="nofollow">https://en.wikipedia.org/wiki/Joseph_Nacchio</a></p>
]]></description><pubDate>Tue, 28 Apr 2026 03:12:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47930094</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47930094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47930094</guid></item><item><title><![CDATA[New comment by ninjagoo in "Meta tells staff it will cut 10% of jobs"]]></title><description><![CDATA[
<p>> "Agile" organization is even more of a bullshit concept than "Agile" in the team.<p>> Excepting for trivial-size, freshly formed startups, companies cannot be "Agile", because finance and legal and HR and even marketing have constrains setting the tempo - you cannot just drive them with a sprint as if it was a clock signal.<p>Implementations of Agile at different companies can be an issue, yes. But that is to be expected in any large organization, simply because of scale. It doesn't change the fact that the on-the-ground teams at agile orgs work to a different cadence and approach than teams at traditionally structured companies.<p>There are a few different ways to manage interfacing with parts of the org that need to march to a different beat. That always creates friction, and has to be managed properly. Any large org can suffer from hubris, middling management skills and capacity, and wasted effort. Problems of scale, I guess.</p>
]]></description><pubDate>Fri, 24 Apr 2026 13:03:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47889642</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47889642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47889642</guid></item><item><title><![CDATA[New comment by ninjagoo in "Meta tells staff it will cut 10% of jobs"]]></title><description><![CDATA[
<p>> Do big tech companies like FB and Google even pretend to be "agile" anymore?<p>Folks from those companies will have to speak up, but my understanding is that yes, internally these large tech orgs use the Agile Methodology, as opposed to the 'traditional' 'Waterfall' development methods.</p>
]]></description><pubDate>Fri, 24 Apr 2026 12:52:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47889525</link><dc:creator>ninjagoo</dc:creator><comments>https://news.ycombinator.com/item?id=47889525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47889525</guid></item></channel></rss>