<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: aniijbod</title><link>https://news.ycombinator.com/user?id=aniijbod</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 04:27:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=aniijbod" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Why, After All These Years, MZI-Based Transistorlessness Might Finally Be Here]]></title><description><![CDATA[
<p>Article URL: <a href="https://write.as/mnggfj7asl07k">https://write.as/mnggfj7asl07k</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47770405">https://news.ycombinator.com/item?id=47770405</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 14 Apr 2026 19:37:51 +0000</pubDate><link>https://write.as/mnggfj7asl07k</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=47770405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47770405</guid></item><item><title><![CDATA[New comment by aniijbod in "Bertrand Russell on Apricots (1935)"]]></title><description><![CDATA[
<p>The History of the word "Apricot"<p>The word originally entered English as an adaptation of the Portuguese albricoque or Spanish albaricoque.<p>However, it was subsequently changed to match the related French word abricot (where the 't' is silent).<p>It is also useful to compare this to the Italian albercocca or albicocca and the Old Spanish albarcoque.<p>These all stem from the Spanish-Arabic al-borcoque, which itself comes from the Arabic al-burqūq (literally "the" + "birqūq").<p>This Arabic term was adapted from Greek, appearing in the writings of Dioscorides around the year 100 AD.<p>The Greek word was probably adapted from the Latin præcoquum, a variant of præcox (plural præcocia), which translates to "early-ripe" or "ripe in summer."<p>In earlier Roman times, the fruit was actually called the "Armenian plum" or "Armenian apple."<p>By around the year 350, the writer Palladius was using both terms, referring to them as "Armenian or early-ripe" fruits.<p>The reason we use a "p" in English (apricot) instead of a "b" (abricot) is likely due to a mistake in etymology.<p>In 1617, the scholar Minsheu explained the name as if it meant in aprico coctus, or "cooked in a sunny place."<p>This "sunny" explanation stuck, even though it was technically incorrect!</p>
]]></description><pubDate>Wed, 28 Jan 2026 20:23:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46801018</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46801018</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46801018</guid></item><item><title><![CDATA[New comment by aniijbod in "Case study: Creative math – How AI fakes proofs"]]></title><description><![CDATA[
<p>In the psychology of creativity, there is a class of phenomena that distort the motivational setting for creative problem-solving, referred to as 'extrinsic rewards'. Management theory bumped into this kind of phenomenon with the first appearance of 'gamification' as a motivational toolkit, where 'scores' and 'badges' were awarded to participants in online activities. The psychological community reacted by pointing out that earlier research had shown that whilst extrinsics can indeed (at least initially) boost participation by introducing notions of competitiveness, they ultimately turned out to be poor substitutes for the far more sustainable and productive intrinsic motivational factors, like curiosity, if it could be stimulated effectively (something which itself inevitably required more creativity on the part of the designer of the motivational resources). It seems that the motivational analogue in inference engines is an extrinsic reward process.</p>
]]></description><pubDate>Mon, 26 Jan 2026 02:40:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46761232</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46761232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46761232</guid></item><item><title><![CDATA[New comment by aniijbod in "A Proclamation Regarding the Restoration of the Dash"]]></title><description><![CDATA[
<p>"WHEREAS, the Large Language Model has merely mimicked a sophistication it cannot truly possess": says who(m)?</p>
]]></description><pubDate>Fri, 26 Dec 2025 19:18:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46395232</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46395232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46395232</guid></item><item><title><![CDATA[Ask HN: Will SLMs be what bursts the LLM bubble cos you can run them on a phone?]]></title><description><![CDATA[
<p>Almost no latency, they don't even need a top-end phone, they understand what you say.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46372291">https://news.ycombinator.com/item?id=46372291</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 24 Dec 2025 03:58:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46372291</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46372291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46372291</guid></item><item><title><![CDATA[The Humanoid Redundancy Principle]]></title><description><![CDATA[
<p>If a production task truly requires a human, the production line is incorrectly designed.
If it does not require a human, a humanoid adds no value.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46353256">https://news.ycombinator.com/item?id=46353256</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 22 Dec 2025 11:21:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46353256</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46353256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46353256</guid></item><item><title><![CDATA[New comment by aniijbod in "The REAL Reason Behind Enshittification"]]></title><description><![CDATA[
<p>Personal brands developed by ChatGPT</p>
]]></description><pubDate>Mon, 15 Dec 2025 18:39:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46278498</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46278498</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46278498</guid></item><item><title><![CDATA[The REAL Reason Behind Enshittification]]></title><description><![CDATA[
<p>Sneer all you like at marketing consultants, but once profit has plateaued, the 'give it all away free to get traction', 'keep innovating just to get eyeballs', 'blow them away with Zappos-level UX' brigade (anyone remember Zappos? Holacracy? Amazon buying them for a billion?) gets replaced with 'accountants' who cut all of that 'waste' out, removing everything they call shit. Whatever you think of marketing bozos, the enshittification is what happens when they're suddenly conspicuous by their absence.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46276112">https://news.ycombinator.com/item?id=46276112</a></p>
<p>Points: 2</p>
<p># Comments: 3</p>
]]></description><pubDate>Mon, 15 Dec 2025 15:55:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46276112</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46276112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46276112</guid></item><item><title><![CDATA[Could Endpoint SLMs Replace Cloud LLMs? Would Datacenter Race Shudder to a Halt?]]></title><description><![CDATA[
<p>Yeah, an SLM on an endpoint like a phone will have fresh latency issues as it goes online to fill gaps in its inference engine's knowledge base that a cloud LLM might not have. But cloud LLMs aren't exactly latency-free either, so the latency/performance issue isn't necessarily the LLM's winning card.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46079091">https://news.ycombinator.com/item?id=46079091</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 28 Nov 2025 14:44:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46079091</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46079091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46079091</guid></item><item><title><![CDATA[Designable Emergence: The Next Frontier After the Artificial Nucleolus]]></title><description><![CDATA[
<p>Article URL: <a href="https://medium.com/@peter_9588/designable-emergence-the-next-frontier-after-the-artificial-nucleolus-9cbba87e2b2e">https://medium.com/@peter_9588/designable-emergence-the-next-frontier-after-the-artificial-nucleolus-9cbba87e2b2e</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46064152">https://news.ycombinator.com/item?id=46064152</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 27 Nov 2025 00:53:24 +0000</pubDate><link>https://medium.com/@peter_9588/designable-emergence-the-next-frontier-after-the-artificial-nucleolus-9cbba87e2b2e</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=46064152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46064152</guid></item><item><title><![CDATA[Composing the Idea: Why "Next Word Prediction" Misses the Point]]></title><description><![CDATA[
<p>Article URL: <a href="https://medium.com/@peter_9588/composing-the-idea-why-next-word-prediction-misses-the-point-81eea25813f9">https://medium.com/@peter_9588/composing-the-idea-why-next-word-prediction-misses-the-point-81eea25813f9</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45804435">https://news.ycombinator.com/item?id=45804435</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 03 Nov 2025 21:06:34 +0000</pubDate><link>https://medium.com/@peter_9588/composing-the-idea-why-next-word-prediction-misses-the-point-81eea25813f9</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=45804435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45804435</guid></item><item><title><![CDATA[New comment by aniijbod in "Show HN: Strange Attractors"]]></title><description><![CDATA[
<p>I don't care about the math, the computation, the physics. This is just by far the most beautiful thing(s) I have ever seen.</p>
]]></description><pubDate>Sat, 01 Nov 2025 03:22:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45779010</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=45779010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45779010</guid></item><item><title><![CDATA[Tell HN: Avoid ostentatious eloquence and crisp structure or be slop]]></title><description><![CDATA[
<p>I used to pride myself on my wordsmithery, now it condemns itself and myself to opprobrium or obscurity, but I defiantly refuse to resort to deliberate typos or humaniser apps. I'm even sorely tempted to go back to peppering my meisterwerks with em dashes.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45192423">https://news.ycombinator.com/item?id=45192423</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 10 Sep 2025 02:30:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45192423</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=45192423</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45192423</guid></item><item><title><![CDATA[Post AI bubble pitch: all mentions of AI suddenly conspicuous by their absence?]]></title><description><![CDATA[
<p>Today's No. 1 investability credential: maybe it ends up as tomorrow's VC third rail?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44972149">https://news.ycombinator.com/item?id=44972149</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 21 Aug 2025 12:53:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44972149</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=44972149</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44972149</guid></item><item><title><![CDATA[The bet against AI was quite safe, until it wasn't]]></title><description><![CDATA[
<p>There was a firm, implicit conviction in business that the things AI would decimate still had a future, because AI was still a long way off.<p>Like the time traveler who went back in time but never bet on things that actually happened, those with the most faith in today's developments before they took off never shorted the things AI would undermine, at least as far as we know. But maybe the reason we don't know is that AI hasn't properly undermined them yet?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44930851">https://news.ycombinator.com/item?id=44930851</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 17 Aug 2025 11:29:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44930851</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=44930851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44930851</guid></item><item><title><![CDATA[New comment by aniijbod in "What's the strongest AI model you can train on a laptop in five minutes?"]]></title><description><![CDATA[
<p>Oh no! I thought that was Windows 11</p>
]]></description><pubDate>Thu, 14 Aug 2025 21:14:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44905762</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=44905762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44905762</guid></item><item><title><![CDATA[New comment by aniijbod in "What's the strongest AI model you can train on a laptop in five minutes?"]]></title><description><![CDATA[
<p>Let the AI efficiency olympics begin!<p>On a laptop, on a desktop, on a phone?<p>Train for 5 minutes, an hour, a day, a week?<p>On a boat? With a goat?</p>
]]></description><pubDate>Thu, 14 Aug 2025 13:01:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44899879</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=44899879</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44899879</guid></item><item><title><![CDATA[Why Duracell Hasn't Embraced Lithium Iron Phosphate – Yet]]></title><description><![CDATA[
<p>Article URL: <a href="https://old.reddit.com/r/batteries/comments/1k4gm19/why_duracell_hasnt_embraced_lithium_iron/">https://old.reddit.com/r/batteries/comments/1k4gm19/why_duracell_hasnt_embraced_lithium_iron/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43753941">https://news.ycombinator.com/item?id=43753941</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 21 Apr 2025 16:48:24 +0000</pubDate><link>https://old.reddit.com/r/batteries/comments/1k4gm19/why_duracell_hasnt_embraced_lithium_iron/</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=43753941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43753941</guid></item><item><title><![CDATA[Hawking's Reputation: A Black Hole Which Swallowed His Debt to Penrose]]></title><description><![CDATA[
<p>I grew up thinking Stephen Hawking was the most important scientist in the world.
Not because I understood his work. Not even because I cared much about physics back then.
But because of the image: the man in the wheelchair, locked in his body, talking through a machine, and somehow explaining the entire universe better than anyone else alive. It was unbelievable. He was a symbol. To me, he looked like tragedy and genius rolled into one.
And I assumed—like almost everyone else I know—that if he was that famous for discovering black holes, he must’ve been the one who discovered them.
But recently, I read Patchen Barss’s new biography. And I honestly still haven’t recovered.
Because it turns out, that’s not how it happened. Not even close.
It was Penrose who did the hard part.
The guy who actually figured out how black holes work? The one who proved they weren’t just possible, but guaranteed under Einstein’s theory?
It wasn’t Hawking.
It was Roger Penrose.
A name I barely knew until about two months ago.
In 1965, Penrose came out of nowhere with this totally new kind of mathematics that proved what nobody had been able to prove before: that if a massive star collapses, it doesn’t just die quietly—it crushes space itself and creates a singularity. That’s the real origin of black hole theory as we know it.
Then Hawking came in, picked up Penrose’s ideas, and applied them to the entire universe. The Big Bang, cosmic beginnings, time running backward—all of that. It was very clever. He made the whole thing his own. But he started with Penrose’s playbook.
I didn’t know any of this
And honestly, I think that’s the most shocking part.
Nobody told me.
Not in school. Not in documentaries. Not even in the obituaries I read when Hawking died.
Because let’s face it: Hawking wasn’t famous because of his black hole work.
He was famous because he was a genius in a completely disabled body. Because of the robotic voice. Because of how impossible his life was. That’s what made him stand out. That’s what people saw. His love life. His wives. His kids.
He became the symbol of scientific brilliance because he looked like someone who shouldn’t be able to speak at all, let alone explain the universe.
And I get why that stuck with people. I really do. But it also meant no one looked too closely at who actually did what.
Hawking’s death, Penrose’s prize, and… still no shift?
Hawking died in 2018, and I remember the world stopping for a minute.<p>Then, two years later, in 2020, Penrose got the Nobel Prize—finally—for his black hole work.
No shared credit. No Hawking. Just Penrose.
And still, the public story didn’t change. Hawking was still the name people knew. Penrose stayed basically invisible among all the other Nobel winners that nobody really remembers or cares about.
To much of the public, it actually seemed unfair that Penrose was getting the Nobel that should have gone to Hawking.
Until now
It’s only with the release of Barss’s new book—right now, years after Hawking’s death and even after Penrose’s Nobel—that I finally saw the whole picture for what it is.
It was Penrose who changed physics. The one who did the part that actually got recognized at the highest level.
And the fact that it took this long for someone to put it all together in one place—and say it clearly—is kind of unbelievable.
Why it gets to me
I respected Hawking. I still do. But his story became so massive, so unshakable, that it swallowed everything around it—including Penrose.
And I don’t want that to happen anymore.
Now when people talk about Hawking, I can’t not mention Penrose. I can’t let the story go back to being just one guy overcoming the odds. It was always more than that. It was always a team effort. And Penrose was the one who cracked the code first.
So yeah, Hawking’s reputation really did act like a black hole. It pulled everything in, and for a long time, no light escaped.
Thanks to Barss, maybe it finally has.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43586250">https://news.ycombinator.com/item?id=43586250</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 04 Apr 2025 18:42:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43586250</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=43586250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43586250</guid></item><item><title><![CDATA[Structuring AI cognition around game-like principles]]></title><description><![CDATA[
<p>To structure AI cognition around game-like principles in neural networks, we must move beyond logic trees and embrace latent spaces, predictive models, and emergent learning.<p>1. From Logic Trees to Latent Spaces<p>Symbolic AI relies on explicit rules (if X, then Y), while neural networks encode information in latent spaces—continuous, high-dimensional structures that capture relationships implicitly.<p>Challenge:<p>How do we shape latent spaces so game-like structures emerge, enabling neural networks to interact with information as if playing a game?<p>Instead of hand-coded strategies, we must design architectures that naturally develop game-like reasoning through optimization.<p>2. From Rule-Based Games to Reinforcement Learning (RL)
Games involve feedback, prediction, and strategy formation, aligning with reinforcement learning (RL):<p>Predicting outcomes = simulating moves.<p>Refining strategies = adapting through trial and error.<p>Developing world models = optimizing future choices.<p>Challenge:<p>Can we generalize RL structures beyond reward-driven environments, making learning game-like even outside traditional RL frameworks?<p>Self-play, curiosity-driven exploration, and intrinsic motivation push RL beyond explicit games into general cognition.<p>3. From Decision Trees to Continuous Prediction Loops
Symbolic AI treats cognition as discrete steps; neural networks continuously predict and update expectations. This mirrors predictive processing, where:
The brain (or AI) anticipates sensory inputs.
Errors update internal models, much like refining a game strategy.<p>Challenge:<p>Can we structure AI cognition around predictive loops rather than strict reward maximization?
This aligns with active inference, where minimizing prediction error becomes the "game" itself.
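<p>As a toy illustration of such a predictive loop (a minimal sketch under an assumed linear environment, not an implementation of active inference proper), an agent can treat shrinking its own prediction error as the entire "game": it anticipates each observation, measures its surprise, and nudges its internal model accordingly.</p>

```python
import random

# Minimal predictive-loop sketch (illustrative assumptions throughout):
# the environment follows a hidden linear rule, and the agent's only
# objective is to reduce its own prediction error.
random.seed(0)
TRUE_SLOPE = 0.8   # hidden "rule" of the environment (assumed for the demo)
weight = 0.0       # the agent's internal model of that rule
lr = 0.05          # learning rate

for step in range(2000):
    x = random.uniform(-1.0, 1.0)              # incoming sensory input
    prediction = weight * x                    # anticipate the outcome
    observed = TRUE_SLOPE * x + random.gauss(0.0, 0.01)
    error = observed - prediction              # prediction error ("surprise")
    weight += lr * error * x                   # update the model to shrink error

print(round(weight, 1))
```

<p>No reward signal appears anywhere: the learned weight converges toward the hidden slope purely by error minimization, which is the point the paragraph above is making.</p>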
4. From Hardcoded Game Rules to Emergent Learning
Symbolic AI relies on predefined mechanics (e.g., chess rules), while neural networks thrive on unstructured data.
A game-like AI must:<p>Discover meaningful rules autonomously.<p>Learn exploratory behaviors without explicit incentives.
Generalize strategies across domains.<p>Challenge:<p>Can AI construct its own "games" from raw data, learning useful representations without predefined objectives?
This requires self-supervised learning and meta-learning—teaching AI how to learn.<p>5. From External Tasks to a Game-Like Cognitive Framework
Traditional AI sees games as external challenges. But human cognition is game-like by nature, constantly refining strategies.<p>A truly game-like AI must:<p>Interact with all data as an adaptive challenge.
Set its own challenges, much like a player defining objectives.<p>Develop game-theoretic relationships with its environment.<p>Challenge:<p>Can AI treat all interactions—perception, memory, learning—as internal "games" where it dynamically sets rules and strategies?<p>This suggests that game-like cognition should be a fundamental AI principle, not just an application.<p>Conclusion: Can AI "Play" Its Way to Intelligence?<p>If cognition is fundamentally game-like, AI must go beyond playing games—it must turn reality into an evolving, self-directed learning process.<p>Instead of being trained to win pre-set games, AI should be designed to play its way to understanding, setting its own objectives and iterating like a skilled player refining strategies.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42886254">https://news.ycombinator.com/item?id=42886254</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 31 Jan 2025 10:00:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=42886254</link><dc:creator>aniijbod</dc:creator><comments>https://news.ycombinator.com/item?id=42886254</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42886254</guid></item></channel></rss>