<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: trashtester</title><link>https://news.ycombinator.com/user?id=trashtester</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 09:23:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=trashtester" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by trashtester in "Why Greenland's natural resources are nearly impossible to mine"]]></title><description><![CDATA[
<p>The main problem with ice is that it moves all the time. The fastest glaciers in Greenland move up to 46m per day. Also, any tunnel created in fast-moving ice could easily be crushed by the pressure of the ice.</p>
]]></description><pubDate>Fri, 16 Jan 2026 15:39:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46647520</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=46647520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46647520</guid></item><item><title><![CDATA[New comment by trashtester in "Human brains are preconfigured with instructions for understanding the world"]]></title><description><![CDATA[
<p>Indeed. But most instincts involve elements of learning. That means the instincts may be stored using a much smaller number of bits than a traditional IF-THEN-ELSE computer program would require.<p>For instance, the pattern the brain seeks to optimize when learning to walk may be much smaller than the full algorithm for walking.<p>And if the brain learns quickly enough (and if an animal starts learning elements such as balance and leg movement even before being born), walking may be learned in minutes instead of months.</p>
]]></description><pubDate>Tue, 25 Nov 2025 23:25:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46052078</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=46052078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46052078</guid></item><item><title><![CDATA[New comment by trashtester in "Implications of AI to schools"]]></title><description><![CDATA[
<p>For now, I don't think Sora is actually able to produce all the text. Maybe next year.</p>
]]></description><pubDate>Tue, 25 Nov 2025 09:04:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46043881</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=46043881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46043881</guid></item><item><title><![CDATA[New comment by trashtester in "Implications of AI to schools"]]></title><description><![CDATA[
<p>Student unions tend to focus on all sorts of other issues; I wouldn't trust them to handle cases like this.<p>The only way to reliably prevent the use of AI tools without punishing innocent students is to monitor students while they work.<p>Schools can do that by having essays written on premises, either by hand or on computers managed by the school.<p>But students who worry they will be targeted can also do this themselves, by setting up their phone to film them while they work.<p>And if they do, and the teacher tries to punish someone who can prove they wrote the essay themselves, the teacher or the school should hopefully learn that such tools can't be trusted.</p>
]]></description><pubDate>Tue, 25 Nov 2025 08:53:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46043790</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=46043790</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46043790</guid></item><item><title><![CDATA[New comment by trashtester in "Human brains are preconfigured with instructions for understanding the world"]]></title><description><![CDATA[
<p>Certainly, and I don't think anyone really doubts this.<p>Still, people are sometimes surprised by how DNA may affect more parts of behavior than they previously thought.<p>Not necessarily by directly coding for the behavior. In many cases, the DNA will just modulate how we learn from the environment. And if the environment is fairly constant, observed behavior can correlate more strongly with DNA than one might have expected.</p>
]]></description><pubDate>Tue, 25 Nov 2025 08:38:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46043686</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=46043686</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46043686</guid></item><item><title><![CDATA[New comment by trashtester in "Human brains are preconfigured with instructions for understanding the world"]]></title><description><![CDATA[
<p>I don't think human DNA generally codes for behavior directly. Rather, DNA can code for how the brain learns from incoming data streams.<p>If the brain naturally tunes into some sources or patterns of input rather than others, it may learn very quickly from the preferred sources. And as long as those sources carry signals that are fairly invariant over time, it may seem like those signals are instinctual.<p>For instance, it may appear that humans learn to build relationships with kin (both parents and children) and friends, to build revenue streams (or gather food in more primitive societies), and to reproduce.<p>Instead, the brain may come preloaded to generate brain chemicals when detecting certain stimuli. Like oxytocin near caregivers (as children) or small fluffy things (as adults). When exposed to parents/babies, this triggers. But it can also trigger around toys, pets, adopted children, etc.<p>Friendship-seeking can be, in part, related to serotonin production in certain social situations. But it may be hijacked by social media.<p>Revenue-seeking behavior can come from dopamine stimulus in certain goal-optimizing situations. But it may also be triggered by video games.<p>And the best known part: reproductive behavior may primarily come from sexual arousal, and be hijacked by porn or birth control.<p>Each of the above may be coded by a limited number of bytes of DNA, and it's really the learning algorithm combined with the data stream of natural environments that causes specific behaviors.</p>
]]></description><pubDate>Tue, 25 Nov 2025 08:27:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46043615</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=46043615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46043615</guid></item><item><title><![CDATA[New comment by trashtester in "Google boss says AI investment boom has 'elements of irrationality'"]]></title><description><![CDATA[
<p>As technology changes over history, governments tend to emerge that reflect the part of the population that can maintain a monopoly of violence.<p>In the Classical period, it was the citizen soldiers of Rome and Greece, at least in the West. These produced the ancient republics and proto-democracies.<p>They were later replaced by professional standing armies under leaders like Alexander and the Caesars. This allowed kings and emperors.<p>In the early to mid medieval period, those were replaced by knights, elites who allowed a few men to defeat commoners many times their number. This caused feudalism.<p>Near the end of the period, pikes, crossbows, and improved logistics systems shifted power back to central governments, primarily kings/emperors.<p>Then rifles swung the pendulum all the way back to citizen soldiers between the 18th and early 20th centuries, which brought back democracies and republics.<p>Now the pendulum is going in the opposite direction. Technology and capital distribution have already effectively moved a lot of power back to an oligarchic elite.<p>And if full AGI is combined with robots more physically capable than humans, it can swing all the way. In principle, a single monarch could gain a monopoly of violence over an entire country.<p>Do not take for granted that our current understanding of what government is will stay the same.<p>Some kind of merger between capital and power seems likely, where democratic elections quickly become completely obsolete.<p>Once the police and military have been mostly automated, I don't think our current system is going to last very long.</p>
]]></description><pubDate>Wed, 19 Nov 2025 08:07:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45977006</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=45977006</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45977006</guid></item><item><title><![CDATA[New comment by trashtester in "The Gentle Singularity"]]></title><description><![CDATA[
<p>The fears from Three Mile Island and Fukushima were almost completely irrational. The death toll from those was too low to measure.<p>And the fears from Chernobyl were MOSTLY irrational.<p>The extreme fear generated by even very moderate releases from nuclear plants comes partly from the association with nuclear bombs and partly from fear of the unknown.<p>A lot of (if not most) people shut off their rational thinking when the word "nuclear" is used, even those who SHOULD understand that far more people die from coal and gas plants EVERY YEAR than have died from nuclear energy throughout history.<p>Indeed, the safety level at Chernobyl may have been atrocious. But so was the coal industry in the USSR. Even considering the USSR alone, coal caused a similar number of deaths (or a bit more) EVERY YEAR as Chernobyl caused in total [1].<p>[1] <a href="https://www.science.org/doi/pdf/10.1126/science.238.4823.11.d" rel="nofollow">https://www.science.org/doi/pdf/10.1126/science.238.4823.11....</a></p>
]]></description><pubDate>Wed, 11 Jun 2025 07:24:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44245058</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=44245058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44245058</guid></item><item><title><![CDATA[New comment by trashtester in "The Gentle Singularity"]]></title><description><![CDATA[
<p>The "next token prediction" is a distraction. That's not where the interesting part of an AI model happens.<p>If you think of the tokenization near the end as a serializer, something like turning an object model into JSON, you get a better understanding. The interesting part of an OOP program is not the JSON, but what happens in memory before the JSON is created.<p>Likewise, the interesting parts of a neural net model, whether it's LLMs, AlphaProteo, or some diffusion-based video model, happen in the steps that operate in its latent space, which is in many ways similar to our subconscious thinking.<p>In those layers, the AI models detect deeper and deeper patterns of reality. Much deeper than the surface patterns of the text, images, video, etc. used to train them. Also, many of these patterns generalize when different modalities are combined.<p>From this latent space, you can "serialize" outputs in several different ways. Text is one, image/video another. For now, the latent spaces are not general enough to do all equally well; instead, models are created that specialize in one modality.<p>I think the step to AGI does not require throwing a lot more compute at the models, but rather having them straddle multiple modalities better, in particular these:<p>- Physical world modelling at the level of Veo3 (possibly with some lessons from self-driving or robotics models for elements like object permanence and perception)
- Symbolic processing at the level of the best LLMs.
- Ability to be goal-oriented and iterate towards a goal, similar to the Alpha* family of systems.
- Optionally: optimization for the use of a few specific tools, including a humanoid robot.<p>Once all of these are integrated into the same latent space, I think we basically have what it takes to replace most human thought.</p>
]]></description><pubDate>Wed, 11 Jun 2025 06:56:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44244898</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=44244898</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44244898</guid></item><item><title><![CDATA[New comment by trashtester in "It's the end of observability as we know it (and I feel fine)"]]></title><description><![CDATA[
<p>For now, the people able to glue all the necessary ingredients together are the same ones who can understand the output if they drill into it.<p>Indeed, these may be the last ones to be fired, as they may one day become efficient enough to do everyone else's jobs.</p>
]]></description><pubDate>Wed, 11 Jun 2025 06:28:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44244761</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=44244761</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44244761</guid></item><item><title><![CDATA[New comment by trashtester in "It's the end of observability as we know it (and I feel fine)"]]></title><description><![CDATA[
<p>Ironically, finding ways to spin stories like that IS one way of taking responsibility, even if it's only responsibility for the narratives that are created after things happen.<p>Because those narratives play an important role in the next outcome.<p>The error is when you expect them to play for your team. Most people will (at best) be on the same team as those they interact with directly on a typical day. Loyalty 2-3 steps down a chain of command tends to be mostly theoretical. That's just human nature.<p>So what happens when the "#¤%" hits the fan is that those near the top take responsibility for themselves, their families, and their direct reports and managers first. That means they externalize damage to elsewhere, which would include "you and me".<p>Now, this is baseline human nature. Indeed, this is what natural empathy dictates. Because empathy as an emotion is primarily triggered by those we interact with directly.<p>Exceptions exist. Some leaders really are idealists, governed more by the theories/principles they believe in than by basic human impulses.<p>But those are the minority, and this may even be a sign of autism or something similar, where empathy for oneself and one's immediate surroundings is disabled or toned down.</p>
]]></description><pubDate>Wed, 11 Jun 2025 06:21:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44244723</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=44244723</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44244723</guid></item><item><title><![CDATA[New comment by trashtester in "Marines being mobilized in response to LA protests"]]></title><description><![CDATA[
<p>It would not collapse. But if all of them left, it would shift some purchasing power from the middle class to the working class, as working-class salaries would go up even faster than the inflation it would cause.</p>
]]></description><pubDate>Tue, 10 Jun 2025 10:31:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44234994</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=44234994</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44234994</guid></item><item><title><![CDATA[New comment by trashtester in "Marines being mobilized in response to LA protests"]]></title><description><![CDATA[
<p>Presidents may not be able to pardon themselves, but they ARE immune from prosecution through the regular legal system for any actions taken as part of the office of president.<p>The only way to go after them (given the current SCOTUS, which made the ruling above) is impeachment. And for that, the president has to do something so bad that 67 senators are willing to find the president guilty.</p>
]]></description><pubDate>Tue, 10 Jun 2025 10:26:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44234955</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=44234955</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44234955</guid></item><item><title><![CDATA[New comment by trashtester in "I genuinely don't understand why some people are still bullish about LLMs"]]></title><description><![CDATA[
<p>Language models are closing the remaining gaps at an amazing rate. A few gaps still exist, but consider what has happened just in the last year, and extrapolate 2-3 years out....</p>
]]></description><pubDate>Fri, 28 Mar 2025 12:38:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=43504635</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=43504635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43504635</guid></item><item><title><![CDATA[New comment by trashtester in "I genuinely don't understand why some people are still bullish about LLMs"]]></title><description><![CDATA[
<p>If the training data had a lot of humans saying "I don't know", then the LLMs would too.<p>Humans don't, and LLMs are essentially trained to resemble most humans.</p>
]]></description><pubDate>Fri, 28 Mar 2025 12:21:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=43504483</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=43504483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43504483</guid></item><item><title><![CDATA[New comment by trashtester in "I genuinely don't understand why some people are still bullish about LLMs"]]></title><description><![CDATA[
<p>Authority, yes; accountability, not so much.<p>Basically at the level of other publishers, meaning they can be as biased as MSNBC or Fox News, depending on who controls them.</p>
]]></description><pubDate>Fri, 28 Mar 2025 12:19:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=43504458</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=43504458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43504458</guid></item><item><title><![CDATA[New comment by trashtester in "I genuinely don't understand why some people are still bullish about LLMs"]]></title><description><![CDATA[
<p>Wikipedia is one of the better sources out there for topics that are not seen as political.<p>For politically loaded topics, though, Wikipedia has become increasingly biased towards one side over the past 10-15 years.</p>
]]></description><pubDate>Fri, 28 Mar 2025 08:08:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43502774</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=43502774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43502774</guid></item><item><title><![CDATA[New comment by trashtester in "Has the decline of knowledge work begun?"]]></title><description><![CDATA[
<p>Somebody, or SOMETHING.<p>There will not be much work that cannot be done by Figure, Optimus, Atlas, Claude, Grok or GPT by 2035.</p>
]]></description><pubDate>Thu, 27 Mar 2025 16:14:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43495113</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=43495113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43495113</guid></item><item><title><![CDATA[New comment by trashtester in "Has the decline of knowledge work begun?"]]></title><description><![CDATA[
<p>With all due respect, this attitude typically comes with age. I see it in myself, too (I'm over 50).<p>You're right that an important reason it's hard to replace those 30+ year old systems is that the current devs are not necessarily at the same level as those who built the originals. But at least in part, this is due to survivorship bias.<p>Plenty of the systems that were built 30-50 years ago HAVE been shut down, and those that were not tend to be the most useful ones.<p>A more important tell, though, is that you see traditional IT systems as the measuring stick for progress. If you review history, you'll see that what is treated as the measuring stick changes over time.<p>For instance, in the 50s and 60s, the speed of cars and airplanes was a key measuring stick. Today, we don't even HAVE planes in operation that match the SR-71 or Concorde, and car improvements are more incremental and practical than spectacular.<p>In the 70s and into the 80s, space exploration and flying cars had that role. We still don't have flying cars, and very little happened in space from 1985 until Elon (who grew up in that era) resumed it, based on his dream of going to Mars.<p>In the 90s, as Gen X'ers (who had grown up with C64s/Amigas) came of age, computers (PCs) were the rage. But over the last 20 years, little has happened with the hardware (and traditional software) except that the number of cores per socket has been going up.<p>In the 2000s, mobile phones were the New Thing, alongside apps like social media, Uber, etc. Since 2015, that has been pretty slow, too.<p>Every generation tends to devalue the breakthroughs that came after they turned 30.<p>Boomers were not impressed by computers. Many loved their cars, but remained nostalgic about the old ones.<p>X'ers would often stay with PCs as the millennials switched to phones only. Some X'ers may still be a bit disappointed that there are no flying cars, no Moon base, and no Mars colony yet (though Elon, an X'er, is working on those).<p>And now, some millennials do not seem to realize that we're in the middle of the greatest revolution in human history (or pre-history, for that matter).<p>And developers (both X'ers and millennials) in particular seem to resist it more than most. They want to keep their dependable von Neumann computing paradigm. The skills they have been building up over their careers. The source of their pride and their dignity.<p>They don't WANT AI to be the next paradigm. Instead, they want THEIR paradigm to improve even further. They hold on to it as long as they can get away with it. They downplay how revolutionary it is.<p>The fact, though, is that every kid today walks around with R2-D2 and C-3PO in their pockets. And production of physical robots has gone exponential, too. A few more years at this rate, and it will be everywhere.<p>Walking around today, 2025 isn't all that different from 2015. But 2035 may well be as different from 2025 as 2025 is from 1925.<p>And you say the West is declining?<p>Well, for Europe (including Russia), this is true. Apart from DeepMind (London), very little happens in Europe now.<p>Also, China is a competitor now. But so was the USSR a couple of generations ago, especially with Sputnik.<p>The US is still in the leadership position, though, if only barely. China is catching up, but it's still behind in many areas.<p>Just like with Sputnik, the US may need to pull itself together to maintain the lead.<p>But if you think all development has ended, you're like a boomer in 2010, using planes and cars as the measuring stick and concluding that nothing significant has happened since 1985.</p>
]]></description><pubDate>Thu, 27 Mar 2025 16:05:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43495038</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=43495038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43495038</guid></item><item><title><![CDATA[New comment by trashtester in "Extreme poverty in India has dropped to negligible levels"]]></title><description><![CDATA[
<p>Try to live below that poverty line for a few months, and I'm pretty sure you will understand it.</p>
]]></description><pubDate>Tue, 11 Mar 2025 13:31:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43332269</link><dc:creator>trashtester</dc:creator><comments>https://news.ycombinator.com/item?id=43332269</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43332269</guid></item></channel></rss>