<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: aithrowawaycomm</title><link>https://news.ycombinator.com/user?id=aithrowawaycomm</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 01 May 2026 12:14:36 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=aithrowawaycomm" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by aithrowawaycomm in "Japanese scientists create new plastic that dissolves in saltwater overnight"]]></title><description><![CDATA[
<p>I don't understand your ridiculous pedantry! I am talking about DALL-E and Stable Diffusion. I am not talking about other front ends to these services, nor did I dispute that your example deserved copyright protection. Invoke is very, very different from plain text-to-image generation, WHICH IS WHAT I WAS TALKING ABOUT.<p>I think it's best if I log off and ignore your replies.</p>
]]></description><pubDate>Sat, 29 Mar 2025 04:02:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43512616</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43512616</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43512616</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Japanese scientists create new plastic that dissolves in saltwater overnight"]]></title><description><![CDATA[
<p>No, in general you cannot copyright them:<p><a href="https://www.reuters.com/legal/ai-created-images-lose-us-copyrights-test-new-technology-2023-02-22/" rel="nofollow">https://www.reuters.com/legal/ai-created-images-lose-us-copy...</a><p><a href="https://www.reuters.com/legal/litigation/us-copyright-office-denies-protection-another-ai-created-image-2023-09-06/" rel="nofollow">https://www.reuters.com/legal/litigation/us-copyright-office...</a><p><a href="https://www.reuters.com/world/us/us-appeals-court-rejects-copyrights-ai-generated-art-lacking-human-creator-2025-03-18/" rel="nofollow">https://www.reuters.com/world/us/us-appeals-court-rejects-co...</a><p>See the other reply for a half-counterexample, but the major difference is that the specific software is more like generative Photoshop, and the final image involved a lot of manual human work. Simply tweaking a prompt is not enough - again, you can get copyright for <i>curation</i>, just not the images.<p>Of course AI can't be credited with copyright - neither can a random-character generator, even if it monkeys its way into a masterpiece. You need legal standing to sue or be sued in order to hold copyright.</p>
]]></description><pubDate>Sat, 29 Mar 2025 03:58:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43512601</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43512601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43512601</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Japanese scientists create new plastic that dissolves in saltwater overnight"]]></title><description><![CDATA[
<p>I am specifically talking about DALL-E or Stable Diffusion; your link describes something very different. The point was the "Google Images" analogy, which applies to 99.999% of AI art - this is an exception.</p>
]]></description><pubDate>Sat, 29 Mar 2025 03:43:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43512533</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43512533</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43512533</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Japanese scientists create new plastic that dissolves in saltwater overnight"]]></title><description><![CDATA[
<p>"AI art generators enable the creation of ignorant and lazy illustrations by outsourcing understanding to an idiot robot."<p>"Yes, but is it not the intent of the artist to be ignorant and lazy?"<p>It is possible to repeatedly iterate AI art gen and get what you want, but that's not what happened here. And even so, it's not at all the same thing as drawing a picture: "iterating on what you want" is equivalent to <i>curating</i> art, not <i>creating</i> it. In the US you can copyright curation and that extends to curation of AI art - the US Copyright Office correctly said that tweaking prompts is the same thing as tweaking a Google Images search string for online image curation. But you can't copyright the actual AI-gen pictures, they are automatically public domain (unless they infringe someone else's copyright).</p>
]]></description><pubDate>Sat, 29 Mar 2025 00:56:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=43511712</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43511712</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43511712</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Preschoolers can reason better than we think, study suggests"]]></title><description><![CDATA[
<p>I am unable to think of a single animal behavior popularly ascribed to intelligence but in fact explainable by rote instinct. Do you have an example?<p>OTOH there are plenty of animal behaviors that can only be explained by intelligence: seeing-eye dogs perform a task which is economically useful for humans yet far beyond the ability of AI (even if the robotics mech eng issues are resolved). But it also doesn't really make sense that one of my cats is "instinctually" able to understand my words while the other cat is "instinctually" able to outsmart me when we play with toys. The more sensible explanation is that cats are intelligent and intellectually diverse.</p>
]]></description><pubDate>Fri, 28 Mar 2025 14:44:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43506067</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43506067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43506067</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Elon Musk Pressured Reddit CEO Steve Huffman about Moderation"]]></title><description><![CDATA[
<p>The problem is that it also included "doxxing" DOGE employees, even though they are significant public figures and their anonymity is almost certainly illegal. Protecting the identity of government employees who are more powerful than cabinet secretaries is plain abuse of ToS. (I will add that I don't believe most of the alleged death threats were real. I think Musk's concern was the "doxxing" and the death threats were a pretext.)</p>
]]></description><pubDate>Fri, 28 Mar 2025 14:01:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=43505549</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43505549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43505549</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Patience is a coping strategy, not a virtue"]]></title><description><![CDATA[
<p><p><pre><code>  Specifically, better scores on the measures of impulsivity, emotional awareness and flexibility, and also the personality trait of agreeableness were all linked to higher patience scores.... But these results do suggest that patience is not so much virtue as a method to help us to deal with frustrations — and that some of us are better equipped to employ this coping mechanism than others.
</code></pre>
The flaw in this argument is that non-impulsiveness, emotional awareness, and agreeableness are all considered virtues! And I really don’t like the implicit suggestion that impatient people are just born that way and patient people won the luck of the draw: I was far more impatient before I started getting mental health treatment.</p>
]]></description><pubDate>Fri, 28 Mar 2025 10:31:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43503660</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43503660</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43503660</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Tracing the thoughts of a large language model"]]></title><description><![CDATA[
<p>> Here, we modified the part of Claude’s internal state that represented the "rabbit" concept. When we subtract out the "rabbit" part, and have Claude continue the line, it writes a new one ending in "habit", another sensible completion. We can also inject the concept of "green" at that point, causing Claude to write a sensible (but no-longer rhyming) line which ends in "green". This demonstrates both planning ability and adaptive flexibility—Claude can modify its approach when the intended outcome changes.<p>This all seems explainable via shallow next-token prediction. Why is it that subtracting the concept means the system can adapt and create a new rhyme instead of forgetting about the -bit rhyme, but overriding it with green means the system cannot adapt? Why didn't it say "green habit" or something? It seems like Anthropic is having it both ways: Claude continued to rhyme after deleting the concept, which demonstrates planning, but also Claude coherently filled in the "green" line despite it not rhyming, which...also demonstrates planning? Either that concept is "last word" or it's not! There is a tension that does not seem coherent to me, but maybe if they had n=2 instead of n=1 examples I would have a clearer idea of what they mean. As it stands it feels arbitrary and post hoc. More generally, they failed to rule out (or even consider!) that well-tuned-but-dumb next-token prediction explains this behavior.</p>
]]></description><pubDate>Thu, 27 Mar 2025 21:05:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43498169</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43498169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43498169</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Tracing the thoughts of a large language model"]]></title><description><![CDATA[
<p>I struggled reading the papers - Anthropic’s white papers remind me of Stephen Wolfram, where it’s a huge pile of suggestive empirical evidence, but the claims are extremely vague - no definitions, just vibes - the empirical evidence seems selectively curated, and there’s not much effort spent building a coherent general theory.<p>Worse is the impression that they are begging the question. The rhyming example was especially unconvincing since they didn’t rule out the possibility that Claude activated “rabbit” simply because it wrote a line that said “carrot”; later Anthropic claimed Claude was able to “plan” when the concept “rabbit” was replaced by “green,” but the poem fails to rhyme because Claude arbitrarily threw in the word “green”! What exactly was the plan? It looks like Claude just hastily autocompleted. And Anthropic made zero effort to reproduce this experiment, so how do we know it’s a general phenomenon?<p>I don’t think either of these papers would be published in a reputable journal. If these papers are honest, they are incomplete: they need more experiments and more rigorous methodology. Poking at a few ANN layers and making sweeping claims about the output is not honest science. But I don’t think Anthropic is being especially honest: these are pseudoacademic infomercials.</p>
]]></description><pubDate>Thu, 27 Mar 2025 18:26:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43496485</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43496485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43496485</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Waymos crash less than human drivers"]]></title><description><![CDATA[
<p>Even in self-driving, Tesla's behavior proves there is a market for cars that are programmed to speed and roll through stop signs. Waymos are safer than the average human, but the average human also intentionally chooses a strategy that trades risk for speed. Indeed, Waymo trips on average take about 2x as long as Ubers: <a href="https://futurism.com/the-byte/waymo-expensive-slower-taxis" rel="nofollow">https://futurism.com/the-byte/waymo-expensive-slower-taxis</a><p>What happens if an upstart self-driving competitor promises human-level ETAs? Is a speeding Waymo safer than a speeding human?</p>
]]></description><pubDate>Wed, 26 Mar 2025 22:04:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43487951</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43487951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43487951</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Airline Demand Between Canada and United States Collapses, Down 70%+"]]></title><description><![CDATA[
<p>I am not disputing any of that, nor am I trying to put a positive spin on anything coming out of Trump. My only point was that it's misinformation to say that the US is unilingual by policy - that only became partially true in March 2025 via a toothless (and blatantly unconstitutional) executive order.<p>And it's not "soft acceptance of Spanish here and there," all levels of government are legally required to print official documents in whatever languages their community speaks; cities usually have ballots in dozens of languages. This is a constitutional requirement, bolstered by the Voting Rights Act, and Trump has not yet done enough damage to make those legal requirements go away via diktat.</p>
]]></description><pubDate>Wed, 26 Mar 2025 19:59:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43486494</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43486494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43486494</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Airline demand between Canada and United States collapses, down 70%+"]]></title><description><![CDATA[
<p>> a country which is unilingual English by policy?<p>To be clear the US only has a unilingual policy because Trump signed an executive order this year (and I believe even this SCOTUS would strike the order down as unconstitutional if anyone had standing to sue over it).<p>The US has always been <i>de facto</i> unilingual, but <i>de jure</i> we don't have an official language since Trump has no legal authority to establish that. The "policy" is political and legally empty.</p>
]]></description><pubDate>Wed, 26 Mar 2025 19:46:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=43486316</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43486316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43486316</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "The Impact of Generative AI on Critical Thinking [pdf]"]]></title><description><![CDATA[
<p>This would be a valid POV if there was any solid evidence that LLMs truly increased worker productivity or reliability - at best it is a mixed bag. To stretch the food analogy, it seems like LLMs could be pure corn syrup, without any of the disease-resistant fruits or unnaturally plump chickens that actually make modern agriculture worthwhile.<p>Or, since LLMs seem to be <i>addictive</i>, it's like getting rid of the spinach farms and replacing them with opium poppies. (I really hate this tech.)</p>
]]></description><pubDate>Wed, 26 Mar 2025 19:38:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43486193</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43486193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43486193</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Deciphering language processing in the human brain through LLM representations"]]></title><description><![CDATA[
<p>That's 100% false, dogs and pigeons can obviously think, and it is childish to suppose that their thoughts are a sequence of woofs or coos. Trying to make an AI that thinks like a human without being able to think like a chimpanzee gives you reasoning LLMs that can spit out proofs in algebraic topology, yet still struggle with out-of-distribution counting problems which frogs and fish can solve.</p>
]]></description><pubDate>Wed, 26 Mar 2025 07:15:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=43479623</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43479623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43479623</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Peano's Axioms"]]></title><description><![CDATA[
<p>Maybe a roundabout answer to your question, but Peano's axioms are equiconsistent with many finite set theories (even ZFC without the axiom of infinity), and I do think philosophically it makes more sense to say weak axiomatic set theory + predicate calculus form the building blocks of arithmetic[1]. The idea of "number" as conceived by Frege is an equivalence class on finite sets: A ~ B <-> there is a bijection, which is in fact a good way of explaining "counting with fingers" as an especially primitive building block of arithmetic:<p><pre><code>  {index, middle, ring} ~ 
  {apple, other apple, other other apple} ~
  {1, 2, 3}
</code></pre>
as representatives of the class "3", and so on; predicates would be "don't include overripe apples when you count", etc. Then additions are disjoint unions and so on, and the Peano axioms follow as a consequence.<p>[1] In my view the Peano axioms are the <i>Platonic ideal</i> of arithmetic, after the cruft of bijections and whatnot is tossed away. I agree this is splitting hairs.</p>
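For concreteness, here is a small Python sketch of that bijection-based picture (my own illustration, not part of the original comment; the set names are made up). For finite sets, "there is a bijection A -> B" reduces to equal cardinality, and addition becomes disjoint union:

```python
def equinumerous(a, b):
    """A ~ B iff there is a bijection A -> B; for finite sets this
    reduces to comparing cardinalities."""
    return len(set(a)) == len(set(b))

fingers = {"index", "middle", "ring"}
apples = {"apple", "other apple", "other other apple"}

# A predicate restricts what gets counted ("don't include overripe apples"):
apples_to_count = {x for x in apples if "overripe" not in x}

# Both sets are representatives of the equivalence class "3":
assert equinumerous(fingers, apples_to_count)

def add(a, b):
    """Addition via disjoint union: tag elements by origin so that
    overlapping sets still contribute separately, |A| + |B| = |A ⊔ B|."""
    return {(0, x) for x in a} | {(1, y) for y in b}

# 3 + 1 = 4: the union of a "3"-representative and a "1"-representative
# is equinumerous with any "4"-representative.
assert equinumerous(add(fingers, {"pinky"}), {1, 2, 3, 4})
```

The tagging in `add` is what makes the union disjoint; without it, `fingers | {"index"}` would undercount, which is exactly why the set-theoretic story uses disjoint unions rather than plain unions.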
]]></description><pubDate>Mon, 24 Mar 2025 20:16:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43464983</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43464983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43464983</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "The Quantum Apocalypse Is Coming. Be Very Afraid"]]></title><description><![CDATA[
<p>Do these probabilities mean anything at all?<p><pre><code>  When Mosca and his colleagues surveyed cybersecurity experts last year, the forecast was sobering: a one-in-three chance that Q-Day happens before 2035. And the chances it has already happened in secret? Some people I spoke to estimated 15 percent—about the same as you’d get from one spin of the revolver cylinder.
</code></pre>
This whole idea - that you can rate your confidence from 1 to 100, chant "Bayes," and it becomes a probability - is endlessly frustrating. Sigma-additivity, shmiga-additivity!</p>
]]></description><pubDate>Mon, 24 Mar 2025 19:59:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43464835</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43464835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43464835</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Curtis Yarvin Says Democracy Is Done. Powerful Conservatives Are Listening"]]></title><description><![CDATA[
<p>I have been told this guy is an irascible reactionary genius for like 15 years - he says so himself: "No. I’m an outsider, man. I’m an intellectual." Yet once again I see somebody with the sophistication of a teenager:<p><pre><code>  If you look at the administration of Washington, what is established looks a lot like a start-up. It looks so much like a start-up that this guy Alexander Hamilton, who was recognizably a start-up bro, is running the whole government — he is basically the Larry Page of this republic.

  [...]

  Understanding why Hitler was so bad, why Stalin was so bad, is essential to the riddle of the 20th century. But I think it’s important to note that we don’t see for the rest of European and world history a Holocaust. You can pull the camera way back and basically say, Wow, since the establishment of European civilization, we didn’t have this kind of chaos and violence. 
</code></pre>
This sounds like something that Ricken from <i>Severance</i> wrote in his self-help book:<p><pre><code>  It’s basically just a greater openness of mind and a greater ability to look around and say: We just assume that our political science is superior to Aristotle’s political science because our physics is superior to Aristotle’s physics. What if that isn’t so?
</code></pre>
And I laughed out loud at this, he's just a ridiculous idiot:<p><pre><code>  When I look at the status of women in, say, a Jane Austen novel, which is well before Enfranchisement, it actually seems kind of OK.</code></pre></p>
]]></description><pubDate>Mon, 24 Mar 2025 08:44:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43458680</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43458680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43458680</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "The Humans Building AI Scientists"]]></title><description><![CDATA[
<p>I will never stop being amazed at AI folks' childish views of animal cognition:<p>> A lot of your tools reference crows. What’s up with that?<p>> White: When I got started in this space around October 2022, I was red-teaming with GPT4. Around the same time, a paper called “Language Models are Stochastic Parrots” was circulating, and people were debating whether these models were just regurgitating their training data or truly reasoning. The analogy is appealing, and parrots are definitely known for mimicking speech. But what we saw was that pairing these language models with external tools made them much more accurate — a bit like crows, which can use tools to solve puzzles.<p>> In the work that led to ChemCrow,1 for instance, we found that giving the large language model access to calculators or chemistry software made its answers much better. So we kind of retconned a little bit to make “Crows” be agents that can interact with tools using natural language.<p>This is incredibly insulting to crows, who can spontaneously create tools and use bizarre man-made tools with no training. And when crows use tools for problem solving in the lab, the tools are not "solve the problem for me" like a calculator, they require much more creative thinking. What White really means - whether he knows it or not - is that crows are known for being intelligent and he wants to use this for marketing purposes.<p>I don't think anyone alive today will live to see an AI as smart as a crow, in no small part because AI researchers and investors refuse to take animal intelligence seriously.</p>
]]></description><pubDate>Sat, 22 Mar 2025 13:16:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43445500</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43445500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43445500</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Oxygen discovered in most distant known galaxy"]]></title><description><![CDATA[
<p>I didn't say "dogmatically believe," I said "accepted as plausible" - Nick Bostrom built an academic career out of this nonsense! Effective altruism people use these theories to justify their advocacy. I would guess <i>most</i> tech people agree the paperclip maximizer is a serious concern.</p>
]]></description><pubDate>Thu, 20 Mar 2025 16:32:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43425419</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43425419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43425419</guid></item><item><title><![CDATA[New comment by aithrowawaycomm in "Artificial Intelligence: Foundations of Computational Agents"]]></title><description><![CDATA[
<p>The problem is they also put "worms" in the same category, and they aren't designed by humans to do anything! Why is it that the natural laws of a worm responding to Earth's environment are distinct from the natural laws of Jupiter responding to the solar system's environment? I suppose because of complexity. But then why is a thermostat different from Jupiter despite being considerably simpler? I suppose because it was designed by humans and can be controlled. But then what about the worm, which is just as natural as Jupiter? "Thermostat" is especially problematic because a cheap thermostat is very simple to describe completely as a thermo-electric balance equation: it is certainly simpler to describe than an irregular ball rolling down an irregular hill. Yet apparently the thermostat is an agent and the ball is not.<p>The definition is just incoherent! "Sometimes an agent is deterministic and in this case the term only includes man-made tools, other times an agent is an apparently nondeterministic automaton and in this case we can include natural life." It only allows "agent" to be applied ad hoc, and in particular blurs the distinction between "nondeterministic tool" and "lifeform" in ways that are scientifically unjustifiable. The only people this pointless word game benefits are liars like Sam Altman and Mustafa Suleyman; if people are well-intentioned then these definitions bring nothing but confusion.</p>
]]></description><pubDate>Thu, 20 Mar 2025 16:20:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43425266</link><dc:creator>aithrowawaycomm</dc:creator><comments>https://news.ycombinator.com/item?id=43425266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43425266</guid></item></channel></rss>