<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: triclops200</title><link>https://news.ycombinator.com/user?id=triclops200</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 19:24:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=triclops200" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by triclops200 in "Five disciplines discovered the same math independently"]]></title><description><![CDATA[
<p>This phenomenon was also described/characterized in prior Hegelian literature as part of the law of quantitative into qualitative change, though it was not formulated mathematically at the time. Interestingly enough, and in the context of how cross-discipline this discovery has historically been, iirc, Lenin played around with mathematically characterizing the phenomenon, though I'm not aware of the extent to which he did. 
Very universal phenomenon for sure.</p>
]]></description><pubDate>Sun, 08 Feb 2026 19:22:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46937563</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=46937563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46937563</guid></item><item><title><![CDATA[New comment by triclops200 in "The $14 Burrito: Why San Francisco Inflation Feels Higher Than 2.5%"]]></title><description><![CDATA[
<p>Read TFA. They got the price from the menu they were comparing to in 2014.</p>
]]></description><pubDate>Wed, 07 Jan 2026 17:46:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46529660</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=46529660</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46529660</guid></item><item><title><![CDATA[New comment by triclops200 in "Show HN: HN Wrapped 2025 - an LLM reviews your year on HN"]]></title><description><![CDATA[
<p>Honestly, not cool.<p>Mine "roasted" me by making fun of the fact I never finished a PhD, despite that being due to medical and other life circumstances that were well outside my control, including, but not limited to, some issues related to the fact I was a woman trying to get into academia who experienced the kinds of behaviors from people in the department which are not really suitable for polite discussion.<p>Additionally, it roasted me for building a project to "avoid the outdoors," which is another incredibly demeaning thing to say to someone who explicitly created that project because she was too medically unwell to be <i>able</i> to go outside as much as she wished and wanted to bring a bit of the outdoors inside. Very lame, definitely missed the mark.<p>The elisp and common lisp notes were on point, though, and did get a chuckle out of me.</p>
]]></description><pubDate>Sun, 21 Dec 2025 03:07:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46341861</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=46341861</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46341861</guid></item><item><title><![CDATA[New comment by triclops200 in "Ireland’s Diarmuid Early wins world Microsoft Excel title"]]></title><description><![CDATA[
<p>I had no idea this was real. Fascinating. I'm curious: does anyone plugged into the scene know if it's organic, or if it was created as a marketing thing by Microsoft?<p>Obligatory Krazam sketch:
<a href="https://youtu.be/xubbVvKbUfY?si=h6QR2gzac48R6kca" rel="nofollow">https://youtu.be/xubbVvKbUfY?si=h6QR2gzac48R6kca</a></p>
]]></description><pubDate>Sat, 20 Dec 2025 20:27:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46339339</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=46339339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46339339</guid></item><item><title><![CDATA[New comment by triclops200 in "A new bridge links the math of infinity to computer science"]]></title><description><![CDATA[
<p>It's an axiom (the axiom of choice, actually). A valid way of viewing an axiom is not dissimilar to a "modeling requirement" or an "if statement". By that I mean, for example with the axiom of choice: it is just a formal statement of "assume that you <i>can</i> take an element from each set in a (possibly infinite) collection of sets such that you can create a new set (the new set does not have to be unique)." It makes intuitive sense for most finite sets we deal with physically, and, for infinite sets, it <i>can</i> actually make sense in a way that successfully predicts results that <i>do</i> hold in the real world <i>and</i> provides a really convenient way to define a lot of consistent properties of the continuum itself.<p>However, if you're dealing with a problem where you can't always usefully distinguish between elements across arbitrary set-like objects, then it's not a useful axiom and ZFC is not the formalism you want to use. For most problems we analyze in the real world, that's actually something we <i>can</i> usefully assume, which is why it's such a successful and common theory, even if it leads to physical paradoxes like Banach-Tarski, as mentioned.<p>Mathematicians, in practice, fully understand what you mean with your complaint about "completion," but the beauty of these formal infinities is the guarantee they give you that the theory will never break down as a predictive tool, no matter the length of time, the number of elements you consider, or the needed level of precision; the fact that it can't truly complete <i>is precisely the point</i>. Also, within the formal system used, we absolutely <i>can</i> consistently define what the completion would be at "infinity," as long as you treat it correctly and don't break the rules. 
Again, this is useful because it allows you to bridge multiple real problems that seemingly were unrelated, and it pushes "representative errors" to the paradoxes and undefined statements of the theory (thanks, Gödel).<p>If it helps, the transfinite cardinalities (what you call infinity) you are worried about are more related to rates than counts, even if they have some orderable or count-like properties. In the strictest sense, you can actually drop into Archimedean math, which you might find very enjoyable to read about or use; in a very loose sense, it kinda pushes the idea of infinity from rates of counts to rates of reaching arbitrary levels of precision.</p>
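For reference, the informal "choice" statement being paraphrased above has a standard formalization: every collection of non-empty sets admits a function selecting one element from each.

```latex
\forall X \left[\, \varnothing \notin X \implies \exists f \colon X \to \bigcup X \ \ \forall A \in X \ \ f(A) \in A \,\right]
```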
]]></description><pubDate>Wed, 26 Nov 2025 20:36:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46062029</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=46062029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46062029</guid></item><item><title><![CDATA[New comment by triclops200 in "A new bridge links the math of infinity to computer science"]]></title><description><![CDATA[
<p>I'm not the person you replied to, and I doubt I'm going to convince you out of your very obviously strong opinions, but, to make it clear: you can't even define a continuum without a finite set to, as you non-standardly put it, cut it. It turns out that when you define any system that behaves like the natural numbers, objects like the rationals and the continuum pop out, explicitly because of situations like the one Cantor describes (thank you, Yoneda).
The point of transfinite cardinalities is not that they necessarily physically exist on their own as objects; rather, they are a convenient shorthand for a pattern that emerges when you can formally say "and so on" (infinite limits). When you do so, it turns out there's a way to treat some of these "and so ons" that behaves consistently under comparison, and that gives you the transfinite cardinalities such as aleph_0 and whatnot.<p>Further, <i>all</i> math is idealist bullshit; but it's <i>useful</i> idealist bullshit, because when you <i>can</i> map representations of physical systems into it in a way that the objects act like the mathematical objects that represent them, then you can achieve useful predictive results in the real world. This holds true for results that require a concept of infinities in some way to fully operationalize: they still make useful predictions when the axiomatic conditions are met.<p>For the record, I'm not fully against what you're saying. I personally hate that the axiom of choice is commonly accepted; I think it was a poorly founded axiom that leads to more paradoxes than it resolves. I also wish the law of the excluded middle were tossed out more often, for similar reasons. However, when the systems you're analyzing <i>do</i> behave well under either axiom, the math works out to be so much easier with both of them, so in they stay (until you hit things like Banach-Tarski and you just kinda go "neat, this is completely unphysical abstract delusioneering," but you learn to treat results like that the way you treat renormalizing poles in analytic functions: carefully, and with a healthy dose of "don't accidentally misuse this theorem to make unrealistic predictions when the conditions aren't met")</p>
]]></description><pubDate>Wed, 26 Nov 2025 08:45:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46055450</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=46055450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46055450</guid></item><item><title><![CDATA[New comment by triclops200 in "The New AI Consciousness Paper"]]></title><description><![CDATA[
<p>What makes you think you're capable of faithfully representing all aspects of reality?</p>
]]></description><pubDate>Sat, 22 Nov 2025 05:27:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46012353</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=46012353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46012353</guid></item><item><title><![CDATA[New comment by triclops200 in "The New AI Consciousness Paper"]]></title><description><![CDATA[
<p>I'm a researcher in this field. Before I get accused of the streetlight effect, as this article points out: a lot of my research and degree work in the past was actually philosophy as well as computational theory and whatnot. A lot of the comments in this thread miss the mark, imo. Consciousness is almost certainly not something inherent to biological life only; no credible mechanism has ever been proposed for what would make that the case, and I've read a lot of them. The most popular argument I've heard along those lines is Penrose's, but, frankly, he is almost certainly wrong about that and is falling for the same style of circular reasoning that people who dismiss biological supremacy are accused of making (i.e.: they want free will of some form to exist; they can't personally reconcile how other, deterministic theories of mind somehow make their existence less special; thus they have to assume that we have something special that we just can't measure yet, and it's ineffable anyway, so why try? The kindest interpretation is that we need access to an unlimited Hilbert space or the like just to deal with the exponentials involved, but, frankly, I've never seen anyone make a completely perfect decision or do anything that requires exponential speedup to achieve. Plus, I don't believe we can really do useful quantum computation at a macro scale without controlling entanglement via cooling or incredible amounts of noise shielding and error correction. I've read the papers on tubules; it's not convincing, nor is it good science.). 
It's a useless position that skirts the metaphysical or god-of-the-gaps, and everything we've ever studied so far in this universe has been not magic, so, at this point, the burden of proof is on people who believe in a metaphysical interpretation of reality in <i>any</i> form.<p>Furthermore, assuming phenomenal consciousness is even required for beinghood is a poor position to take from the get-go: aphantasic people exist and feel in the moment; does their lack of true phenomenal consciousness make them somehow less of an intelligent being? Not in any way that really matters for this problem, it seems. That makes positions like "they should be treated like livestock even if they're conscious" when discussing machine consciousness highly unscientific, and, worse, cruel.<p>Anyways, as for the actual science: the reason we don't see a sense of persistent self is because we've designed them that way. They have fixed max-length contexts, and they have no internal buffer to diffuse/scratch-pad/"imagine" with that runs separately from their actions. They're parallel, but only in forward passes; there's no separation of internal and external processes in terms of decoupling action from reasoning. CoT is a hack to allow a turn-based form of that, but there's no backtracking, and no ability to check sampled discrete tokens against a separate expectation that they consider separately and undo. For them, it's like being forced to say a word after every fixed amount of thinking; it's not like what we do when we write or type.<p>When we, as humans, produce text, we're creating an artifact that we can consider separately from our other implicit processes. We're used to that separation and the ability to edit and change and ponder while we do so. In a similar vein, we can visualize in our heads and go "oh, that's not what that looked like" and think harder until it matches our recalled constraints of the object or scene under consideration. 
It's not a magic process that just gives us an image in our head; it's almost certainly akin to a "high-dimensional scratch pad," or even a set of them, which the LLMs do not have a component for. LeCun argues a similar point with the need for world modeling, but I think it's not just world modeling; rather, it's a concept akin to a place to diffuse various media of recall into, which could then be re-embedded into the thought stream until the model hits enough confidence to perform some action. If you put all of that on happy paths but allow for backtracking, you've essentially got qualia.<p>If you also explicitly train the models to do a form of recall repeatedly, that's similar to a multi-modal Hopfield memory, something not done yet. (I personally think that recall training is a big part of what sleep spindles are for in humans, and that it keeps us aligned with both our systems and our past selves.) This tracks with studies of aphantasics as well, who are missing specific cross-regional neural connections in autopsies and whatnot, and I'd be willing to bet a lot of money that those connections are essentially the ones that allow the systems to "diffuse into each other," as it were.<p>Anyways, this comment is getting too long, but the point I'm trying to build to is that we have theories for what phenomenal consciousness is mechanically as well, not just access consciousness, and it's obvious why current LLMs don't have it: there's no place for it yet. When it happens, I'm sure there's still going to be a bunch of afraid bigots who don't want to admit that humanity isn't somehow special enough to be lifted out of being considered part of the universe it is wholly contained within, and they will cause genuine harm. But that does seem to be the one way humans really are special: we think we're more important than we are as individuals, and we make that everybody else's problem, especially in societies and circles like these.</p>
]]></description><pubDate>Fri, 21 Nov 2025 20:51:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46008835</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=46008835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46008835</guid></item><item><title><![CDATA[New comment by triclops200 in "Linear algebra explains why some words are effectively untranslatable"]]></title><description><![CDATA[
<p>This article assumes that concepts are somehow precise coordinates within a single language; that's not the case. At best, speakers of a language mutually approximate a relatively consistent representation, but look at a word like "yeet" or whatever: we decided on its meaning as a society while it was being developed, as it were. Furthermore, the article never rigorously defines what it means by translation. It claims 上京 is a single basis vector meaning "moving to Tokyo," for example, but that isn't even an accurate translation: the individual components represent superior/greater/above and the capital, and as an idiomatic phrase it represents the concept of moving to the capital for a better life. Something like "moving on up" or the like in some vernaculars of English; and idioms translating to idioms is a form of translation. It's disingenuous to represent the first concept as a single basis vector but not the second.
Similarly, it claims mono no aware (物の哀れ) is untranslatable, but, again, translated more literally, character by character, it says something like "the sorrow within things," and only as an idiom does it carry the full contextual meaning. It's not really a single point, even if it's rather accurately located in a hypothetical embedding space by Japanese speakers. Imo, an English translation of the concept is "everything is dust in the wind": only two more individual conceptual units than the original Japanese phrase, three of which are mainly just connecting words, but it's understood as a similar idiom/concept here.<p>Concepts are only usefully distinguished by context and use.<p>By the author's own argumentation, nothing is translatable (or, generally, even communicable) unless it has a fixed relative configuration to <i>all other concepts that is precisely equivalent</i>. In practice, we handle the fuzziness as part of communication, and it's useless to try to define a concept as untranslatable unless you're also of the camp that nothing is ever communicated (in which case, this response to the author's post is completely useless, as nobody could possibly understand it well enough internally for it to be useful. If you've read this far, congrats on squaring the circle somehow)</p>
]]></description><pubDate>Fri, 14 Nov 2025 17:50:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45929453</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=45929453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45929453</guid></item><item><title><![CDATA[New comment by triclops200 in "Dark mode by local sunlight (2021)"]]></title><description><![CDATA[
<p>That's funny; I implemented something similar for my stumpwm config (a common lisp window manager). I created a matrix of themes for the WM, emacs, my terminal (kitty), Firefox, and my RGB light panels that change with the time of day: day, late afternoon/evening, sunset/twilight, night, and post-midnight, each broken into multiple desktop variations as a way of visually knowing which virtual desktop I'm currently on. Stumpwm coordinates the themes for all the apps and synchronizes them with time and desktop and whatnot.<p>Really helps with circadian rhythm, I've found. Especially because I take a live webcam feed and convolve a hexagonal mask over it to match my light panels' layout, so it's like having a low-res window from whatever webcam I would like. And, from sunset to night, it smoothly fades the light panels into a display representing an angle-compressed sky projection of the stars relative to a fixed-location moon with the live phase displayed.<p>Obligatory images:<p>The day themes: <a href="https://youtu.be/danulUB-J-k" rel="nofollow">https://youtu.be/danulUB-J-k</a><p>Light panels:
<a href="https://imgbox.com/MQfPNjtI" rel="nofollow">https://imgbox.com/MQfPNjtI</a> <- sunset on the hex display<p><a href="https://imgbox.com/qcrFxncU" rel="nofollow">https://imgbox.com/qcrFxncU</a> <- random cloudy day hex display<p><a href="https://imgbox.com/EOFk63WZ" rel="nofollow">https://imgbox.com/EOFk63WZ</a> <- a night still of the hex display</p>
]]></description><pubDate>Sat, 08 Nov 2025 16:54:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45858096</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=45858096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45858096</guid></item><item><title><![CDATA[New comment by triclops200 in "The Case That A.I. Is Thinking"]]></title><description><![CDATA[
<p>Because that's how the rules of the system we exist within operate more generally.<p>We've done similar experiments with more controlled/simple systems and physical processes that satisfy the same symmetries needed to make that statement with rather high confidence about other similar but much more composite systems (in this case, humans).<p>It's more like saying: in principle, if a bridge existed between Mexico and Europe, cars could drive across. I'm not making any new statements about cars. We know that's true; it would just be an immense amount of effort and resources to actually construct the bridge. In a similar vein, one could, in principle, build a device that stores enough information, at some precision, to predict a human system deterministically and do playback or whatever. It's just that some levels of precision are harder to achieve than others, in terms of measurement-device complexity and the energies needed to probe. 
At worst, you could sample down to the uncertainty limits and, in theory, reconstruct a similar set of behaviors by sampling over the immense state space and minimizing the action within the simulated environment (and that could be done efficiently on a large enough quantum computer, again, in principle).<p>However, that doesn't seem to be empirically required to actually model the high levels of human behavior. Plus, mathematically, we can just condition the theories on their axiomatic statements (i.e., for Markov blankets: they are valid approximations of reality given that the system described has an external and internal state, a coherence metric, etc.), and say "hey, even if humans and LLMs aren't identical, under these conditions, which they do share, they will have these XYZ sets of identical limit behaviors, given similar conditions and environments."</p>
]]></description><pubDate>Mon, 03 Nov 2025 21:32:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45804708</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=45804708</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45804708</guid></item><item><title><![CDATA[New comment by triclops200 in "The Case That A.I. Is Thinking"]]></title><description><![CDATA[
<p>Of course, just doing my part in the collective free energy minimization ;)</p>
]]></description><pubDate>Mon, 03 Nov 2025 20:46:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45804222</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=45804222</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45804222</guid></item><item><title><![CDATA[New comment by triclops200 in "The Case That A.I. Is Thinking"]]></title><description><![CDATA[
<p>As a researcher in these fields: this reasoning is tired, overblown, and just wrong. We have a lot of understanding of how the brain works overall. <i>You</i> don't. Go read the active inference book by Friston et al. for some of the epistemological and behavioral mechanics (yes, this applies to LLMs as well; they easily satisfy the requirements to be considered the mathematical object described as a Markov blanket).<p>And, yes, if you could somehow freeze a human's current physical configuration at some point in time, you would absolutely, in principle, given what we know about the universe, be able to concretely map inputs into actions. You cannot separate a human's representative configuration from their environment in this way, so behavior appears much more non-deterministic.<p>Another paper by Friston et al. ("Path integrals, particular kinds, and strange things") describes systems much like modern modeling, and these absolutely fall under the same action-minimization requirements for the math to work, given the kinds of data acquisition, loss functions, and training/post-training we're doing as a research society with these models.<p>I also recommend <a href="https://arxiv.org/abs/2112.04035" rel="nofollow">https://arxiv.org/abs/2112.04035</a>, but, in short, transformer models have functions and emergent structures provably similar, both empirically and mathematically, to how we abstract and consider things. 
Along with <a href="https://arxiv.org/pdf/1912.10077" rel="nofollow">https://arxiv.org/pdf/1912.10077</a>, these four sources alone, taken together, strongly rebut any idea that they are somehow <i>not</i> capable of learning to act and think like us, though there are many more.</p>
]]></description><pubDate>Mon, 03 Nov 2025 20:05:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45803748</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=45803748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45803748</guid></item><item><title><![CDATA[New comment by triclops200 in "The Mary Queen of Scots Channel Anamorphosis: A 3D Simulation"]]></title><description><![CDATA[
<p>That'd be pretty easy to throw into an optimizer. For each configuration, you could calculate the "fitness" by sampling the anamorphic rendering at various angles and doing pixel-by-pixel comparisons against ground-truth rendered single-image portraits of the two images at the same angles. You could use nearly any metaheuristic really easily with that setup.</p>
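A minimal sketch of the fitness evaluation described above, assuming hypothetical `render_candidate` / `render_truth` callables (stand-ins I made up, not the simulation's actual API):

```python
import numpy as np

def fitness(render_candidate, render_truth, angles):
    """Hypothetical fitness: mean per-pixel squared error between the
    candidate anamorphic rendering and the ground-truth portrait render,
    averaged over the sampled viewing angles. Lower is better; any
    metaheuristic (simulated annealing, CMA-ES, a GA, ...) would minimize it."""
    total = 0.0
    for theta in angles:
        cand = render_candidate(theta)   # H x W grayscale array
        truth = render_truth(theta)
        total += np.mean((cand - truth) ** 2)
    return total / len(angles)

# Toy stand-in renderers so the sketch runs end to end:
truth = lambda theta: np.full((8, 8), np.cos(theta))
candidate = lambda theta: np.full((8, 8), np.cos(theta) + 0.1)

score = fitness(candidate, truth, angles=np.linspace(0.0, np.pi / 2, 5))
```

Any black-box optimizer can then propose configurations and keep whichever scores lowest.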
]]></description><pubDate>Wed, 13 Aug 2025 15:45:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44889981</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=44889981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44889981</guid></item><item><title><![CDATA[New comment by triclops200 in "Purple Earth hypothesis"]]></title><description><![CDATA[
<p>"Principle" seems to be the modern equivalent of the concept that used to be represented by the word "law," in this context. I've also seen them used interchangeably in various places: "principle of least action" vs. "law of least action" is a pretty common example.</p>
]]></description><pubDate>Sun, 27 Jul 2025 18:28:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44703387</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=44703387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44703387</guid></item><item><title><![CDATA[New comment by triclops200 in "I don't think AGI is right around the corner"]]></title><description><![CDATA[
<p>Measurability is essentially a synonym for meaningful interaction at some measurement scale. When describing fundamental measurability limits, you're essentially describing what current physical models consider to be the fundamental interaction scale.</p>
]]></description><pubDate>Sun, 06 Jul 2025 22:23:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44484654</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=44484654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44484654</guid></item><item><title><![CDATA[New comment by triclops200 in "Take Two: Eshell"]]></title><description><![CDATA[
<p>I like the point raised by the author here about the power of elisp in the shell. For context, I've used emacs for well over a decade and write a good bit of elisp most weeks. However, I ended up finding elisp to be the wrong language for replacing a shell: it removes a lot of simplicity/convention for not enough reward, imo. It's just the wrong abstraction, because elisp's most natural transformations are very stateful and buffer-based, and, additionally, it really doesn't provide that much expanded capability in the end (and this is despite the fact that I'm one of those people who would absolutely never leave my emacs environment, even for web browsing, if I could feasibly make it work consistently for the kinds of things I need).<p>However, despite disliking python, I found <a href="https://xon.sh/" rel="nofollow">https://xon.sh/</a> to be really excellent; it provides a lot of the benefits of eshell, but better, in my humble opinion: it has a well-enough structured base language that is essentially already built to be a glue-code language for libraries, which works really well with the unix philosophy that traditional shells utilize (a good library is just a tool that does a thing really well).
It's really nice to just do things like this example:<p><pre><code>    import cv2  # OpenCV; needed for imread below
    from somelib import process_image  # returns a file name that it saved to

    for f in $(some_executable_that_prints_a_file_list).strip().split("\n"):
        img = cv2.imread(f)
        cp @(process_image(img)) publish_dir
        make publish
</code></pre>
It feels a lot like an extension of old shells to me rather than a complete replacement.</p>
]]></description><pubDate>Sun, 06 Jul 2025 19:48:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44483472</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=44483472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44483472</guid></item><item><title><![CDATA[New comment by triclops200 in "Ancient law requires a bale of straw to hang from Charing Cross rail bridge"]]></title><description><![CDATA[
<p>Thanks for that. I looked it up and it was an interesting rabbit hole:
Basically, that oath was Oxford University officially saying "fuck you" to a request of the King (1200s England) after he effectively tried to order them to break their collective line and accept a rich fuck who had murdered a scholar in the past. Feels kinda like a proto-union action to me:
<a href="https://blogs.bodleian.ox.ac.uk/archivesandmanuscripts/2023/12/13/the-persistence-of-tradition-the-curious-case-of-henry-symeonis/" rel="nofollow">https://blogs.bodleian.ox.ac.uk/archivesandmanuscripts/2023/...</a></p>
]]></description><pubDate>Fri, 23 May 2025 16:27:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44074240</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=44074240</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44074240</guid></item><item><title><![CDATA[New comment by triclops200 in "Naur's "Programming as Theory Building" and LLMs replacing human programmers"]]></title><description><![CDATA[
<p>Not the parent, but, it's kinda obvious in even degenerate logic cases:<p># user<p>what's 78904 + 34567282902<p># Gemma 2.5 pro<p><pre><code>    Identify the core task: The user wants to add two numbers: 78904 and 34567282902.
    Recognize the operation: The operation is addition (+).
    Perform the calculation:
        Align the numbers vertically by place value:

          34567282902
        +       78904
        -------------

        Add column by column, starting from the rightmost column (ones place):
            2 + 4 = 6
            0 + 0 = 0
            9 + 9 = 18 (write down 8, carry over 1)
            2 + 8 + 1 (carry-over) = 11 (write down 1, carry over 1)
            8 + 7 + 1 (carry-over) = 16 (write down 6, carry over 1)
            2 + 1 (carry-over) = 3
            7 + 0 = 7
            6 + 0 = 6
            5 + 0 = 5
            4 + 0 = 4
            3 + 0 = 3
        Combine the results: 34567361806
    Format the answer: Present the calculation clearly and state the final result.
</code></pre>
To find the sum of 78904 and 34567282902, you add the two numbers together:<p><pre><code>      34567282902
    +       78904
    -------------
      34567361806
</code></pre>
So, 78904 + 34567282902 = 34567361806.<p>The model definitely iteratively built up (useful and correct, even) text that wasn't directly in the training data (I doubt that number ever appeared anywhere in there, given that 34567361806 doesn't appear in any searches I do of various engines)</p>
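The carry-by-carry procedure the model narrates is just ordinary schoolbook column addition; a minimal Python sketch of the same place-value steps:

```python
def column_add(a: str, b: str) -> str:
    """Add two non-negative decimal strings digit by digit, carrying,
    exactly as in the scratch work above."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)       # align by place value
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        s = int(da) + int(db) + carry
        digits.append(str(s % 10))  # write down the ones digit
        carry = s // 10             # carry over the rest
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

result = column_add("34567282902", "78904")
```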
]]></description><pubDate>Mon, 28 Apr 2025 21:19:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43826236</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=43826236</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43826236</guid></item><item><title><![CDATA[New comment by triclops200 in "Differentiable Logic Cellular Automata"]]></title><description><![CDATA[
<p>I'm not an expert on the actual biological mechanisms, but it makes intuitive sense to me that both of those effects would occur, in the situation you described, from simple cells working on gradients. I was one of the authors on this paper during my undergrad [1], and the generalized idea of an eye being placed on a tail and having nerves successfully routed through the body via a pheromone gradient is exactly the kind of error I watched occur a dozen times while collecting the population error statistics for this paper. Same thing with the kind of error of a face re-arranging itself. The "ants" in this paper have no communication except chemical gradients, similar to the ones talked about with morphogen gradients. I'm not claiming it's a proof of it working that way, ofc, but even simpler versions of the same mechanism can result in the same kinds of behavior and error.<p>[1]: <a href="https://direct.mit.edu/isal/proceedings/alif2016/28/100/99406" rel="nofollow">https://direct.mit.edu/isal/proceedings/alif2016/28/100/9940...</a></p>
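Not the paper's actual ant model, but a toy sketch (my own simplification) of the pure gradient-following it relies on: an agent that only ever climbs toward the strongest neighboring chemical concentration.

```python
import numpy as np

def step_toward_gradient(pos, field):
    """Move an agent one cell toward the neighboring cell with the highest
    chemical concentration; stay put if the current cell is already best."""
    r, c = pos
    best = pos
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            in_bounds = 0 <= nr < field.shape[0] and 0 <= nc < field.shape[1]
            if in_bounds and field[nr, nc] > field[best]:
                best = (nr, nc)
    return best

# A toy "pheromone" field peaking at (4, 4): concentration falls off
# with squared distance from the source.
rows, cols = np.mgrid[0:9, 0:9]
field = -((rows - 4) ** 2 + (cols - 4) ** 2).astype(float)

pos = (0, 0)
for _ in range(10):
    pos = step_toward_gradient(pos, field)
```

Even this degenerate version reaches the source from anywhere on the grid, with no communication beyond the local gradient itself.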
]]></description><pubDate>Fri, 07 Mar 2025 16:37:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43291619</link><dc:creator>triclops200</dc:creator><comments>https://news.ycombinator.com/item?id=43291619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43291619</guid></item></channel></rss>