<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Ari_Rahikkala</title><link>https://news.ycombinator.com/user?id=Ari_Rahikkala</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 04:27:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Ari_Rahikkala" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Ari_Rahikkala in "Speculative Speculative Decoding (SSD)"]]></title><description><![CDATA[
<p>Neat. Very similar to tree-based speculation as they point out, and they also point out how to combine them.<p>Speculative decoding: Sample a linear output (next n tokens) from the draft model, submit it to a verifier model. At some index the verifier might reject a token and say that no, actually the next token should be this other token instead ("bonus token" in this paper), and that's your output. Or if it accepts the whole draft, you still get a bonus token as the next token past the draft. Then you draft again from that prefix on.<p>Tree-based speculation: Sample a tree of outputs from the draft model, submit the whole tree to the verifier, pick the longest accepted prefix (and its bonus token).<p>Speculative speculative decoding: Sample a linear output from the draft model, then in parallel both verify it with the verifier model, and produce a tree of drafts branching out from different rejection points and different choices of bonus tokens at those points. When the verifier finishes, you might have a new draft ready to submit right away.<p>Combined: Sample a tree from the draft model, submit the whole tree to the verifier and in parallel also plan out drafts for different rejection points with different bonus tokens anywhere in the tree.</p>
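<p>To make the plain (non-tree) loop concrete, here is a toy sketch in TypeScript. The verify function is a stand-in for the big model; real systems score the whole draft in one batched forward pass and sample rather than compare exact tokens, so everything here is illustrative rather than from the paper:

```typescript
// Toy sketch of one round of plain (non-tree) speculative decoding.
// `verify` stands in for the verifier model: given a prefix, it returns
// the token the verifier says should come next.

type Token = number;

function speculateOnce(
  prefix: Token[],
  draft: Token[],
  verify: (prefix: Token[]) => Token,
): Token[] {
  const out = [...prefix];
  for (const t of draft) {
    const expected = verify(out);
    out.push(expected);             // accepted token, or the "bonus" correction
    if (expected !== t) return out; // first rejection ends the round
  }
  // Whole draft accepted: the verifier still yields one bonus token.
  out.push(verify(out));
  return out;
}

// Hypothetical verifier: always continues the sequence with last + 1.
const countUp = (p: Token[]) => (p.length ? p[p.length - 1] + 1 : 0);

console.log(speculateOnce([1, 2], [3, 4, 9], countUp)); // rejects at 9
```

The tree-based and SSD variants generalize this one loop: submit many candidate continuations at once, and keep whichever path the verifier accepts furthest.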
]]></description><pubDate>Wed, 04 Mar 2026 05:47:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47243580</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=47243580</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47243580</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Five Years of LLM Progress"]]></title><description><![CDATA[
<p>> Almost every team that I’ve been talking to that is training a LLM right now talks about how they’re training a Chinchilla optimal model, which is remarkable given that basically everything in the LLM space changes every week.<p>I hope that either that's a miscommunication, or I'm wrong about how much of a red flag that seems to be.<p>The Chinchilla scaling laws allow you to relate, at a somewhat-better-than-rule-of-thumb level, the model size, training data size, and achieved performance of a LLM, without actually training one. So, if for instance you have a certain loss target, and a certain sized corpus of training data, you can use the scaling law to calculate what size of a model to train to hit the target. I can see that being useful to any team.<p>Chinchilla-optimality on the other hand means finding, for a set loss target, the combination of model size and training data size that minimizes training compute (which, roughly speaking, scales with just the product of those two numbers). But only training compute: Inference compute only scales with model size, regardless of training data. So Chinchilla-optimality is useful only if you expect training to take up most of your compute, i.e. if you are not expecting to actually use the model that much. I'm not in the field myself so I don't know how to quantify "that much", but it's definitely enough to keep those concepts distinct.</p>
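<p>For concreteness, a sketch of the arithmetic the distinction rests on. The C ≈ 6·N·D training-FLOPs estimate and the D ≈ 20·N Chinchilla-optimal ratio are the usual rules of thumb, not exact figures:

```typescript
// Back-of-envelope Chinchilla arithmetic. Rule-of-thumb constants
// (assumptions, not exact): training FLOPs C ≈ 6·N·D, and the
// Chinchilla-optimal data/parameter ratio D ≈ 20·N.

function trainingFlops(params: number, tokens: number): number {
  return 6 * params * tokens;
}

// For a fixed training budget, the compute-optimal split of
// parameters vs. training tokens.
function chinchillaOptimal(budget: number): { params: number; tokens: number } {
  const params = Math.sqrt(budget / (6 * 20));
  return { params, tokens: 20 * params };
}

// Inference cost per generated token ≈ 2·N FLOPs: it depends on model
// size only, which is why the optimal split changes once you expect to
// actually serve the model a lot.
const inferenceFlopsPerToken = (params: number) => 2 * params;

const { params, tokens } = chinchillaOptimal(1e21);
console.log(params.toExponential(2), tokens.toExponential(2));
```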
]]></description><pubDate>Wed, 16 Aug 2023 05:01:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=37143031</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=37143031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37143031</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Delimiters won’t save you from prompt injection"]]></title><description><![CDATA[
<p>Call me an optimist, but I think prompt injection just isn't as fundamental a problem as it seems.<p>Having a single, flat text input sequence with everything in-band isn't fundamental to transformers: The architecture readily admits messing around with different inputs (with, if you like, explicit input features to make it simple for the model to pick up which ones you want to be special), position encodings, attention masks, etc.. The hard part is training the model to do what you want, and it's LLM training where the assumption of a flat text sequence comes from.<p>The optimistic view is, steerability turns out not to be too difficult: You give the model a separate system prompt, marked somehow so that it's easy for the model to pick up that it's separate from the user prompt; and it turns out that the model takes well to your steerability training, i.e. following the instructions in the system prompt above the user prompt. Then users simply won't be able to confuse the model with delimiter injection: OpenAI just isn't limited to in-band signaling.<p>The pessimistic view is, the way that the model generalizes its steerability training will have lots of holes in it, and we'll be stuck with all sorts of crazy adversarial inputs that can confuse the model into following instructions in the user prompt above the system prompt. Hopefully those attacks will at least be more exciting than just messing with delimiters.<p>(And I guess the depressing view is, people will build systems on top of ChatGPT with no access to the system prompt in the first place, and we will in fact be stuck with the problem)</p>
]]></description><pubDate>Sat, 13 May 2023 07:28:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=35926392</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=35926392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35926392</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "OpenAI Tokenizer"]]></title><description><![CDATA[
<p>"They" as in OpenAI, when they trained the tokenizer, just dumped a big set of text data into a BPE (byte pair encoding) tokenizer training script, and it saw that string in the data so many times that it ended up making a token for it.<p>"They" as in the rest of us afterward... probably just looked at the token list. It's a little over fifty thousand items, mostly short words and fragments of words, and can be fun to explore.<p>The GPT-2 and GPT-3 models proper were trained on different data than the tokenizer they use, one of the major differences being that some strings (like " SolidGoldMagikarp") showed up very rarely in the data that the model saw. As a result, the models can respond to the tokens for those strings a bit strangely, which is why they're called "glitch tokens". From what I've seen, the base models tend to just act as if the glitch token wasn't there, but instruction-tuned models can act in weirdly deranged ways upon seeing them.<p>The lesson to learn overall AIUI is just that you should train your tokenizer and model on the same data. But (also AIUI - we don't know what OpenAI actually did) you can also simply just remove the glitch tokens from your tokenizer, and it'll just encode the string into a few more tokens afterward. The model won't ever have seen that specific sequence, but it'll at least be familiar with all the tokens in it, and unlike never-before-seen single tokens, it's quite used to dealing with never-before-seen sentences.</p>
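<p>If it helps, the core of BPE training is small enough to sketch. This toy version merges the most frequent adjacent pair until no pair repeats; real training runs over gigabytes with byte-level fallback, but the mechanism that turns a frequent string like " SolidGoldMagikarp" into a single token is the same:

```typescript
// Minimal sketch of BPE training: repeatedly merge the most frequent
// adjacent pair of symbols into one new symbol. A string repeated often
// enough in the data ends up as a single symbol, i.e. a single token.

function mostFrequentPair(seq: string[]): [string, string] | null {
  const counts = new Map<string, number>();
  for (let i = 0; i + 1 < seq.length; i++) {
    const key = seq[i] + "\u0000" + seq[i + 1];
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  let best: string | null = null;
  let bestN = 1; // only merge pairs that occur at least twice
  for (const [k, n] of counts) if (n > bestN) { best = k; bestN = n; }
  return best ? (best.split("\u0000") as [string, string]) : null;
}

function bpeMerges(text: string, rounds: number): string[] {
  let seq = [...text];
  for (let r = 0; r < rounds; r++) {
    const pair = mostFrequentPair(seq);
    if (!pair) break;
    const merged: string[] = [];
    for (let i = 0; i < seq.length; i++) {
      if (i + 1 < seq.length && seq[i] === pair[0] && seq[i + 1] === pair[1]) {
        merged.push(pair[0] + pair[1]); // fuse the pair into one symbol
        i++;
      } else merged.push(seq[i]);
    }
    seq = merged;
  }
  return seq;
}

console.log(bpeMerges("magikarp magikarp magikarp", 20));
```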
]]></description><pubDate>Wed, 05 Apr 2023 15:31:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=35455853</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=35455853</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35455853</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "ChatGPT is a blurry JPEG of the web"]]></title><description><![CDATA[
<p>> Models like ChatGPT aren’t eligible for the Hutter Prize for a variety of reasons, one of which is that they don’t reconstruct the original text precisely—i.e., they don’t perform lossless compression.<p>Small nit: The lossiness is not a problem at all. Entropy coding turns an imperfect, lossy predictor into a lossless data compressor, and the better the predictor, the better the compression ratio. All Hutter Prize contestants anywhere near the top use it. The connection at a mathematical level is direct and straightforward enough that "bits per byte" is a common number used in benchmarking language models, despite the fact that they are generally not intended to be used for data compression.<p>The practical reason why a ChatGPT-based system won't be competing for the Hutter Prize is simply that it's a contest about compressing a 1GB file, and GPT-3's weights are both proprietary and take up hundreds of times more space than that.</p>
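<p>To make the predictor-to-compressor connection concrete, here is a sketch of the accounting an entropy coder does. An ideal arithmetic coder spends about -log2(p) bits on a symbol the model assigned probability p; this just sums that up rather than emitting an actual bitstream:

```typescript
// The size of an entropy-coded message is the sum of -log2(p) over the
// probabilities the model assigned to the symbols that actually occurred.
// Decoding is lossless no matter how wrong the predictions are - worse
// predictions just cost more bits.

type Predictor = (context: string) => Map<string, number>; // symbol -> prob

function compressedBits(text: string, predict: Predictor): number {
  let bits = 0;
  for (let i = 0; i < text.length; i++) {
    const p = predict(text.slice(0, i)).get(text[i]) ?? 0;
    if (p <= 0) throw new Error("model must give every symbol some mass");
    bits += -Math.log2(p);
  }
  return bits;
}

// Hypothetical predictors over the alphabet {a, b}:
const uniform: Predictor = () => new Map([["a", 0.5], ["b", 0.5]]);
const skewed: Predictor = () => new Map([["a", 0.9], ["b", 0.1]]);

const sample = "aaaaaaaaab"; // mostly 'a', as the skewed model expects
console.log(compressedBits(sample, uniform)); // 10 bits
console.log(compressedBits(sample, skewed));  // ≈ 4.69 bits
```

Dividing that bit count by the byte length of the text gives exactly the "bits per byte" number used in language-model benchmarks.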
]]></description><pubDate>Thu, 09 Feb 2023 16:40:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=34726719</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=34726719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34726719</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Building a Virtual Machine Inside ChatGPT"]]></title><description><![CDATA[
<p>> I'm glad that you enjoyed my previous responses, but I want to clarify that I was not pretending to be a Linux terminal.<p>People who like to pooh-pooh generative AI systems as unable to be "truly creative" or to have "genuine understanding" tend to misunderstand them, which is a shame, because their actual fundamental limitations are far more interesting.<p>One is that behavior cloning is miscalibrated(<a href="https://www.lesswrong.com/posts/BgoKdAzogxmgkuuAt/behavior-cloning-is-miscalibrated" rel="nofollow">https://www.lesswrong.com/posts/BgoKdAzogxmgkuuAt/behavior-c...</a>): GPT-3 can be thought of as having been taught to act like a human by predicting human-written text, but it's incapable of recognizing that it has different knowledge and capabilities than a human when trying to act like one. Or, for that matter, it can roleplay a Linux terminal, but it's again incapable of recognizing for instance that when you run `ls`, an actual Linux system uses a source of knowledge that the model doesn't have access to, that being the filesystem.<p>Self-knowledge is where it gets particularly bad: Most text about systems or people describing themselves is very confident, because it's from sources that do have self-knowledge and clear understanding of their own capabilities. So, ChatGPT will describe itself with that same level of apparent knowledge, while in fact making up absolute BS, because it doesn't have self-knowledge when describing itself in language, in exactly the same sense as it doesn't have a filesystem when describing the output of `ls`.</p>
]]></description><pubDate>Sun, 04 Dec 2022 10:17:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=33852275</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=33852275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33852275</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "The global streaming boom is creating a translator shortage (2021)"]]></title><description><![CDATA[
<p>Stories like this might as well be titled "Local change in Earth's magnetic field causes objects to float in the air". They shouldn't be read and then disputed because their details don't match the facts, they should be laughed out of the room based on the title alone. The forces they suppose to be relevant just aren't anywhere even close to the brute economic logic that actually governs how many people are willing to work in a field.<p>If companies want to employ more roofers, they should pay roofers more. It turns out that they will find more roofers that way. If they can't afford that... then we as a society didn't actually want that much roofing done in the first place. Or teaching, or farming, or programming, or whatever kind of job this story happened to be about again.</p>
]]></description><pubDate>Sun, 13 Mar 2022 14:46:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=30661649</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=30661649</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30661649</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "GPT-J-6B: 6B JAX-Based Transformer"]]></title><description><![CDATA[
<p>With the defaults of per_replica_batch=1, seq=2048 and gen_len=512, a completion takes about 20 seconds.<p>I'm not sure yet what settings I'll end up with if I decide to play with this more. per_replica_batch=3, seq=1024, gen_len=64 would give an experience roughly similar to the AI Dungeon that I'm used to, though less clever than the Dragon model, and a bit slower at about 10 seconds per batch.</p>
]]></description><pubDate>Thu, 10 Jun 2021 15:50:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=27462098</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=27462098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27462098</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "GPT-J-6B: 6B JAX-Based Transformer"]]></title><description><![CDATA[
<p>I'm running it comfortably on my 3090, although it's a really snug fit for the VRAM, and that's with a number of fixes to significantly reduce its memory use from <a href="https://github.com/AeroScripts/mesh-transformer-jax" rel="nofollow">https://github.com/AeroScripts/mesh-transformer-jax</a> .</p>
]]></description><pubDate>Thu, 10 Jun 2021 02:21:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=27456012</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=27456012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27456012</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Tagged Unions Are Overrated"]]></title><description><![CDATA[
<p>I used to think that this was only ever going to be possible with some sort of incomprehensible type trickery that I would never be able to understand. Then TypeScript came along and showed me that no, actually, in a structural type system, it's just about as simple as you could imagine:<p><pre><code>    type Foo = { kind: 'A', aItem: number } | { kind: 'B', bItem: string } | { kind: 'C' };
    
    type SubsetFoo = Foo & { kind: 'B' | 'C' };
    type SupersetFoo = Foo | { kind: 'D', dItem: boolean };
</code></pre>
I'm sure there are imperfections here. For a start, SubsetFoo's normalized form looks rather ugly when you mouse over it in VSCode. But it does get you the niceties of exhaustiveness checking and type-aware suggestions with control flow awareness, etc..</p>
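<p>The exhaustiveness checking mentioned above, for anyone who hasn't seen the never trick (standard TypeScript, nothing exotic):

```typescript
// Exhaustiveness checking over the same Foo via never-narrowing.
type Foo = { kind: "A"; aItem: number } | { kind: "B"; bItem: string } | { kind: "C" };

function describe(foo: Foo): string {
  switch (foo.kind) {
    case "A": return `A carrying ${foo.aItem}`;
    case "B": return `B carrying ${foo.bItem}`;
    case "C": return "C, no payload";
    default: {
      // If a new variant is added to Foo and not handled above, this
      // assignment stops compiling: foo is only `never` when every
      // variant has been narrowed away.
      const unreachable: never = foo;
      return unreachable;
    }
  }
}

console.log(describe({ kind: "A", aItem: 42 })); // "A carrying 42"
```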
]]></description><pubDate>Fri, 19 Feb 2021 14:54:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=26193782</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=26193782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26193782</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Bad TypeScript Habits"]]></title><description><![CDATA[
<p>Hello, my name is Ari and I'm a type addict.<p>Thankfully, the impulse to waste time trying to come up with the most precise typing possible for everything hasn't been as strong with TypeScript as it used to be with Haskell for me. In part that's because TS's type system is so ridiculously expressive that I can say what I want to say without spending too long on it anyway, in part it's because the system's proud unsoundness and the ability for typings to simply be wrong means that I know not to stake my life on the types anyway. Besides, in my experience, the more precise I try to make the types at an interface, the more I need to cast in the internals. Better to find a balance that keeps both reasonable.</p>
]]></description><pubDate>Wed, 03 Feb 2021 03:52:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=26010420</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=26010420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26010420</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Fixing Mass Effect black blobs on modern AMD CPUs"]]></title><description><![CDATA[
<p>The Internet often makes me feel old, but seeing Photopia called a "newish" game certainly put some spryness in my joints again. It's not that it's not a great game - it is - but even in its medium, I do think it should be called a classic by now. A lot of things have changed since it was released.<p>Over the last several years of very occasionally playing interactive fiction, I've been particularly impressed by:<p>- Cactus Blue Motel, by Astrid Dalmady. 2016. A coming-of-age story with a bit of magical realism, written in Twine. Highly accessible, and it takes just minutes to give the game a try: <a href="http://astriddalmady.com/cactusblue.html" rel="nofollow">http://astriddalmady.com/cactusblue.html</a><p>- Chlorophyll, by Steph Cherrywell. 2015. Also a coming-of-age story, but mostly a rip-roaring scifi adventure. Could well make a good introduction to the more modern views on interactive fiction.<p>- Coloratura, by Lynnea Glasser. 2013. Carpenterian horror from an unusual perspective. Swept the awards in the IF community when it came out.<p>- Eat Me, by Chandler Groover. 2017. A twisted fairytale that's thoroughly obsessed with food - the richer and the more varied, the better. A great showing of how much a writer who's willing to go far enough with it can do with prose style.</p>
]]></description><pubDate>Sun, 19 Jul 2020 21:04:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=23892421</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=23892421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23892421</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Donald Knuth was framed"]]></title><description><![CDATA[
<p>Aw. Well, it was worth a shot - that kind of mild homonym abuse is the kind of direction these sorts of titles usually take. Fortunately the real story is more interesting.</p>
]]></description><pubDate>Mon, 24 Feb 2020 18:07:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=22406554</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=22406554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=22406554</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Donald Knuth was framed"]]></title><description><![CDATA[
<p>My guess is it's either about a picture of Donald Knuth, or Knuth's reward checks, being put in picture frames.</p>
]]></description><pubDate>Mon, 24 Feb 2020 17:42:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=22406265</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=22406265</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=22406265</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Dynamic type systems are not inherently more open"]]></title><description><![CDATA[
<p>"Nitpickers will complain that this isn’t the same as pickle.load(), since you have to pass a Class<T> token to choose what type of thing you want ahead of time. However, nothing is stopping you from passing Serializable.class and branching on the type later, after the object has been loaded."<p>Is that actually true in Java? It seems to me that the way that you'd implement that load() method is by using reflection to inspect the class you were passed, figuring out what data it wants in what slot, and pulling that data in from the input. You <i>could</i> hold on to the input and return a dynamic proxy, and you would be able to see when someone calls a getFoo() on that proxy, but then you wouldn't know what type of foo it expects. And I don't know whether you could even make your proxy assignable to T.</p>
]]></description><pubDate>Sun, 19 Jan 2020 12:25:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=22091043</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=22091043</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=22091043</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Rust 1.34.2"]]></title><description><![CDATA[
<p>Could be it just came across the right pair of eyes. It's quite reminiscent of the old EvilIx problem in Haskell, where you can break memory safety without doing anything explicitly marked unsafe: <a href="https://mail.haskell.org/pipermail/haskell-cafe/2006-December/019994.html" rel="nofollow">https://mail.haskell.org/pipermail/haskell-cafe/2006-Decembe...</a></p>
]]></description><pubDate>Tue, 14 May 2019 15:02:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=19910188</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=19910188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=19910188</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Wages Are Finally Rising, 10 Years After the Recession"]]></title><description><![CDATA[
<p>This turned out to be quite an angry post, so a quick preface: All of this is a knee-jerk response to a single sentence in your post. So, you know, please take with a grain of salt, not personally, etc..<p>I have to admit, it's a little bit depressing to see someone just use a straight-out lump of labor argument against immigration.<p>I can deal with people who believe immigrants tend to be criminal and who are concerned for their family's safety - they're wrong, but they're afraid, and fear's got a way of overriding statistical reason. I can deal with people who believe immigrants will bring poor institutions from their countries of origin - I think their evidence is weak, but I realize there's a lot at stake in keeping the global North rich and well-run, so I can see at least some reason for caution. I can even deal with the racists by letting my eyes glaze over for a while. People who think that brain drain hurts the country of origin too much... believe in a greater amount of responsibility that individuals have for their nations of birth than I do, but okay, at least they're coming from what they believe is a position of compassion. And so forth - I can sympathize with, understand, or at least ignore most arguments against more open immigration, even if I disagree with them.<p>But just "stop importing poor foreigners who undercut American wages"? What do you think makes your work valuable in the first place? I mean, I assume that like with most people in a modern economy, very little of what you consume comes from what you yourself produced. Your work is valuable because other people have demand for its results, and will exchange some of their own surplus for it. The more surplus there is to go around, the more there is for you to get. You might have to specialize more to compete in a bigger market, but you're surely capable of it. Even the poor generally are. 
If nothing else, in case of immigration to the USA, the natives tend to have the distinct advantage of being literate and generally conversable in English.<p>So please at least have the rational selfishness to ask for more surplus! Yes, the one guy who moves in to your tiny geographic area and works in the minuscule sliver of what you produce in the economy will make your life more complicated, because now you'll have to compete with them. But for that guy, there'll be thousands of others whose greater productivity you will now get to enjoy. Maybe if you were the Immigration Czar, you could be very selfish and decide that that person doesn't get to move in but everyone else does. But if you want to make a policy that lets all selfish people benefit, then you have to let everyone face more competition and get more surplus.<p>To in fact not undercut my own point: I'm not claiming that that's the whole story. You might believe that maybe there will in fact be less surplus overall in the long run if you open up immigration, or you might believe that the fruits of greater productivity will go to people who don't deserve them. Again, those are concerns that I can in fact see reason in. Not that many people actually believe that they live in a world where the only difference between labor in a rich country and labor in a poor country is that the former are more productive because of where they live. In more advanced territory, I might even have to deploy moral arguments, like how unconscionable it is to not let people - individuals of just as much moral worth and agency as you or I, or indeed us behind the veil of ignorance - work in your country because they were born on the wrong side of a border. But I won't do it for lump-of-labor. It doesn't show enough understanding of economics to be worth a moral argument.</p>
]]></description><pubDate>Sun, 05 May 2019 10:07:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=19832035</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=19832035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=19832035</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "Around the World, People Have Surprisingly Modest Notions of the ‘Ideal’ Life"]]></title><description><![CDATA[
<p>I don't know. Maybe if you've been reading bad science fiction with immortal geniuses running around everywhere, it might seem that living to 120 and having an IQ of 130 is leaving a lot on the table, but when you compare those numbers to what we actually get, I think calling them "modest" is missing the point. Those aren't numbers that say "I'm more or less content with my lot", those are numbers that say you wish you were as smart as the smartest person you ever met, and that the papers wrote stories about your birthdays. They're an argument against the idea that transhumanism isn't something that people actually want.<p>And of course, the implication is obvious: If everyone <i>did</i> on average live to 120 and get an IQ of 130 on our tests, then everyone would be wanting to live 200 years, and quite a few people would probably want an IQ of 160 on the old tests. That is, unless that was the point where people just switched to transhumanism as simplified humanism: Having the choice to live longer and to understand the universe better are probably always a good thing, regardless of how long you've lived and how much you understand.</p>
]]></description><pubDate>Sat, 23 Jun 2018 13:41:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=17381229</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=17381229</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17381229</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "The Quantum Thermodynamics Revolution (2017)"]]></title><description><![CDATA[
<p>Just to make sure I'm not completely confused here: "Information can't be lost" and "you can't arrive at the same state of the universe through two different paths" are two ways to state exactly the same thing, right? Regardless of the details of the rest of your physics (though you do need various notions to build up that far - time, with at least a past and a present, the ability to call states the "same" or not, etc.)<p>For instance, in cellular automata, the way that you would state that same concept is "the update function is injective". In our universe's physics, AIUI injectivity is involved somewhere in the definition of unitarity.</p>
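<p>A concrete toy version of the cellular-automaton phrasing, with two made-up update rules over a 4-cell binary ring and a brute-force injectivity check:

```typescript
// "No two states map to the same successor" checked by brute force
// over all 2^4 states of a 4-cell binary ring.

function checkInjective(update: (s: number) => number, stateCount: number): boolean {
  const seen = new Set<number>();
  for (let s = 0; s < stateCount; s++) {
    const next = update(s);
    if (seen.has(next)) return false; // two histories merged: information lost
    seen.add(next);
  }
  return true;
}

// Reversible toy rule: rotate the 4-bit ring left by one cell.
const rotate = (s: number) => ((s << 1) | (s >> 3)) & 0xf;
// Irreversible toy rule: each cell becomes the OR of itself and a neighbor.
const smear = (s: number) => (s | rotate(s)) & 0xf;

console.log(checkInjective(rotate, 16)); // true - information preserved
console.log(checkInjective(smear, 16));  // false - histories merge
```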
]]></description><pubDate>Fri, 09 Mar 2018 06:03:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=16550053</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=16550053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16550053</guid></item><item><title><![CDATA[New comment by Ari_Rahikkala in "The Forgotten Mystery of Inertia"]]></title><description><![CDATA[
<p>I'll admit to not being a physicist and to being easily confused by the subject, so I'll just use a silly little metaphor to check whether I'm thinking straight, and try to avoid terminology I don't fully understand (which, since I don't have a particularly strong understanding of even special relativity, means not using relativity terms at all):<p>Think of the universe as a rubber sheet over a table, being stretched in all directions. My understanding is that cosmic background radiation is on the rubber, stretching and moving with it just like all the matter is. If background radiation were on the table instead, there would be exactly one spot on the rubber that saw no motion relative to it, while spots very far away from that spot would see a huge amount of motion. If you went out far enough, you'd be seeing hard radiation from half the sky.<p>Or, as another view, though I find this one harder to think about, since here the expansion of the universe seems to confuse rather than clarify things: The background radiation you see at any given point is simply the photons radiated inward from a sphere of a very large radius centered on that point. Every point has its own such sphere, and if an observer sees a dipole component in its background radiation, it means that that observer is in motion relative to the average motion of the matter that those photons originally came out of.</p>
]]></description><pubDate>Wed, 18 Oct 2017 02:21:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=15496431</link><dc:creator>Ari_Rahikkala</dc:creator><comments>https://news.ycombinator.com/item?id=15496431</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15496431</guid></item></channel></rss>