<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hashmap</title><link>https://news.ycombinator.com/user?id=hashmap</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 05:04:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hashmap" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hashmap in "Meta removes ads for social media addiction litigation"]]></title><description><![CDATA[
<p>at certain scales, reality has to win out over whatever ideal you have in your head about how things should be. facebook is massive, a lot of society is on it, and it's a problem to make recourse invisible to the people most affected by the thing stealing their attention.</p>
]]></description><pubDate>Thu, 09 Apr 2026 14:18:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704105</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47704105</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704105</guid></item><item><title><![CDATA[New comment by hashmap in "LLM Neuroanatomy II: Modern LLM Hacking and Hints of a Universal Language?"]]></title><description><![CDATA[
<p>yann lecun has been saying this for years. but it's not really a language, it's an abstract geometric representation: similar semantic meanings of sentences in different languages land in the same place in different models, just rotated.</p>
]]></description><pubDate>Fri, 27 Mar 2026 18:17:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47546309</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47546309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47546309</guid></item><item><title><![CDATA[New comment by hashmap in "Allow me to get to know you, mistakes and all"]]></title><description><![CDATA[
<p>adhd'er here too. maybe the practice is good, but it takes a lot of energy, which is finite. i find that leaning on my strengths gets me far, far better results than trying to get up to par with everyone else on things i'm bad at. if a tool just lets you get started, and you can breeze through starting things you might otherwise never even start, using the tool seems like the way to go.<p>i've been fighting the way my brain works my whole life, and only recently have i switched to working with the way it wants to work. i get so many more things done that are important to me, and i get them done without the implicit "i need to flagellate myself with this thing i hate because there is something wrong with me" that comes with those fights.<p>and yeah, the ais come with their own problems. but the trade is so lopsidedly in the direction of being worth it. even just being a decent rubber duck can keep me on a task i would otherwise never hope to see through.</p>
]]></description><pubDate>Sun, 15 Mar 2026 18:31:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47390332</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47390332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47390332</guid></item><item><title><![CDATA[New comment by hashmap in "The 100 hour gap between a vibecoded prototype and a working product"]]></title><description><![CDATA[
<p>if something like a popup appears that i didn't ask the page for, i snap the page closed and never look at it again</p>
]]></description><pubDate>Sun, 15 Mar 2026 17:57:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47389934</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47389934</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47389934</guid></item><item><title><![CDATA[New comment by hashmap in "Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs"]]></title><description><![CDATA[
<p>oh neat, i'll check that one out. i don't get that much speedup from vram over ssd/128gb unified if i'm doing a predefined set of prompts, since i load the model from disk anyway, part of it at a time, and i'm just doing one forward pass per prompt. it's a bit slower if i'm doing cpu inference, but i've only had to do that with one model so far.<p>but yeah, on demand would be a lot of ssd churn, so i'd just do it for testing or for pulling hidden state vectors.</p>
]]></description><pubDate>Sat, 14 Mar 2026 00:40:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47372005</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47372005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47372005</guid></item><item><title><![CDATA[New comment by hashmap in "Executing programs inside transformers with exponentially faster inference"]]></title><description><![CDATA[
<p>this is neat, but to me it seems like the circuitous path to skipping autoregression, when the direct path is to just not do autoregression: get your answers from the one forward pass, and instead of backprop do lookups and updates as the same operation.</p>
]]></description><pubDate>Fri, 13 Mar 2026 16:06:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47366292</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47366292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47366292</guid></item><item><title><![CDATA[New comment by hashmap in "Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs"]]></title><description><![CDATA[
<p>i'm kind of wondering what the ceiling on reasoning would be for something like the 1.5T models with the repeating technique, but they would take a long time to download. i think if you already have them it would take maybe an hour or so to check against a swath of prompts. what's the reasoningest open model at the moment?<p>my guess is that for large models trained on large corpora there is just some ceiling on "reasoning you can do" given the internal geometry implied by the training data, because text is lossy and low-bandwidth anyway, and there's only so much of it. past some point you just have to have models learning from real-world interactions, and my guess is we're already kind of there.</p>
]]></description><pubDate>Fri, 13 Mar 2026 15:43:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47365978</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47365978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47365978</guid></item><item><title><![CDATA[New comment by hashmap in "Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs"]]></title><description><![CDATA[
<p>dude that's sick! i tried it out and it works. there's a couple of layers in there that are part of the voidy block that doesn't do much for the selected answer, so i narrowed it down to L48-53, where this model is mapping out its reasoning strategy, and repeated that twice. i got a big improvement over the original config (i chose some questions from atropos and claude code made some up, so idk, not like a real dataset).<p>so that's about 15% more compute per forward pass with 0 extra memory, which is just nuts, so for a streaming or disk-based setup it's just free better answers. def wasn't gonna think of this myself.<p><pre><code>  config               layers   overall    delta          math     reasoning  word problems
  baseline                 80    0.5391  +0.0000        0.5850        0.6357        0.3500
  rys                      87    0.5452  +0.0061        0.6706        0.6000        0.2723
  cartographer_repeat_x2   92    0.7741  +0.2350        0.8455        0.8214        0.6000
</code></pre>
looks like the model gets a second/third go at figuring out how to approach the problem, and it gets better answers.<p>i tried a matrix of other configurations and stuff gets totally weird. playing them through backwards in that block doesn't make much of a difference / order doesn't seem to matter (?!). doubling each layer got a benefit, but if i doubled the layers and doubled that block there was interference. doubling the block where the model is architecting/crystallizing its plans improves reasoning, but at the cost of other stuff. other mixes of blocks showed some improvements for certain kinds of prompts but didn't stand out as much.</p>
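roughly what the repeat looks like, assuming the model is exposed as an ordered list of layer callables. everything here is a made-up sketch, not the actual harness; the block bounds just mirror the L48-53 run and the 80 → 92 layer count from the table:

```python
# hypothetical sketch of the layer-repeat idea: rerun a contiguous block of
# layers extra times during the forward pass, reusing the same weights
# (more compute, zero extra memory).
def forward_with_repeats(layers, x, block=(48, 54), extra_passes=2):
    """layers: ordered list of layer callables; block: half-open [start, end)."""
    start, end = block
    for i, layer in enumerate(layers):
        x = layer(x)
        if i == end - 1:                   # just finished the block once...
            for _ in range(extra_passes):  # ...so give it a second/third go
                for again in layers[start:end]:
                    x = again(x)
    return x

# toy check with 4 "layers" that each add 1: repeating layers 1-2 once more
# means 6 layer applications instead of 4
out = forward_with_repeats([lambda v: v + 1] * 4, 0, block=(1, 3), extra_passes=1)
# out == 6
```

with the defaults, an 80-layer stack executes 80 + 2×6 = 92 layer applications, matching the cartographer_repeat_x2 row.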
]]></description><pubDate>Fri, 13 Mar 2026 04:15:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47360629</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47360629</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47360629</guid></item><item><title><![CDATA[New comment by hashmap in "How I use Claude Code: Separation of planning and execution"]]></title><description><![CDATA[
<p>i literally suggested this metaphor earlier yesterday to someone trying to get agents to do stuff they wanted: you have to set up your guardrails so that you can let the agents do what they're good at, and you'll get better results because you're not sitting there looking at them.<p>i think probably once you start seeing that the behavior falls right out of the geometry, you just start looking at stuff like that. still funny though.</p>
]]></description><pubDate>Sun, 22 Feb 2026 08:44:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47109432</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47109432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47109432</guid></item><item><title><![CDATA[New comment by hashmap in "How I use Claude Code: Separation of planning and execution"]]></title><description><![CDATA[
<p>these sort-of-lies might help:<p>think of the latent space inside the model like a topographic map. when you give it a prompt, you're dropping a ball at a certain point above the ground, and gravity pulls it along the surface until it settles.<p>caveat though: that's nice per-token, but the signal gets distorted by picking a token from a distribution, so with each token you're regenerating and re-distorting the signal. leaning on language that places the ball deep in a region you want to be in makes it less likely that those distortions will kick it out of the basin or valley you want to end up in.<p>if the response you get is 1000 tokens long, the initial trajectory needed to survive 1000 probabilistic filters to get there.<p>or maybe none of that is right lol, but thinking that it is has worked for me, which has been good enough</p>
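the "1000 probabilistic filters" point can be made concrete with a toy calculation (the per-token survival probability here is invented purely for illustration, it's not a real property of any model):

```python
# toy model: treat each sampled token as an independent chance of staying on
# the intended trajectory, so a long response compounds many small risks.
def survival_probability(p_stay, n_tokens):
    """probability that all n_tokens samples stay on track,
    if each token independently stays with probability p_stay."""
    return p_stay ** n_tokens

deep = survival_probability(0.999, 1000)    # prompt drops the ball deep in the basin
shallow = survival_probability(0.99, 1000)  # prompt drops it near the rim
# deep is about 0.37, shallow is about 0.00004 -- a tiny per-token difference
# compounds enormously over a 1000-token response
```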
]]></description><pubDate>Sun, 22 Feb 2026 01:57:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47107307</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=47107307</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47107307</guid></item><item><title><![CDATA[New comment by hashmap in "Doom has been ported to an earbud"]]></title><description><![CDATA[
<p>Yeah, that's more or less what I'm getting at.</p>
]]></description><pubDate>Mon, 26 Jan 2026 00:22:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46760204</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46760204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46760204</guid></item><item><title><![CDATA[New comment by hashmap in "Doom has been ported to an earbud"]]></title><description><![CDATA[
<p>I can sort of see one angle for it, and the parent story kind of supports it. Bad software is a forcing function for good hardware: the worse software has gotten in the past few decades, the better hardware has had to get to support it. If you actually try, like OP did, you can do some pretty crazy things on tiny hardware these days. Imagine what we could do on computers if they weren't so bottlenecked doing things they don't need to do.</p>
]]></description><pubDate>Sun, 25 Jan 2026 17:39:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46756160</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46756160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46756160</guid></item><item><title><![CDATA[New comment by hashmap in "A macOS app that blurs your screen when you slouch"]]></title><description><![CDATA[
<p>if i'm not sitting on my right foot with my left knee under my chin, my thinking takes a hit, but i also have to constantly switch how i'm sitting so i don't get annoyed. it's hard not to slouch/melt into whatever i'm sitting on, and i think the only way to offset all that is the gym.</p>
]]></description><pubDate>Sun, 25 Jan 2026 16:43:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46755617</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46755617</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46755617</guid></item><item><title><![CDATA[New comment by hashmap in "Scott Adams has died"]]></title><description><![CDATA[
<p>"DEI" is an inherent part of the system - being "against DEI" is simply a statement about what kind of "DEI" you actually want.</p>
]]></description><pubDate>Wed, 14 Jan 2026 16:23:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46618004</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46618004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46618004</guid></item><item><title><![CDATA[New comment by hashmap in "GraphQL: The enterprise honeymoon is over"]]></title><description><![CDATA[
<p>I do like the look of this! It seems to provide that nicely without kicking you into React, which I've had to draw a hard line against in development after my first couple of experiences, not only with React itself but with how the distributions in AI models make it a real trap to touch. I'll swap this into one of my projects and give it a go. Thanks!</p>
]]></description><pubDate>Mon, 15 Dec 2025 18:00:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46277993</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46277993</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46277993</guid></item><item><title><![CDATA[New comment by hashmap in "GraphQL: The enterprise honeymoon is over"]]></title><description><![CDATA[
<p>> GraphQL isn’t bad. It’s just niche. And you probably don’t need it.<p>> Especially if your architecture already solved the problem it was designed for.<p>What I need is to not want to fall over dead. REST makes me want to fall over dead.<p>> error handling is harder than it needs to be
> GraphQL error responses are… weird.
<p>> Simple errors are easier to reason about than elegant ones.<p>Is this a common sentiment? Looking at a garbled mash of linux output or whatever tells me a lot more than "500 sorry".<p>I'm only trying out GraphQL for the first time right now because I'm new to frontend stuff, but from life on the backend, having a whole class of problems, where the server and client agree on what to ask for and what you'll get, be compiled away is so nice. I don't actually know if there's something better than GraphQL for that, but I wish that when people wrote blogs like this they'd fill them with more "try these things instead for that problem" than simply "this thing isn't as good as you think it is, you probably don't need it".</p>
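a toy sketch of what "agreeing on what to ask for" buys you. the entity and field names are made up, and real GraphQL tooling does this against a full SDL schema at build time rather than with a hand-rolled dict, but the shape of the win is the same:

```python
# minimal sketch: with a shared schema, a malformed request fails validation
# up front instead of surfacing as a runtime surprise on the server.
SCHEMA = {"user": {"id", "name", "email"}}

def validate_query(entity, fields):
    """return a list of validation errors (empty means the contract is honored)."""
    allowed = SCHEMA.get(entity)
    if allowed is None:
        return [f"unknown entity: {entity}"]
    return [f"unknown field: {f}" for f in fields if f not in allowed]

assert validate_query("user", ["id", "name"]) == []   # contract honored
assert validate_query("user", ["nickname"]) != []     # caught before any request is sent
```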
]]></description><pubDate>Sun, 14 Dec 2025 19:38:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46266060</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46266060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46266060</guid></item><item><title><![CDATA[New comment by hashmap in "Australia begins enforcing world-first teen social media ban"]]></title><description><![CDATA[
<p>The execution didn't finish; it started. Big policy changes typically take time to solidify, and it'll probably take a while to get a reliable read on this one's trajectory. But there is international momentum on this, so making predictions based on whatever percentage of people who were supposed to have their accounts deactivated actually did the day of (if we even have that data, and I doubt we do) is probably not going to be useful.</p>
]]></description><pubDate>Wed, 10 Dec 2025 17:31:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46220644</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46220644</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46220644</guid></item><item><title><![CDATA[New comment by hashmap in "I wasted years of my life in crypto"]]></title><description><![CDATA[
<p>It is both telling and very, very funny to mistake asking for specifics for making an argument against something.</p>
]]></description><pubDate>Tue, 09 Dec 2025 02:27:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46200573</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46200573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46200573</guid></item><item><title><![CDATA[New comment by hashmap in "I wasted years of my life in crypto"]]></title><description><![CDATA[
<p>> I would encourage you to learn about them as if they were just incredibly robust databases that even governments would struggle to take down. Surely you can think of something cool to build with that, which doesn't involve money.<p>Why is it so popular for someone in tech to assign everyone else the task of thinking up something useful to do with technology X they think is cool?<p>> It's not an overstatement to say that distributed ledgers are as big of an advancement for human coordination as democracy was.<p>Ok, if that's really your thinking, then you need to lay out: here's an impossible-to-ignore thing we can do with this, this is how, and this is why it wouldn't be possible without this thing.</p>
]]></description><pubDate>Mon, 08 Dec 2025 17:38:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46195189</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46195189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46195189</guid></item><item><title><![CDATA[New comment by hashmap in "The programmers who live in Flatland"]]></title><description><![CDATA[
<p>I am literally asking you to show me what you are talking about.<p>Who retired early? What did they make with Lisp? Do you have a link to that thing they made? Was it actually something special about Lisp, or did they just happen to use Lisp while making something that could have been done just as easily - or more easily - with something else? Is that real-world, specific, material outcome replicable by other people, and should they know about it?</p>
]]></description><pubDate>Mon, 08 Dec 2025 17:11:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46194837</link><dc:creator>hashmap</dc:creator><comments>https://news.ycombinator.com/item?id=46194837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46194837</guid></item></channel></rss>