<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: AStrangeMorrow</title><link>https://news.ycombinator.com/user?id=AStrangeMorrow</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 08:46:08 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=AStrangeMorrow" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by AStrangeMorrow in "Git commands I run before reading any code"]]></title><description><![CDATA[
<p>I also like meaningful commit names, though I am sometimes guilty of “hope this works now” commits; they always follow a first fix that, it turns out, didn’t cut it.<p>I work on a lot of 2D systems, and often the only way to debug is to plot thousands of results and visually check they behave as expected. Sometimes I will fix an issue, look at the results, and it seems resolved (it was present in, say, 100 cases), only to realize there are actually still 5 cases where it persists. Sure, I could amend the last commit, but I actually keep it as a trace of “careful, this first version mostly did the job, but not quite.”</p>
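The two options mentioned above can be sketched in a throwaway repo (everything here is hypothetical: the commit messages and temp repo are made up for illustration, and this assumes a `git` binary is on the PATH):

```python
# Sketch: keeping a follow-up fix commit as a trace, instead of
# amending the first, partial fix away. Uses a throwaway temp repo.
import subprocess, tempfile

repo = tempfile.mkdtemp()

def git(*args):
    # Run git in the temp repo with a throwaway identity.
    subprocess.run(["git", "-C", repo, "-c", "user.email=dev@example.com",
                    "-c", "user.name=dev", *args],
                   check=True, capture_output=True)

git("init", "-q")
# First fix: looked complete after eyeballing ~100 plotted cases.
git("commit", "--allow-empty", "-m", "fix: clamp degenerate 2D cases")
# Option A would rewrite it: git("commit", "--amend", "-m", "...")
# Option B keeps history honest about the partial first attempt:
git("commit", "--allow-empty", "-m",
    "fix: handle 5 remaining cases missed by previous fix")

count = subprocess.run(["git", "-C", repo, "rev-list", "--count", "HEAD"],
                       capture_output=True, text=True, check=True).stdout.strip()
print(count)  # → 2
```

With option B, `git log` later shows both the mostly-right fix and its follow-up, which is exactly the trace being described.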
]]></description><pubDate>Wed, 08 Apr 2026 19:26:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695053</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47695053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695053</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "Ask HN: Should AI credits be refunded on mistakes?"]]></title><description><![CDATA[
<p>I mean, “mistakes” can be hard to define. IMHO responsibility is shared between the LLM, the LLM user, and the code itself.<p>Did it make a mistake because it didn’t follow instructions properly or hallucinated some content?<p>Did it make a mistake because the prompt was unclear, open to interpretation, or plain wrong?<p>Did it make a mistake because it lacked some context? Or had too much context and started getting confused?<p>Is not automatically handling edge cases, when that was not requested, a mistake?<p>I am not just trying to defend LLMs; in many cases they make obvious mistakes and just don’t follow my arguably clear instructions. But sometimes it is not so clear cut. Maybe I didn’t link a relevant file (you can argue it could have looked for it), maybe my prompt just wasn’t that clear, etc.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:16:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47694910</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47694910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47694910</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "The Future of Everything Is Lies, I Guess"]]></title><description><![CDATA[
<p>I still have mixed feelings about LLMs.<p>If I take the example of code (though this extends to many domains), it can sometimes produce near-perfect architecture and implementation if I give it enough detail about the technical specifics and pitfalls, turning an 8h coding job into 1h of review work.<p>On the other hand, it can be very wrong while acting certain it is right. Just yesterday Claude tried gaslighting me into accepting that the bug I was seeing came from a piece of code with already strong guardrails, and it was adamant that the part I suspected could in no way cause the issue. Turns out I was right, but I was starting to doubt myself.</p>
]]></description><pubDate>Wed, 08 Apr 2026 16:05:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47692100</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47692100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47692100</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "iNaturalist"]]></title><description><![CDATA[
<p>I mean, I do agree, and on iNat I can clearly see my house and the houses of a few other people in the neighborhood. However, in the state I live in you can easily find the current owner information for a given house, and, since we bought the house, our names.<p>I guess it is different for people renting, and you could also track a specific person’s posts to see when they are posting away from home, for example. But as far as revealing your home address goes, sadly there are many other ways in a lot of cases.</p>
]]></description><pubDate>Sat, 04 Apr 2026 22:18:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47644098</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47644098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47644098</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "4D Doom"]]></title><description><![CDATA[
<p>I imagine they are talking about 4D Golf by CodeParade, as seen here: <a href="https://youtu.be/y53UNskR-zU?si=iUfCkxYqkACx955t" rel="nofollow">https://youtu.be/y53UNskR-zU?si=iUfCkxYqkACx955t</a><p>Steam link: <a href="https://store.steampowered.com/app/2147950/4D_Golf" rel="nofollow">https://store.steampowered.com/app/2147950/4D_Golf</a><p>The person goes over quite a few technical details on their YouTube channel, though they talk about a bunch of other coding experiments too.</p>
]]></description><pubDate>Tue, 31 Mar 2026 23:08:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47594674</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47594674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47594674</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "We haven't seen the worst of what gambling and prediction markets will do"]]></title><description><![CDATA[
<p>I am not sure, but you don’t technically have to bet on an assassination. You can bet on an event that would happen as a result of said assassination: X won’t get re-elected, company Y’s CEO will change in 2027, this is artist Z’s last tour, athlete K won’t participate in this event, etc.</p>
]]></description><pubDate>Thu, 26 Mar 2026 20:42:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47535447</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47535447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47535447</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "We haven't seen the worst of what gambling and prediction markets will do"]]></title><description><![CDATA[
<p>The issue is the combined risk of insider trading and the bias toward disaster-centric, or at least event-centric, betting. If you have the means to create an “out of the ordinary” event, you have a strong incentive to make it happen and to bet on it. These must be controllable events, so not natural phenomena or complex systems. On the gentler side it would be sports fixing, which has always existed. On the worse side it would be causing wars, making economic decisions that impact many, betting on people’s deaths, and so on. These kinds of things are seemingly already happening to a certain degree.</p>
]]></description><pubDate>Thu, 26 Mar 2026 20:31:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47535332</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47535332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47535332</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "French e, è, é, ê, ë – what's the difference?"]]></title><description><![CDATA[
<p>Arguably so are “aim/ein etc.” and “in”, though the distinction is more dialect-dependent and more subtle.<p>The former for me have a bit more exhale and a rounder sound, while the “in” ones are a tad drier.<p>For example, “fin” and “faim” are distinct for me. However, “faim” and “feint” sound the same.</p>
]]></description><pubDate>Thu, 26 Mar 2026 18:09:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47533784</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47533784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47533784</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "Antimatter has been transported for the first time"]]></title><description><![CDATA[
<p>I am curious how much energy needs to be expended to contain the antimatter. Say the matter/antimatter is to be used for propulsion or energy generation: can we reach a threshold where we are actually energy-positive?</p>
]]></description><pubDate>Wed, 25 Mar 2026 17:22:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47520397</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47520397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47520397</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "Is anybody else bored of talking about AI?"]]></title><description><![CDATA[
<p>Yes, it feels like a full-time job just to keep up. And I’ve been in AI for close to 10 years, so I feel like I have to keep up at least a minimum.<p>Another thing for me is that it has gotten a lot harder for small teams with few resources, let alone one person, to release anything that can really compete with what the big players put out.<p>Quite a few years back I was working on word2vec models/embeddings. With enough time and limited resources I was able, through careful data collection and preparation, to produce models that outperformed existing embeddings for our fairly generic data-retrieval tasks. You could download models from Facebook (fastText) or other models available through gensim and other tools, and they were often larger embeddings (e.g. 1000 dimensions vs 300 for mine), but they would really underperform. And when evaluating on the general benchmarks that existed back then, we were basically equivalent to the best models in English and French, if not a little better at times. Similarly, later, some colleagues built a new architecture inspired by BERT after it came out that again outperformed any existing models we could find.<p>But these days I feel like there is not much I can do in NLP. Even to fine-tune or distill the larger models, you need a very beefy setup.</p>
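The retrieval-style evaluation described above boils down to cosine-similarity nearest-neighbor lookups over the embedding table. A toy sketch (the 3-d vectors and words here are entirely made up for illustration; the real comparisons loaded 300-d and 1000-d gensim/fastText models):

```python
# Toy nearest-neighbor check of the kind used to compare embedding models.
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical tiny "embedding table"; real ones have 100k+ words, 300+ dims.
emb = {
    "paris":  [0.9, 0.1, 0.0],
    "france": [0.8, 0.2, 0.1],
    "banana": [0.0, 0.9, 0.4],
}

def nearest(word):
    # Most similar other word under cosine similarity.
    return max((w for w in emb if w != word), key=lambda w: cosine(emb[word], emb[w]))

print(nearest("paris"))  # → france
```

A benchmark then scores a model by how often `nearest` (or a top-k variant) returns the expected neighbor, which is how a smaller, well-trained 300-d model can beat a generic 1000-d one.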
]]></description><pubDate>Tue, 24 Mar 2026 21:33:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47509701</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47509701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47509701</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "What young workers are doing to AI-proof themselves"]]></title><description><![CDATA[
<p>It might be surprising, but I am kind of willing to believe it. Since we bought our house, we have had quite a bit of work done by professionals. But whenever I can, I do things myself.<p>I have had multiple companies quote me $300-500, depending on the job, for things that take me maybe 2-3 hours total, including learning about it (it will be faster next time), getting the materials, and doing the work.<p>When you have a few of these a month, they add up. It is usually nothing for a month and then 4-5 things to fix or improve the next.</p>
]]></description><pubDate>Mon, 23 Mar 2026 14:31:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47490086</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47490086</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47490086</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "Walmart: ChatGPT checkout converted 3x worse than website"]]></title><description><![CDATA[
<p>I remember having to describe a standard model for predicting online shopping behavior for my ML class exam in university. That was close to 10 years ago now.<p>I also remember a teacher telling us the story of a company figuring out a woman was pregnant from her shopping behavior and pushing relevant recommendations, prompting people around her, like her dad, to find out she was pregnant.</p>
]]></description><pubDate>Mon, 23 Mar 2026 14:12:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47489835</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47489835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47489835</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "If AI brings 90% productivity gains, do you fire devs or build better products?"]]></title><description><![CDATA[
<p>Idk, basically everyone in my org has seen some good value out of it. We have people complaining about limitations, but they would still rather have the tooling than not.<p>For me the main difference is that now some people can explain what their code does, while others can only explain what it is supposed to achieve.</p>
]]></description><pubDate>Sun, 22 Mar 2026 14:55:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47478171</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47478171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47478171</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "Ask HN: AI productivity gains – do you fire devs or build better products?"]]></title><description><![CDATA[
<p>For me, the main thing is to never have it write anything based only on the goal (what the end result should look like and how it should behave), but always also on the implementation details (and the coding practices that I like).<p>Sure, it is not as fast to understand as code I wrote myself. But at least I mostly need to confirm how it implemented what I asked, not figure out WHAT it even decided to implement in the first place.<p>And in my org, people move around projects quite a bit. It hasn’t been uncommon for me to jump into projects with 50k+ lines of code a few times a year, to help implement a tricky feature or to optimize things that run too slow. That’s a lot of code to understand. Depending on who wrote it, sometimes it is simple: one or two files to understand, clean code. Sometimes it is an interconnected mess, and IMHO often way less organized than AI-generated code.<p>And the same goes for the review process: lots of new code to understand. At least with AI you are fed the changes at a slower pace.</p>
]]></description><pubDate>Sun, 22 Mar 2026 14:52:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47478125</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47478125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47478125</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>We do lose something, but I really still see it as an extension of autocomplete.<p>There are pieces of code I wrote that I was quite proud of: well documented, clear, yet with clever designs and algorithms.<p>But what always mattered most to me was designing the solution. The coding part, even though I take some pride in the code I write, was mostly a means to an end. Especially once I start having to add things like data validation, API layers, plotting of analysis results, and so many other things that are time-consuming but easy and, IMHO, not very rewarding.</p>
]]></description><pubDate>Sat, 14 Mar 2026 21:54:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47381648</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47381648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47381648</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "Wired headphone sales are exploding"]]></title><description><![CDATA[
<p>Yeah, for me the main sell of wireless is mobility/freedom of movement.<p>I can use them while charging my phone or working out, play a video while cooking and moving around the kitchen, or listen while watching TV or playing a game on the TV where a cable can’t reach.<p>However, when static, I use wired. That’s mostly when on the computer, but like many people here, I assume that’s a good part of the day.</p>
]]></description><pubDate>Sat, 14 Mar 2026 16:20:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47378209</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47378209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378209</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "1M context is now generally available for Opus 4.6 and Sonnet 4.6"]]></title><description><![CDATA[
<p>Prompt quality does matter, but at some point context size matters too.<p>I’ve had things like a system that holds a collection of procedural systems. I would say “replace the following set of defaults that are passed all around for system X (list of files) and in the manager (file) with a config”, and it would do that, but then I’d suddenly see it go “wait, mu and projection distance are also present in systems Y and Z, let me replace those with a config too, with the same values”, when systems Y and Z use a different set of optimized values and were clearly outside the scope.<p>I never had those kinds of mistakes happen when dealing with small contexts, but with larger contexts (multiple files, long “thinking” sequences) it does happen sometimes.<p>There were definitely times when I thought “oh well, my bad, I should have clarified NOT to also change that other part”, all the while thinking that no human would have thought to change both.</p>
]]></description><pubDate>Sat, 14 Mar 2026 16:10:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47378093</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47378093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378093</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "1M context is now generally available for Opus 4.6 and Sonnet 4.6"]]></title><description><![CDATA[
<p>Yeah, absolutely. At this point I also start new chats after 3-4 prompts, especially with thinking models that produce so many tokens.<p>Usually things go smoothly, but sometimes I have situations like: “please add feature X, it needs to have ABCD” -> does ABC correctly but D wrong -> “here is how to fix D” -> fixes D but breaks AB -> “remember I also want AB this way, you broke it” -> fixes AB but removes C, and so on.</p>
]]></description><pubDate>Sat, 14 Mar 2026 15:58:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377958</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47377958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377958</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "RFC 454545 – Human Em Dash Standard"]]></title><description><![CDATA[
<p>I know, I find myself in the silly situation where I have to adjust my writing style because I write like an AI: I have always loved my bullet points and dashes.<p>At work I also tended to send slightly longer but structured answers. I found that it allowed people to skip over irrelevant sections and focus on what the changes were, e.g. a list of changes in the format: bullet point -> change name -> change details. That way people could easily focus on the changes they cared about, instead of skipping a dense paragraph entirely.<p>Hell, I have even found myself wanting to add a typo just to give a more human feel, or to skip the final “.” to make my text imperfect and more human. That’s getting silly.</p>
]]></description><pubDate>Tue, 10 Mar 2026 15:52:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47324937</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47324937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47324937</guid></item><item><title><![CDATA[New comment by AStrangeMorrow in "Yann LeCun raises $1B to build AI that understands the physical world"]]></title><description><![CDATA[
<p>Yeah. I feel like, as with many projects, the last 20% takes 80% of the time, and IMHO we are not in the last 20% yet.<p>Sure, LLMs are getting better and better, and at least for me more and more useful and more and more correct. They are arguably better than humans at many tasks, yet terribly lagging behind at some others.<p>Coding-wise, one of the things it does “best”, it still has many issues. For me the biggest are a lack of initiative and a lack of reliable memory. When I use it to write code, the first often manifests as sticking to a suboptimal yet overly complex approach. The lack of memory shows in my having to keep reminding it of edge cases (else it often breaks functionality), or to stop reinventing the wheel instead of using functions/classes already implemented in the project.<p>All of that can be mitigated by careful prompting, but no matter the claims about information-recall accuracy, I still find that even with that information in the prompt it is quite unreliable.<p>And more generally, the simple fact that when you talk to one, the only way to “store” these memories is externally (i.e. not by updating the weights) is kind of like dealing with someone who can’t retain memories and has to keep writing things down to even have a small chance of coping. I get that updating the weights is possible in theory, just not practical, still.</p>
]]></description><pubDate>Tue, 10 Mar 2026 14:54:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47324104</link><dc:creator>AStrangeMorrow</dc:creator><comments>https://news.ycombinator.com/item?id=47324104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47324104</guid></item></channel></rss>