<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: unoti</title><link>https://news.ycombinator.com/user?id=unoti</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 20:17:30 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=unoti" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by unoti in "ICE's Tool to Monitor Phones in Neighborhoods"]]></title><description><![CDATA[
<p>> Are there decent wifi communicators on the market? I looked into some Lora projects for this but they never seem to actually ship or get past prototype<p>Yes, 100%.  Meshtastic and Meshcore both do this, but I'd recommend Meshcore.  Here in the Seattle area we have a network that fairly reliably delivers messages from Canada through the Seattle metro area all the way down to Portland, fully encrypted with dual-key cryptography.  Meshcore uses a different strategy than Meshtastic, which lets it work more reliably.  To see what's happening in your area for Meshcore, see <a href="https://analyzer.letsmesh.net/map" rel="nofollow">https://analyzer.letsmesh.net/map</a><p>It's very fun to set up a repeater for under $50 and see a noticeable difference in the coverage area.  It's a fun technical project that combines the best of geocaching-style hiking/walking/driving, ham radio (but without a license requirement), antenna building, and more.  I'm getting acquainted with people in my neighborhood too, which is a bonus.<p>Figuring out what hardware to buy that'll actually work can be a challenge.  To get started, search Amazon for "heltec v3" and make sure you get something that includes a battery; you'll see 2-packs of radios for $60.  There's a web flasher at the above link that'll put the software on the radios for you.</p>
]]></description><pubDate>Thu, 08 Jan 2026 17:34:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46543848</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=46543848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46543848</guid></item><item><title><![CDATA[New comment by unoti in "If you're going to vibe code, why not do it in C?"]]></title><description><![CDATA[
<p>> Why vibe code with a language that has human convenience and ergonomics in view?<p>Recently I've been preparing a series that teaches how to use AI to assist with coding, and in preparation for that there's this thing I've coded several times in several different languages.  In the process, I've observed something frankly bizarre: I get a completely different experience doing it in Python vs C#.  In C#, the agent gets tripped up in all kinds of infrastructure detours and overengineering blind alleys.  But it doesn't do that when I use Python, Go, or Elixir.<p>My theory is that the agents' habits and patterns are influenced by each language's ecosystem and the code they typically read in that language.  This can have a big impact, positive or negative, on whether you're achieving your goals with the activity.</p>
]]></description><pubDate>Tue, 09 Dec 2025 19:10:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46209176</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=46209176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46209176</guid></item><item><title><![CDATA[New comment by unoti in "Show HN: Gemini Pro 3 imagines the HN front page 10 years from now"]]></title><description><![CDATA[
<p>> I often try running ideas past chat gpt. It's futile, almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.<p>Here's how to make it do that.  Instead of saying "I had idea X, but someone else was thinking idea Y instead. What do you think?", tell it "One of my people had idea X, and another had idea Y.  What do you think?"  The difference is vast when it doesn't think it's your idea.  Related: instead of asking it to tell you how good your code is, tell it to evaluate the code as someone else's, or tell it you're thinking about acquiring the company that owns this source and want a due diligence evaluation of risks, weak points, and engineering blind spots.</p>
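The reframing above can be sketched as two prompt builders. The function names and exact wording here are illustrative assumptions, not from any real tool:

```python
# Sketch of the "detached framing" trick: attribute the idea to a third
# party so the model critiques it instead of flattering its author.

def owned_prompt(idea):
    # Framing that invites flattery: the model knows it's *your* idea.
    return f"I had this idea: {idea}. What do you think?"

def detached_prompt(idea_a, idea_b):
    # Framing that invites honest comparison: both ideas belong to others.
    return (
        f"One of my people proposed: {idea_a}. "
        f"Another proposed: {idea_b}. "
        "Compare them critically and tell me which is weaker and why."
    )

print(detached_prompt("cache everything in Redis", "precompute nightly"))
```

Either string goes to the model as-is; only the attribution changes, but the tone of the answer changes with it.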
]]></description><pubDate>Tue, 09 Dec 2025 18:48:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46208873</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=46208873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46208873</guid></item><item><title><![CDATA[New comment by unoti in "Ask HN: How are Markov chains so different from tiny LLMs?"]]></title><description><![CDATA[
<p>If you're not sure about what a Markov Chain is, or if you've never written something from scratch that <i>learns</i>, take a look at this repo I made to try to bridge that gap and make it simple and understandable.  You can read it in a few minutes. It starts with nothing but Python, and ends with generating text based on the D&D Dungeon Master Manual.  <a href="https://github.com/unoti/markov-basics/blob/main/markov-basics.ipynb" rel="nofollow">https://github.com/unoti/markov-basics/blob/main/markov-basi...</a></p>
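For anyone who wants the flavor before opening the notebook, here is an independent minimal sketch of a word-level Markov chain in plain Python (not code from the linked repo, and the training sentence is made up):

```python
import random
from collections import defaultdict

def build_chain(text):
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8):
    # Walk the chain, picking a random observed successor each step.
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

chain = build_chain("the dragon sleeps the dragon wakes the knight flees")
print(generate(chain, "the"))
```

That's the whole trick: counts of what follows what, sampled forward. Everything else in the notebook is elaboration on this loop.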
]]></description><pubDate>Thu, 20 Nov 2025 21:37:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45998106</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=45998106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45998106</guid></item><item><title><![CDATA[New comment by unoti in "Analyzing the Performance of WebAssembly vs. Native Code"]]></title><description><![CDATA[
<p>> Please just use Docker in a microVM or whatever. It's 0% slower and 100% more mature.<p>Wasm has different characteristics than Docker containers, and as a result it can target different use cases and situations.  For example, imagine needing plugins for game mods or an actor system, where you need hundreds or thousands of them, with low-latency startup times, low memory footprints, and low overhead.  This is something you can do sanely with Wasm but not with containers.  Containers are great for lots of things, but not for every conceivable thing; there's still a place for Wasm.</p>
]]></description><pubDate>Wed, 05 Nov 2025 01:32:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45818005</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=45818005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45818005</guid></item><item><title><![CDATA[New comment by unoti in "Vibe engineering"]]></title><description><![CDATA[
<p>> Yes, I want to play in easy mode. Why would I want to play in hard mode?<p>Working alone can be much easier than managing others in a team. But also, working in a team can be far more effective if you can figure out how to pull it off.<p>It's much the same as working with agents.  Working alone, without the agents, it's easier to make exactly what you want happen.  But working with agents, you can get a lot more done a lot faster-- if you can figure out how to make it happen.  This is why you might want hard mode.</p>
]]></description><pubDate>Fri, 10 Oct 2025 00:22:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45534369</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=45534369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45534369</guid></item><item><title><![CDATA[New comment by unoti in "Vibe engineering"]]></title><description><![CDATA[
<p>> If everyone you’re managing is completely transparent and immediately tells you stuff, you’re playing in easy mode<p>So much this. There are many managers who are effective at managing people who do not need management.</p>
]]></description><pubDate>Wed, 08 Oct 2025 17:30:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45518556</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=45518556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45518556</guid></item><item><title><![CDATA[New comment by unoti in "Vibe engineering"]]></title><description><![CDATA[
<p>> effective management requires that you're able to trust that the person tells you when they've hit a snag or anything else you may need to know<p>This is what we shoot for, yes, but many of the most interesting war stories involve times when people should have been telling you about snags but weren't-- either because they didn't realize they were spinning their wheels, or because they were hoping they'd somehow magically pull off the win before the due date, or innumerable other variations on the theme.  People are most definitely not reliable about telling you things they should have told you.<p>> if you feel you have to review every line of code anyone on the team writes...<p>Somebody has to review the code, and step back and think about it. Not necessarily the manager, but someone does.</p>
]]></description><pubDate>Wed, 08 Oct 2025 17:16:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45518396</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=45518396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45518396</guid></item><item><title><![CDATA[New comment by unoti in "Vibe engineering"]]></title><description><![CDATA[
<p>> I'm not sure that having the patience to work with something with a very inconsistent performance and that frequently lies is an extension of existing development skills.<p>If you’ve been tasked with leadership of an engineering effort involving multiple engineers and stakeholders, you know that this is in fact a crucial part of the role the more senior you get.  It is much the same with people: know their limitations, show them a path to success, help them overcome their limitations by laying down the right abstractions and giving them the right coaching, and make it easier to do the right thing.  Most of the same approaches apply.  When we do these things with people it’s called leadership or management.  With agents, it’s context engineering.</p>
]]></description><pubDate>Wed, 08 Oct 2025 15:39:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45517372</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=45517372</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45517372</guid></item><item><title><![CDATA[New comment by unoti in "Dispelling misconceptions about RLHF"]]></title><description><![CDATA[
<p>> I can see why someone might say there's overlap between RL and SFT (or semi-supervised FT), but how is "traditional" SFT considered RL? What is not RL then? Are they saying all supervised learning is a subset of RL, or only if it's fine tuning?<p>Sutton and Barto define reinforcement learning as "learning what to do -- how to map situations to actions -- so as to maximize a numerical reward signal".  This is from their textbook on the topic.<p>That's a pretty broad definition.  But the general formulation of RL involves a state of the world and the ability to take different actions given that state.  In the context of an LLM, the state could be what has been said so far, and the action could be what token to produce next.<p>But as you noted, if you take such a broad definition of RL, tons of machine learning is also RL.  When people talk about RL they usually mean the more specific thing of letting a model go try things and then be corrected based on observations of how that turned out.<p>Supervised learning defines success by matching the labels. Unsupervised learning is about optimizing a known math function (for example, predicting the likelihood that words would appear near each other).  Reinforcement learning maximizes a reward function that may not be directly known by the model; it learns to optimize that function by trying things, observing the results, and getting a reward or penalty.</p>
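The "try things and get a reward or penalty" loop is easiest to see in a toy multi-armed bandit, the classic minimal RL setting. All the numbers here are made up for illustration; the agent never sees the true reward means, only sampled rewards:

```python
import random

def run_bandit(true_means, steps=5000, eps=0.1, seed=0):
    # Epsilon-greedy bandit: mostly exploit the best-looking arm,
    # occasionally explore a random one.
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n          # running estimate of each arm's reward
    for _ in range(steps):
        if rng.random() < eps:     # explore
            arm = rng.randrange(n)
        else:                      # exploit current best estimate
            arm = max(range(n), key=lambda a: estimates[a])
        # The environment hands back a noisy reward; this is the only
        # signal the agent ever sees.
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = run_bandit([0.2, 0.5, 0.9])
print(est)  # the third arm should end up with the highest estimate
```

Contrast with supervised learning: there is no labeled "correct arm" anywhere, just a reward signal the agent learns to chase.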
]]></description><pubDate>Sun, 17 Aug 2025 16:37:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44932847</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=44932847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44932847</guid></item><item><title><![CDATA[New comment by unoti in "Claude Sonnet 4 now supports 1M tokens of context"]]></title><description><![CDATA[
<p>> Having spent a couple of weeks on Claude Code recently, I arrived to the conclusion that the net value for me from agentic AI is actually negative.
> For me it’s meant a huge increase in productivity, at least 3X.
> How do we reconcile these two comments? I think that's a core question of the industry right now.<p>Every success story with AI coding involves giving the agent enough context to succeed on a task it can see a path to success on.  And every failure story is a situation where it didn't have enough context to see a path to success.  Think about what happens with a junior software engineer: you give them a task and they either succeed or fail.  If they succeed wildly, you give them a more challenging task.  If they fail, you give them more guidance, more coaching, and less challenging tasks, with more personal intervention from you to break the work down into achievable steps.<p>As models and tooling become more advanced, the place where that balance lies shifts.  The trick is to ride that sweet spot of task breakdown, guidance, and supervision.</p>
]]></description><pubDate>Tue, 12 Aug 2025 19:19:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44880691</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=44880691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44880691</guid></item><item><title><![CDATA[New comment by unoti in "The bitter lesson is coming for tokenization"]]></title><description><![CDATA[
<p>I imagine there’s actually combinatorial power in there, though. If we imagine embedding something with only 2 dimensions, x and y, we can still encode an unlimited number of concepts, because distinct clusters or neighborhoods can be spread out over a large 2D map.  With more dimensions there's of course far more room.</p>
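A toy sketch of that idea, with entirely made-up coordinates: even in 2D, distinct concept neighborhoods stay separable, and a point can be assigned to a concept by nearest-centroid lookup:

```python
import math

# Hypothetical cluster centres for four concept "neighborhoods" in a
# 2-d embedding space. Real embeddings have hundreds of dimensions;
# the point is that even two suffice for many well-separated clusters.
centroids = {
    "animals":  (0.0, 0.0),
    "vehicles": (10.0, 0.0),
    "foods":    (0.0, 10.0),
    "tools":    (10.0, 10.0),
}

def nearest_concept(point):
    # Assign a 2-d embedding to whichever cluster centre is closest.
    return min(centroids, key=lambda name: math.dist(point, centroids[name]))

print(nearest_concept((9.2, 0.8)))  # lands in the "vehicles" neighborhood
```

Nothing stops you from tiling the plane with thousands of such centroids; higher dimensions just make the neighborhoods easier to keep apart.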
]]></description><pubDate>Tue, 24 Jun 2025 15:40:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44367418</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=44367418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44367418</guid></item><item><title><![CDATA[New comment by unoti in "Machine Code Isn't Scary"]]></title><description><![CDATA[
<p>> Is it enough to play Human Resource Machine!?<p>That is fun.  But this one truly is enough: Turing Complete.  You start with boolean logic gates and progressively work your way up to building your own processor, create your own assembly language, and use it to do things like solve mazes and more.  Super duper fun.<p><a href="https://store.steampowered.com/app/1444480/Turing_Complete/" rel="nofollow">https://store.steampowered.com/app/1444480/Turing_Complete/</a></p>
]]></description><pubDate>Wed, 04 Jun 2025 21:24:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44185707</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=44185707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44185707</guid></item><item><title><![CDATA[New comment by unoti in "Machine Code Isn't Scary"]]></title><description><![CDATA[
<p>This is the video I wished I had seen when I was a kid, feeling like assembly was a dark art that I was too dumb to be able to do. Later in life I did a ton of assembly professionally on embedded systems. But as a kid I thought I wasn’t smart enough.  This idea is poison, thinking you’re not smart enough, and it ruins lives.<p><a href="https://youtu.be/ep7gcyrbutA?si=8HiMqH2mMwsJRNDg" rel="nofollow">https://youtu.be/ep7gcyrbutA?si=8HiMqH2mMwsJRNDg</a></p>
]]></description><pubDate>Wed, 04 Jun 2025 15:37:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44181905</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=44181905</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44181905</guid></item><item><title><![CDATA[New comment by unoti in "AI Horseless Carriages"]]></title><description><![CDATA[
<p>> Why can’t the LLM just learn your writing style from your previous emails to that person?<p>It totally could. For one thing you could fine-tune the model, but I don't think I'd recommend that.  For this specific use case, imagine an addition to the prompt that says """To help you with additional context and writing style, here are snippets of recent emails Pete wrote to {recipient}:
---
{recent_email_snippets}
"""</p>
]]></description><pubDate>Thu, 24 Apr 2025 02:07:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43778644</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=43778644</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43778644</guid></item><item><title><![CDATA[New comment by unoti in "Viral ChatGPT trend is doing 'reverse location search' from photos"]]></title><description><![CDATA[
<p>> LLMs lie about their reasoning<p>People do this all the time too!  Brain scans show that people make up their minds quickly, with activations in the part of the brain that makes snap judgments, and only a fraction of a second later does the part associated with rational reasoning begin to activate.  People in sales have long known this, wanting to give people emotional reasons to make the right decision while also giving them the rational data needed to support it. [1]<p>I remember seeing this illustrated for ourselves when our team of 8 or so people was making a big ERP purchasing decision between Oracle ERP and Peoplesoft long ago.  We had divided what our application needed to do into over 400 feature areas, and for each had developed a very structured set of evaluation criteria.  Then we put weights on each of those to express how important it was to us.  We had a big spreadsheet to rank everything.<p>But along the way of the 9-month sales process, we really enjoyed working with the Oracle sales team a lot more.  We felt like we'd be able to work with them better.  In the end, we ran all the numbers, and Peoplesoft came out on top.  And we sat there, soberly looked each other in the eyes, and said "We're going with Oracle." (Actually, I remember one lady on the team, when asked for her vote, said, "It's gotta be the big O.")<p>Salespeople know that ultimately it's a gut decision, even if the people buying things don't realize it themselves.<p>[1] <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC6310859/" rel="nofollow">https://pmc.ncbi.nlm.nih.gov/articles/PMC6310859/</a></p>
]]></description><pubDate>Fri, 18 Apr 2025 13:53:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=43728114</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=43728114</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43728114</guid></item><item><title><![CDATA[New comment by unoti in "AI can't stop making up software dependencies and sabotaging everything"]]></title><description><![CDATA[
<p>I hear you. But consider this: substitute “LLM” in what you said above with “coworker” or “direct report”.<p>Does having a coworker automatically make a person dumb and no longer willing or able to grow? Does an engineer who becomes a manager instantly lose their ability to work or grow or learn? Sometimes, yes I know, but it’s not a foregone conclusion.<p>Agents are a new tool in our arsenal and we get to choose how we use them and what it will do for us, and what it will do to us, each as individuals.</p>
]]></description><pubDate>Sat, 12 Apr 2025 22:26:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43668392</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=43668392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43668392</guid></item><item><title><![CDATA[New comment by unoti in "AI can't stop making up software dependencies and sabotaging everything"]]></title><description><![CDATA[
<p>When using AI, you are still the one responsible for the code. If the AI writes code and you don't read every line, why did it make its way into a commit?  If you don't understand every line it wrote, what are you doing?  If you don't actually love every line it wrote, why didn't you make it rewrite the code with some guidance, or rewrite it yourself?<p>The situation described in the article is similar to having junior developers we don't trust committing code, then releasing it to production and blaming the failure on them.<p>If a junior on the team does something dumb and causes a big failure, I wonder where the senior engineers and managers were during that situation.  We closely supervise and direct the work of those people until they've built the skills and ways of thinking needed for that kind of autonomy.  There's a reason we have multiple developers of varying levels of seniority: trust.<p>We build relationships with people, and that is why we extend them trust.  We don't extend trust to people until they have demonstrated they are worthy of it over a period of time.  At the heart of relationships is that we talk to each other and listen to each other, grow and learn about each other, are coachable, and get onto the same page with each other.  Although there are ways to coach LLMs and fine-tune them, LLMs don't do nearly as good a job at this kind of growth and trust building as humans do.  LLMs are super useful and absolutely should be worked into the engineering workflow, but they don't deserve the kind of trust some people erroneously give them.<p>You still have to care deeply about your software.  A huge part of engineering is, and always has been, about building reliable systems out of unreliable components.  To me this story points to process-improvement gaps and ways of thinking people need to change, more than it points to the weak points of AI.</p>
]]></description><pubDate>Sat, 12 Apr 2025 15:38:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=43665296</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=43665296</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43665296</guid></item><item><title><![CDATA[New comment by unoti in "Erlang's not about lightweight processes and message passing (2023)"]]></title><description><![CDATA[
<p>I came here looking for information about why Ericsson stopped using Erlang, and for more information about Joe's firing.<p>The short answer seems to be that Ericsson pivoted to Java for new projects, which marginalized Erlang.  Joe and colleagues then formed Bluetail in 1998, which was bought by Nortel.  Nortel was a telecom giant making up about a third of the value of the Toronto Stock Exchange.  In 2000 Nortel's stock reached $125 per share, but by 2002 it had fallen below $1.  This was part of the dot-com crash, and Nortel was hit particularly hard because the bubble's burst coincided with a big downturn in telecom spending.<p>It seems safe to read Joe's layoff as a "his unit was the first to slip beneath the waves on a sinking ship" situation: Nortel laid off 60,000 employees, more than two thirds of its workforce.  The layoff was a move of desperation, not a sign that Joe wasn't pulling his weight or that his business unit was ineffective.</p>
]]></description><pubDate>Sat, 12 Apr 2025 01:08:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=43660383</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=43660383</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43660383</guid></item><item><title><![CDATA[New comment by unoti in "Show HN: Connecting an IBM 3151 terminal to a mainframe [video]"]]></title><description><![CDATA[
<p>> I never used much IBM hardware, and always assumed that their terminals used EBCDIC and not ASCII.<p>Generally this is true.  But around this era I used a ton of 3151 terminals on AIX, IBM's version of Unix, connected to the RS/6000 line of machines.  Good times!  These machines, being Unix machines, talked ASCII to their terminals.  There was a whole line of port concentrators that would let you connect something like 32 ASCII terminals to a little block about the size of a modern ethernet router, and you could connect 4 or so of these blocks to a device in the main machine.</p>
]]></description><pubDate>Tue, 08 Apr 2025 20:14:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43626013</link><dc:creator>unoti</dc:creator><comments>https://news.ycombinator.com/item?id=43626013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43626013</guid></item></channel></rss>