<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bithive123</title><link>https://news.ycombinator.com/user?id=bithive123</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 08:32:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bithive123" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bithive123 in "MacBook M5 Pro and Qwen3.5 = Local AI Security System"]]></title><description><![CDATA[
<p>Before anyone buys a TPU for Frigate, try OpenVINO on a cheap Intel N100 CPU.  My mini-PC Frigate installation handles 5 cameras easily.</p>
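<p>For anyone who wants to try it, the detector section of a Frigate config for OpenVINO looks roughly like this (sketched from memory of the Frigate docs; the exact keys and bundled model path vary by version, so check the current docs):</p>

```yaml
detectors:
  ov:
    type: openvino
    device: CPU   # or GPU to use the Intel iGPU via OpenVINO

model:
  # the Frigate container ships a converted SSDLite model; path may differ by version
  path: /openvino-model/ssdlite_mobilenet_v2.xml
```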
]]></description><pubDate>Fri, 20 Mar 2026 17:35:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47457896</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=47457896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47457896</guid></item><item><title><![CDATA[New comment by bithive123 in "You are the scariest monster in the woods"]]></title><description><![CDATA[
<p>I struggle to understand how people attribute things we ourselves don't really understand (intelligence, intent, subjectivity, mind states, etc.) to a computer program just because it produces symbolic outputs that we like.  We made it do that, because we as the builders are the arbiters of what constitutes more or less desirable output.  It seems dubious to me that we would recognize super-intelligence if we saw it, as recognition implies familiarity.<p>Unless and until "AGI" becomes an entirely self-hosted phenomenon, you are still observing human agency: that of whoever designed, built, and trained the AI and then delegated the decision in the first place.  You cannot escape this fact.  If profit could be made by shaking a magic 8-ball and then doing whatever it says, you wouldn't say the 8-ball has agency.<p>Right now it's a machine that produces outputs that resemble things humans make.  When we're not using it, it's like any other program you're not running.  It doesn't exist in its own right; we just anthropomorphize it because of the way conventional language works.  If an LLM someday initiates contact on its own without anyone telling it to, I will be amazed.  But there's no reason to think that will happen.</p>
]]></description><pubDate>Wed, 15 Oct 2025 18:48:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45596837</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=45596837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45596837</guid></item><item><title><![CDATA[New comment by bithive123 in "Show HN: AI toy I worked on is in stores"]]></title><description><![CDATA[
<p>"Ho ho ho!  I'm sorry, but our time is up.  If you'd like to keep talking to me, please provide a credit card number.  Merry Christmas!"</p>
]]></description><pubDate>Mon, 13 Oct 2025 23:00:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45574228</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=45574228</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45574228</guid></item><item><title><![CDATA[New comment by bithive123 in "Inflation erased U.S. income gains last year"]]></title><description><![CDATA[
<p>"Inflation" simply refers to a rise in general price levels.  The cause of inflation is known: someone sets a price.<p>There isn't a single reason why someone might raise a price.  It could be that they have some ideology about the size of the money supply (i.e. "printing money") or it could be that the costs of their inputs went up ("inflation") due to tariffs, or other supply chain problems.  Or it could be a cynical bet that the market would bear a higher price ("using inflation as an excuse").<p>Blaming inflation on this-or-that cause is most definitely a political rather than theoretical exercise.</p>
]]></description><pubDate>Tue, 09 Sep 2025 22:46:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45190450</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=45190450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45190450</guid></item><item><title><![CDATA[New comment by bithive123 in "Protobuffers Are Wrong (2018)"]]></title><description><![CDATA[
<p>I don't know if the author is right or wrong; I've never dealt with protobufs professionally.  But I recently implemented them for a hobby project and it was kind of a game-changer.<p>At some stage with every ESP or Arduino project, I want to send and receive  data, i.e. telemetry and control messages.  A lot of people use ad-hoc protocols or HTTP/JSON, but I decided to try the nanopb library.  I ended up with a relatively neat solution that just uses UDP packets.  For my purposes a single packet has plenty of space, and I can easily extend this approach in the future.  I know I'm not the first person to do this but I'll probably keep using protobufs until something better comes along, because the ecosystem exists and I can focus on the stuff I consider to be fun.</p>
]]></description><pubDate>Fri, 05 Sep 2025 19:19:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45142524</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=45142524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45142524</guid></item><item><title><![CDATA[New comment by bithive123 in "Building AI products in the probabilistic era"]]></title><description><![CDATA[
<p>Strictly speaking, yes, but there is so much variability introduced by prompting that even keeping the seed value static doesn't change the "slot machine" feeling, IMHO.  While prompting is something one can get better at, you're still just rolling the dice and waiting to see whether the output is delightful or dismaying.</p>
]]></description><pubDate>Thu, 21 Aug 2025 23:53:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44979609</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44979609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44979609</guid></item><item><title><![CDATA[New comment by bithive123 in "How to stop feeling lost in tech: the wafflehouse method"]]></title><description><![CDATA[
<p>I went through a very similar thing at around the same age, and one of the insights that really helped me was meditating on impermanence, and cultivating more mental proprioception (awareness of one's subtle thoughts, "mindfulness", whatever you want to call it).<p>Put simply, it's fine to have goals.  But chasing achievement can be unfulfilling.  Why?  Because all experiences are fleeting.  Even if you train for 5 years and win the gold medal, you get to stand on the podium for a few minutes and then life goes on.<p>It's easy to get people to agree with this intellectually, but you have to really see it on a deep level.  There is nothing really to achieve in life.  We make goals and cast them out ahead of ourselves into the future, but if that future comes, it doesn't last.  We put ourselves on a treadmill of achievement and becoming, then wonder why we feel stressed.<p>Instead of imagining some future state of completion, work on being aware of how your mind is moving, all the time.  Don't chase goals as a way of disproving some fundamental negative assumption about yourself.  Don't make happiness contingent on external conditions.</p>
]]></description><pubDate>Thu, 21 Aug 2025 22:05:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44978665</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44978665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44978665</guid></item><item><title><![CDATA[New comment by bithive123 in "Ask HN: Do you think programming as a job will end soon and if so, how soon?"]]></title><description><![CDATA[
<p>I think that if replacing programmers with "AI" was going well, the people doing it wouldn't shut up about it.<p>So no, I don't think programming as a job will end soon, because there's no reason to think that it could.  I've seen no plausible story about how that would even happen.<p>I do want to see big expensive products being built and released entirely by C-suites after laying off all their programmers/writers/directors/people who actually know how to do stuff.  That should put an end to this madness pretty quickly.</p>
]]></description><pubDate>Thu, 21 Aug 2025 21:50:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44978510</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44978510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44978510</guid></item><item><title><![CDATA[New comment by bithive123 in "Building AI products in the probabilistic era"]]></title><description><![CDATA[
<p>It became evident to me while playing with Stable Diffusion that it's basically a slot machine.  A skinner box with a variable reinforcement schedule.<p>Harmless enough if you are just making images for fun.  But probably not an ideal workflow for real work.</p>
]]></description><pubDate>Thu, 21 Aug 2025 21:28:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44978303</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44978303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44978303</guid></item><item><title><![CDATA[New comment by bithive123 in "What does Palantir actually do?"]]></title><description><![CDATA[
<p>War will happen as long as <i>ignorance</i> exists.  Ignorance may exist as long as humans exist, but let's not pretend that humans are not responsible for wars.<p>I take your general points.  There is a saying "there is no right or wrong, but right is right and wrong is wrong."<p>Violence is the unnecessary use of force.  It may occasionally be necessary to kill in self defense, but it is always a tragedy.  Killing people is both bad and a choice. This is actually a harder reality to face than "people be violent".</p>
]]></description><pubDate>Thu, 14 Aug 2025 23:09:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=44906800</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44906800</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44906800</guid></item><item><title><![CDATA[New comment by bithive123 in "US national debt reaches a record $37T, the Treasury Department reports"]]></title><description><![CDATA[
<p>On a long enough timeline we're all dead.  In the near term, expect a lot of stupid decisions and huffing and puffing based on an ideological framing of what the national debt is.<p>I am not an economist or finance guy, but I have noticed a lot of debt hysteria from people who don't seem to understand basic accounting.  That is, one party's asset is another party's liability.  You cannot have buying without selling, and so on.  Your mortgage is a liability for you, but an asset for your bank.  Your checking account is an asset for you, but a liability for your bank.<p>I'm not saying the debt can grow infinitely, but clearly if some of that debt is held as assets by the non-government (most of the world, including you and me) then paying off that debt means a wealth transfer from the non-government back to the government.<p>This isn't necessarily in my interests.  If the government has to claw those dollars back from somewhere, I'd rather they start with the richest people.  But that doesn't happen, for obvious reasons.</p>
]]></description><pubDate>Wed, 13 Aug 2025 21:42:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44894208</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44894208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44894208</guid></item><item><title><![CDATA[New comment by bithive123 in "LLMs aren't world models"]]></title><description><![CDATA[
<p>Generally the experience of insight is prior to any discursive expression.  We put our insights into words; they do not arise as such.</p>
]]></description><pubDate>Wed, 13 Aug 2025 16:15:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44890395</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44890395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44890395</guid></item><item><title><![CDATA[New comment by bithive123 in "LLMs aren't world models"]]></title><description><![CDATA[
<p>There are no "things" in the universe.  You say this wave and that photon exist and represent this or that, but all of that is conceptual overlay.  Objects are parts of speech, reality is undifferentiated quanta.  Can you point to a particular place where the ocean becomes a particular wave?  Your comment already implies an understanding that our mind is behind all the hypothetical lines; we impose them, they aren't actually there.</p>
]]></description><pubDate>Wed, 13 Aug 2025 01:49:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44883862</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44883862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44883862</guid></item><item><title><![CDATA[New comment by bithive123 in "LLMs aren't world models"]]></title><description><![CDATA[
<p>I clearly do not meet the requirements to use the analogy.<p>I am hearing the term super intelligence a lot and it seems to me the only form that would take is the machine spitting out a bunch of symbols which either delight or dismay the humans.  Which implies they already know what it looks like.<p>If this technology will advance science or even be useful for everyday life, then surely the propositions it generates will need to hold up to reality, either via axiomatic rigor or empirically.  I look forward to finding out if that will happen.<p>But it's still just a movement from the known to the known, a very limited affair no matter how many new symbols you add in whatever permutation.</p>
]]></description><pubDate>Tue, 12 Aug 2025 22:27:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44882505</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44882505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44882505</guid></item><item><title><![CDATA[New comment by bithive123 in "LLMs aren't world models"]]></title><description><![CDATA[
<p>Nor am I.  I'm not claiming an LLM is a formal system, but it is mechanical and operates on symbols.  It can't deal in anything else.  That should temper some of the enthusiasm going around.</p>
]]></description><pubDate>Tue, 12 Aug 2025 22:11:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44882376</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44882376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44882376</guid></item><item><title><![CDATA[New comment by bithive123 in "LLMs aren't world models"]]></title><description><![CDATA[
<p>Right.  It's a dead thing that has no independent meaning.  It doesn't even exist as a thing except conceptually.  The referent is not even another dead thing, but a reality that appears nowhere in the map itself.  It may have certain limited usefulness in the practical realm, but expecting it to lead to new insights ignores the fact that it's fundamentally an abstraction of the real, not in relationship to it.</p>
]]></description><pubDate>Tue, 12 Aug 2025 22:09:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44882363</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44882363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44882363</guid></item><item><title><![CDATA[New comment by bithive123 in "LLMs aren't world models"]]></title><description><![CDATA[
<p>I knew someone would call me out on that.  I used the wrong word; what I meant was "expressed in a way that would satisfy" which implies proof within the symbolic order being used.  I don't claim to be a mathematician or philosopher.</p>
]]></description><pubDate>Tue, 12 Aug 2025 22:03:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44882310</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44882310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44882310</guid></item><item><title><![CDATA[New comment by bithive123 in "LLMs aren't world models"]]></title><description><![CDATA[
<p>Language models aren't world models for the same reason languages aren't world models.<p>Symbols, by definition, only represent a thing.  They are not the same as the thing.  The map is not the territory, the description is not the described, you can't get wet in the word "water".<p>They only have meaning to sentient beings, and that meaning is heavily subjective and contextual.<p>But there appear to be some who think that we can grasp truth through mechanical symbol manipulation.  Perhaps we just need to add a few million more symbols, they think.<p>If we accept the incompleteness theorem, then there are true propositions that even a super-intelligent AGI would not be able to express, because all it can do is output a series of placeholders.  Not to mention the obvious fallacy of knowing super-intelligence when we see it.  Can you write a test suite for it?</p>
]]></description><pubDate>Tue, 12 Aug 2025 21:21:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44881949</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44881949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44881949</guid></item><item><title><![CDATA[New comment by bithive123 in "Is the A.I. Boom Turning Into an A.I. Bubble?"]]></title><description><![CDATA[
<p>Of course it's a bubble.  It's only about 20% as useful as the claims driving the current irrational exuberance.  All it can do is generate pictures and text, and we had those _coming out our eyeballs_ for at least a decade.  Prior to generative AI, each of us already had more access to images and text with which to stimulate ourselves than we could consume in a lifetime.<p>When did we forget that discovering "truth" via symbol manipulation is a fraught proposition at best?  It was in the 17th century that Leibniz proposed that encoding logical propositions into a propositional calculus would allow all intellectual disputes to be resolved mechanically.<p>"For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate."<p>The original AI bro!  Any day now...<p>I've been thinking lately that the real value of a piece of code is that there is at least one human alive somewhere supporting it.  Remove that, and the value proposition gets extremely shaky.  Folks are going to have to learn this first hand as their brains fill with echoes of LLM output, rather than the output being an echo of some brain process (you know, _actual_ intelligence).<p>But sure, if you can convince enough people that you've invented a real magic 8-ball, you might get them to shake it for the rest of their lives.  Me, I'm not convinced that the marginal value of "new" text and images is there.</p>
]]></description><pubDate>Tue, 12 Aug 2025 18:44:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44880256</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44880256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44880256</guid></item><item><title><![CDATA[New comment by bithive123 in "GitHub is no longer independent at Microsoft after CEO resignation"]]></title><description><![CDATA[
<p>That didn't take long.  There appears to be some kind of outage now, I'm seeing unicorns all over the place.  I even got a 403 from githubstatus.com.</p>
]]></description><pubDate>Mon, 11 Aug 2025 18:51:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44867967</link><dc:creator>bithive123</dc:creator><comments>https://news.ycombinator.com/item?id=44867967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44867967</guid></item></channel></rss>