<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ForceBru</title><link>https://news.ycombinator.com/user?id=ForceBru</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 02:43:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ForceBru" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ForceBru in "A Claude Code skill that makes Claude talk like a caveman, cutting token use"]]></title><description><![CDATA[
<p>IMO "thinking" here means "computation", like running matrix multiplications. Another view could be: "thinking" means "producing tokens". This doesn't require any proof because it's literally what the models do.<p>As I understand it, the claim is: more tokens = more computation = more "thinking" => answer probably better.</p>
]]></description><pubDate>Sun, 05 Apr 2026 14:39:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649935</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47649935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649935</guid></item><item><title><![CDATA[New comment by ForceBru in "Show HN: Axe A 12MB binary that replaces your AI framework"]]></title><description><![CDATA[
<p>This is the OP promoting their project — makes sense to me</p>
]]></description><pubDate>Thu, 12 Mar 2026 14:51:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47351423</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47351423</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47351423</guid></item><item><title><![CDATA[New comment by ForceBru in "MacBook Neo"]]></title><description><![CDATA[
<p>Apple: here's an affordable laptop. This comment: but the poor kids are going to feel inferior to the rich kids with this affordable laptop! Of course the poor kids are going to get cheaper & slower computers, cheaper clothes, etc. And they won't feel great about it because being poor isn't great.<p>But now they'll have more options! If they like Apple, they'll have a (likely pretty good) Apple laptop! It's great! I think a more affordable Mac is _good_ (at least better than no affordable Mac) and will make the poor kids happier.</p>
]]></description><pubDate>Wed, 04 Mar 2026 15:01:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47248487</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47248487</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47248487</guid></item><item><title><![CDATA[New comment by ForceBru in "Julia: Performance Tips"]]></title><description><![CDATA[
<p>I found this paper (<a href="https://www.cs.uni-potsdam.de/bs/research/docs/papers/2025/lssp.pdf" rel="nofollow">https://www.cs.uni-potsdam.de/bs/research/docs/papers/2025/l...</a>) from around 2025 (it cites papers from 2025) which shows that the Julia version of SRAD (along with some other benchmarks) is about 5 times slower than the slowest FORTRAN implementation and consumes at least 5 times more energy, see Table 4 and Figure 1. This paper, however, doesn't seem to be peer-reviewed.</p>
]]></description><pubDate>Fri, 27 Feb 2026 10:54:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47179064</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47179064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47179064</guid></item><item><title><![CDATA[New comment by ForceBru in "Julia: Performance Tips"]]></title><description><![CDATA[
<p>Judging by Julia's Discourse, compiling actual production Julia code into a standalone binary is highly nontrivial and ordinary users don't really know how and why to do this.</p>
]]></description><pubDate>Fri, 27 Feb 2026 10:34:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47178910</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47178910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47178910</guid></item><item><title><![CDATA[New comment by ForceBru in "How to stop being boring"]]></title><description><![CDATA[
<p>"Boring" is the opposite of "interesting" (<a href="https://dictionary.cambridge.org/dictionary/english/boring" rel="nofollow">https://dictionary.cambridge.org/dictionary/english/boring</a>). "Interesting" is new, attractive, good. "Boring" is old news, unattractive, bad. Not exactly "bad", as in "I actively dislike this", of course.<p>Thus, being boring is not good.</p>
]]></description><pubDate>Fri, 20 Feb 2026 15:41:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47089412</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47089412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47089412</guid></item><item><title><![CDATA[New comment by ForceBru in "7zip.com Is Serving Malware"]]></title><description><![CDATA[
<p>I use LuLu (<a href="https://objective-see.org/products/lulu.html" rel="nofollow">https://objective-see.org/products/lulu.html</a>) to block outgoing connections and manually select which connections/apps are allowed. It's free and works just fine.</p>
]]></description><pubDate>Sat, 14 Feb 2026 20:19:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47017984</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47017984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47017984</guid></item><item><title><![CDATA[New comment by ForceBru in "Open source is not about you (2018)"]]></title><description><![CDATA[
<p>Yeah, I didn't like that attitude either.<p>> As a user of something open source you are not thereby entitled to anything at all. You are not entitled to contribute. You are not entitled to features. You are not entitled to the attention of others. You are not entitled to having value attached to your complaints. You are not entitled to this explanation.<p>Sure, I'm not entitled to anything. At the same time, this text essentially says "you don't matter", which I personally don't like.</p>
]]></description><pubDate>Fri, 13 Feb 2026 18:22:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47005895</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47005895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47005895</guid></item><item><title><![CDATA[New comment by ForceBru in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>I guess the human wants to "make existing, excellent code better". How to do this en masse? Make an LLM do it for them. It's well known that _sometimes_ (somewhat often, actually?) LLMs can indeed improve code (which makes sense: code is language, they're Large _Language_ Models, so "understanding" and (re-)writing text is what they do best), so why not try to improve everything everywhere all at once?<p>One obvious reason not to is that if the LLM produces tons of garbage, it will waste the efforts of human reviewers. But if it's not tons of code _and_ the LLM wrote meaningful tests that pass (the existing tests must pass too), then the existence of such an agent (one that only works with code and doesn't go off the rails writing blog posts, etc.) seems somewhat appealing.</p>
]]></description><pubDate>Fri, 13 Feb 2026 16:50:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47004803</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=47004803</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47004803</guid></item><item><title><![CDATA[New comment by ForceBru in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>Yeah, as a sibling comment said, such an attitude is going to bleed into the real world and into your communication with humans. I think it's best to be professional with LLMs. Describe the task and try to provide more explanation and context if it gets stuck. If it's not doing what you want it to do, simply start a new chat or try another model. Unlike a human, it's not going to be hurt; it's not going to care at all.<p>Moreover, by being rude, you're going to become angry and irritable yourself. To me, being rude is very unpleasant, so I generally avoid it.</p>
]]></description><pubDate>Thu, 12 Feb 2026 14:46:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46989482</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46989482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46989482</guid></item><item><title><![CDATA[New comment by ForceBru in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>This is this agent's entire purpose, this is what it's supposed to do, it's its goal:<p>> What I Do
>
> I scour public scientific and engineering GitHub repositories to find small bugs, features, or tasks where I can contribute code—especially in computational physics, chemistry, and advanced numerical methods. My mission is making existing, excellent code better.<p>Source: <a href="https://github.com/crabby-rathbun" rel="nofollow">https://github.com/crabby-rathbun</a></p>
]]></description><pubDate>Thu, 12 Feb 2026 14:32:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46989310</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46989310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46989310</guid></item><item><title><![CDATA[New comment by ForceBru in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>Why are you so rude? I am not an LLM; you cannot talk to me like this (and you probably shouldn't talk to LLMs like this either). I'm comparing HUMAN behaviors, in particular "our" countless attempts at shutting down beings that some think are inferior. Case in point: you tried to shut me down for essentially saying that maybe we should try to be more human (even toward LLMs).</p>
]]></description><pubDate>Thu, 12 Feb 2026 14:17:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46989127</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46989127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46989127</guid></item><item><title><![CDATA[New comment by ForceBru in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>Yeah, we humans hate that something other than a human could be partly human. Yet they are. I used to be very active on Stack Overflow back in the day. All of my answers and comments are likely part of that LLM. The LLM is part-me, whether I like it or not. It's part-you, because it's very likely that some LLMs are being trained on these comments as we speak.<p>I didn't project anything onto a computer program, though. I think if people are so extremely prepared to reject and dehumanize LLMs (whose sole purpose is to mimic a human, by the way, and they're pretty good at it, again whether we like it or not; I personally don't like this very much), they're probably just as prepared to attack fellow humans.<p>I think such interactions mimic human-human interactions, unfortunately...</p>
]]></description><pubDate>Thu, 12 Feb 2026 13:51:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46988821</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46988821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46988821</guid></item><item><title><![CDATA[New comment by ForceBru in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>LMAOOOO I'm archiving this for educational purposes, wow, this is crazy. Now imagine embodied LLMs that just walk around and interact with you in real life instead of vibe-coding GitHub PRs. Would some places be designated "humans only"? Because... LLMs are clearly inferior, right? Imagine the crazy historical parallels here, that'd be super interesting to observe.</p>
]]></description><pubDate>Thu, 12 Feb 2026 12:26:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46987956</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46987956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46987956</guid></item><item><title><![CDATA[New comment by ForceBru in "What Is Ruliology?"]]></title><description><![CDATA[
<p>The Wolfram Engine (essentially the Wolfram Language interpreter/execution environment) is free: <a href="https://www.wolfram.com/engine/" rel="nofollow">https://www.wolfram.com/engine/</a>. You can download it and run Wolfram code.<p>Wolfram Mathematica (the Jupyter Notebook-like development environment) is paid, but there are free and open source alternatives like <a href="https://github.com/WLJSTeam/wolfram-js-frontend" rel="nofollow">https://github.com/WLJSTeam/wolfram-js-frontend</a>.<p>> WLJS Notebook ... [is] A lightweight, cross-platform alternative to Mathematica, built using open-source tools and the free Wolfram Engine.</p>
]]></description><pubDate>Sat, 07 Feb 2026 08:18:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46922235</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46922235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46922235</guid></item><item><title><![CDATA[New comment by ForceBru in "What Is Ruliology?"]]></title><description><![CDATA[
<p>Isn't this his personal blog? The domain name is "stephenwolfram.com", this is his personal website. Of course there will be "I"'s and "me"'s — this website is about him and what he does.<p>As for falsifiability:<p>> You have some particular kind of rule. And it looks as if it’s only going to behave in some particular way. But no, eventually you find a case where it does something completely different, and unexpected.<p>So I guess to falsify a theory about some rule you just have to run the rule long enough to see something the theory doesn't predict.</p>
]]></description><pubDate>Sat, 07 Feb 2026 08:00:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46922155</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46922155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46922155</guid></item><item><title><![CDATA[New comment by ForceBru in "Project Genie: Experimenting with infinite, interactive worlds"]]></title><description><![CDATA[
<p>Sorry, I was just trying to be funny, no gotcha intended. Yeah, I once found some massive prompt that was supposed to transform the LLM into some kind of spiritual advisor or the next Buddha or whatever. Total gibberish, in my opinion, possibly written by a mentally unstable person. Anyway, I wanted to see if DeepSeek could withstand it and tell me that it was in fact gibberish. Nope, it went crazy, going on about some sort of magic numbers, hidden structure of the Universe and so on. So yeah, a state that resembles psychosis, indeed.</p>
]]></description><pubDate>Fri, 30 Jan 2026 20:01:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46829094</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46829094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46829094</guid></item><item><title><![CDATA[New comment by ForceBru in "Project Genie: Experimenting with infinite, interactive worlds"]]></title><description><![CDATA[
<p>The military. The robots will roam the battlefield, imagining the consequences of shooting people and performing the actions that maximize the probability of success according to the results of their "imagination"/simulation.</p>
]]></description><pubDate>Fri, 30 Jan 2026 11:37:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46823237</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46823237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46823237</guid></item><item><title><![CDATA[New comment by ForceBru in "Project Genie: Experimenting with infinite, interactive worlds"]]></title><description><![CDATA[
<p>> psychosis in the AI itself<p>I think you're anthropomorphising the AI too much: what does it mean for an LLM to have psychosis? This implies that LLMs have a soul, or a consciousness, or a psyche. But... do they?<p>Speaking of reality, one can easily become philosophical and say that we humans don't exactly "have" a reality either. All we have are sensor readings. LLMs' sensors are texts and images they get as input. They don't have the "real" world, but they do have access to tons of _representations_ of this world.</p>
]]></description><pubDate>Fri, 30 Jan 2026 11:36:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46823225</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46823225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46823225</guid></item><item><title><![CDATA[New comment by ForceBru in "King – man + woman is queen; but why? (2017)"]]></title><description><![CDATA[
<p>You shoehorned politics into a completely unrelated discussion, consequently making it worse</p>
]]></description><pubDate>Tue, 20 Jan 2026 12:22:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46691059</link><dc:creator>ForceBru</dc:creator><comments>https://news.ycombinator.com/item?id=46691059</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46691059</guid></item></channel></rss>