<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: saberience</title><link>https://news.ycombinator.com/user?id=saberience</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 14:37:07 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=saberience" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by saberience in "Wacli – WhatsApp CLI: sync, search, send"]]></title><description><![CDATA[
<p>As someone who's written apps using official WhatsApp for Business accounts, I would strongly advise against any third-party tools for automating WhatsApp.<p>WhatsApp has some really stringent requirements on any kind of automation, e.g. not messaging anyone automatically unless they messaged you within the last 24 hours; in fact, this is explicitly blocked if you use Meta's API, and you have to use message templates in that case.<p>Also, any bots need to be verified with Meta, etc.<p>And the TOS has gotten more strict recently, not less. So buyer beware: Meta is really protective about reverse engineering the WA protocol or automating it, so you can easily get yourself blocked or banned here.</p>
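For context, a minimal sketch of what the official path looks like: outside the 24-hour customer-service window, Meta's Cloud API only accepts "template" messages, whose request body has roughly the shape below. The template name, language code, and recipient number here are placeholders I made up, not details from the comment.

```python
# Sketch of a WhatsApp Cloud API "template" message payload -- the only
# message type Meta's API accepts outside the 24-hour reply window.
# Template name, language code, and recipient below are placeholders.

import json

def build_template_payload(to_number: str, template_name: str) -> dict:
    """Build the JSON body POSTed to /{phone-number-id}/messages."""
    return {
        "messaging_product": "whatsapp",  # required literal for the Cloud API
        "to": to_number,                  # recipient phone number
        "type": "template",               # free-form "text" is rejected here
        "template": {
            "name": template_name,        # must be pre-approved by Meta
            "language": {"code": "en_US"},
        },
    }

payload = build_template_payload("15550001111", "order_update")
print(json.dumps(payload, indent=2))
```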
]]></description><pubDate>Wed, 15 Apr 2026 13:11:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47778499</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47778499</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47778499</guid></item><item><title><![CDATA[New comment by saberience in "They See Your Photos"]]></title><description><![CDATA[
<p>This tool doesn't work well at all: it identified some people as "low income" and then recommended them Patagonia clothing?<p>Also, the people didn't look "low income" at all, but they were black, so maybe this tool is also racist.</p>
]]></description><pubDate>Mon, 13 Apr 2026 13:44:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47751873</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47751873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47751873</guid></item><item><title><![CDATA[New comment by saberience in "Apple's accidental moat: How the "AI Loser" may end up winning"]]></title><description><![CDATA[
<p>Everyone in the UK and Western Europe uses WhatsApp as their primary messenger.<p>The only time I ever open iMessage is when I get an SMS 2FA verification code or something similar.<p>And in the Middle East, everyone just uses WhatsApp or Telegram.</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:01:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750293</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47750293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750293</guid></item><item><title><![CDATA[New comment by saberience in "The Grand Line"]]></title><description><![CDATA[
<p>I'm somewhat amazed this got upvoted to the frontpage... I guess it was upvoted on the title, because the writing is really terrible: either LLM-generated, or someone trying their hardest to sound "deep".<p>There are so many technical and stylistic issues here that I would say it's either someone learning to write and trying too hard, or, again, an LLM.<p>1) Why is the whole thing written in the second person? Is it disguised autobiography? It seems a cheap way of claiming intimacy with the reader: "here, look, this was your experience". It ends up sounding like someone narrating your life at you while actually (secretly) talking about themselves.<p>2) While mostly written in the second person, there are several jarring switches back into first person. Unsure whether this is a mistake or intentional; either way, it sounds bad.<p>3) The tenses bounce all over the place. We have present tense: "You take the train"; past tense: "You took it"; future tense for a kind of prophetic vibe: "A decade from now, you will not know"; and finally a sort of timeless, proverbial present: "The text is the same. The reader is not. This is what the contemplative traditions mean when they talk about the spiral — the return to the same point, but at a different elevation."<p>All the tense switching just makes it hard for the reader to feel grounded in any of the scenes.<p>4) Rhetorical negation. The writer loves describing things by what they are not. Examples:
"It was not silly. It was not even reverent. It was just a thing."
"Not with a call. Not with a vision. Not with a voice in the night."
"Not a metaphor for one. Not like a pilgrimage. A pilgrimage."<p>It can be a nice effect used once... but not repeatedly in a short piece; it makes you think the writer is constantly arguing with an imagined critic: "It's not this, it's this!"<p>5) Performative plainness, e.g. "You walked. You ate. You slept." "You stood for a minute. You looked at the statue." There are a lot of these fragments, and they feel strange because they're written in an active voice while describing a "character" who never makes any choices, i.e. is entirely passive. It's like the author is trying to ape Hemingway's style in these moments while missing the characters and the story that go along with that spartan active voice.<p>Taken altogether, it feels like the author is straining to sound profound, reaching for every trick in the writer's toolbox, which ends up sounding confusing and creates distance between the author and the reader; the "gap" is obvious, the engineering is visible. The writer says "You were not trying to be moved", while the reader feels the author has tried desperately hard, across 3,000 words, to move them.</p>
]]></description><pubDate>Sun, 12 Apr 2026 09:59:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47737854</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47737854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47737854</guid></item><item><title><![CDATA[New comment by saberience in "Instant 1.0, a backend for AI-coded apps"]]></title><description><![CDATA[
<p>Keyword is "logically" separated here...<p>Also, no mention of data being encrypted in transit.<p>I would not use this for anything other than toy projects.</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:34:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719693</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47719693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719693</guid></item><item><title><![CDATA[New comment by saberience in "Instant 1.0, a backend for AI-coded apps"]]></title><description><![CDATA[
<p>Wait, why is this needed at all? Why is this backend AI-specific?<p>Your Claude Code or Codex is already an expert in all existing backends and databases; we don't need a new backend for AI.<p>You can literally ask Claude to pick whatever backend it thinks is best, and it will build it, deploy it, and work fine, while being significantly cheaper than Instant 1.0.</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:30:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719639</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47719639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719639</guid></item><item><title><![CDATA[New comment by saberience in "LLM Wiki – example of an "idea file""]]></title><description><![CDATA[
<p>Karpathy is at his best when working on teaching materials for ML beginners.<p>When it comes to other stuff, he seems to be in an X/Twitter-induced AI psychosis, like Garry Tan, where he thinks everything he does is amazing and novel because he gets glazed by 1,000 X bots that post "You're amazing" after anything he tweets.<p>This is most definitely an (old) solution in search of a problem. Plenty of people have been trying variants of this for several years at this point, but the issue isn't putting stuff into a wiki, or git, or markdown files. It's how you keep them up to date, how you deal with conflicts, how you deal with bloat, how you decide what to keep and delete over time, and also, once you've got this big mass of notes and markdown, when you surface it.<p>It sounds great on paper until you try to use it and realize that in reality it isn't that useful and doesn't become part of your daily life. That is, it's more fun to build than to actually use, and you don't end up using it past the initial novelty.</p>
]]></description><pubDate>Fri, 10 Apr 2026 12:21:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47717018</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47717018</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47717018</guid></item><item><title><![CDATA[New comment by saberience in "LLM Wiki – example of an "idea file""]]></title><description><![CDATA[
<p>He's not; he was an AI researcher and then led AI research teams.<p>He's doing "engineering" now, since he can use LLMs, but he never really spent years doing what we would call software engineering, i.e. building distributed systems, writing terraform/ansible, maintaining old databases or optimizing MySQL indexes, debugging Kafka or AzureMQ, etc.<p>Don't get me wrong, he's knowledgeable about LLMs specifically (although his knowledge is rapidly becoming out of date), but he's not a software engineer, which is why his ideas around engineering often seem totally deluded.</p>
]]></description><pubDate>Fri, 10 Apr 2026 12:17:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47716963</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47716963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47716963</guid></item><item><title><![CDATA[New comment by saberience in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>You realize you can just create your own tools and wire them up directly using the Anthropic or OpenAI APIs, etc.?<p>It's not a choice between Skills or MCP; you can also just create your own tools, in whatever language you want, and then send the tool info to the model. The wiring is trivial.<p>I write all my own tools bespoke in Rust and send them directly to the Anthropic API. So I have tools for reading my email and my calendar, writing and searching files, etc. It means I can have super-fast tools, reduce context bloat, and keep things simple without needing to go into the whole mess of MCP clients and servers.<p>And btw, I wrote my own MCP client and server from the spec about a year ago, so I know the MCP spec backwards and forwards; it's mostly jank and not needed. Once I started just writing my own tools from scratch I realized I would never use MCP again.</p>
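To illustrate the wiring being described: a tool is just a JSON schema plus a local function you dispatch to when the model emits a tool_use request. A minimal sketch in Python (the commenter's actual tools are private Rust binaries; the tool name, schema, and stand-in file list here are hypothetical):

```python
# Minimal sketch of bespoke tool wiring, no MCP involved: a JSON schema
# in the shape the Anthropic Messages API expects, plus a local dispatcher.
# Tool name and the stand-in file corpus are made up for illustration.

import json

SEARCH_FILES_TOOL = {
    "name": "search_files",  # hypothetical tool name
    "description": "Search local files for a substring and return matching paths.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def search_files(query: str) -> str:
    """Local implementation the model's tool_use call is routed to (stubbed)."""
    matches = [p for p in ["notes.md", "todo.md"] if query in p]  # stand-in corpus
    return json.dumps(matches)

def dispatch(tool_name: str, tool_input: dict) -> str:
    """Route a tool_use request to the matching local function."""
    handlers = {"search_files": lambda i: search_files(i["query"])}
    return handlers[tool_name](tool_input)

# In a real loop you would pass tools=[SEARCH_FILES_TOOL] to the Messages
# API, then feed dispatch(...) output back to the model as a tool_result
# block. Stubbed here so the wiring itself is visible without an API key.
print(dispatch("search_files", {"query": "todo"}))
```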
]]></description><pubDate>Fri, 10 Apr 2026 11:16:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47716365</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47716365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47716365</guid></item><item><title><![CDATA[New comment by saberience in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>I've never understood the point of things like HLE; it doesn't really prove or show anything, since 99.99% of humans can't do a single question on this exam.<p>That is, it's easy to make benchmarks that humans are bad at; humans are really bad at many things.<p>Divide 123094382345234523452345111 by 0.1234243131324: guess what, humans would find that hard and computers easy. But it doesn't mean much.<p>Humanity's Last Exam (HLE) couldn't be completed by the vast majority of humanity, so it doesn't really capture anything about humanity, or mean much if a computer can do it.</p>
]]></description><pubDate>Wed, 08 Apr 2026 10:14:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47688042</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47688042</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47688042</guid></item><item><title><![CDATA[New comment by saberience in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>The article is paywalled, where can we read it?</p>
]]></description><pubDate>Tue, 07 Apr 2026 10:00:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47672838</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47672838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47672838</guid></item><item><title><![CDATA[New comment by saberience in "Show HN: Baton – A desktop app for developing with AI agents"]]></title><description><![CDATA[
<p>Nice work! Congrats on the release. Did you check out Vibe-Kanban or Emdash, which are both building in this space?<p><a href="https://www.emdash.sh/">https://www.emdash.sh/</a><p><a href="https://vibekanban.com/">https://vibekanban.com/</a><p>What is your secret sauce, so to speak? I personally built my own local tools and system for this; I tried Vibe-Kanban but didn't feel like it added much to my productivity, and haven't tried Emdash yet.</p>
]]></description><pubDate>Wed, 01 Apr 2026 14:03:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47601048</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47601048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47601048</guid></item><item><title><![CDATA[New comment by saberience in "4D Doom"]]></title><description><![CDATA[
<p>That's not 4D</p>
]]></description><pubDate>Tue, 31 Mar 2026 22:32:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47594348</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47594348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47594348</guid></item><item><title><![CDATA[New comment by saberience in ""Over 1.5 million GitHub PRs have had ads injected into them by Copilot""]]></title><description><![CDATA[
<p>What's weird is, I never installed any GitHub plugins, or indeed any customization to Codex, other than updating it using brew... so I was really confused when this started happening.</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:40:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575704</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47575704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575704</guid></item><item><title><![CDATA[New comment by saberience in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>It's the same with Claude Code actually, and recently Codex too...<p>Claude never used to do this, but at some point it started adding itself by default as a co-author on every commit.<p>Literally, in the last week, Codex started naming all its branches "codex-feature-name", and it will continue to do so even if you tell it to never do that again.<p>Really, really annoying.</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:14:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575374</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47575374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575374</guid></item><item><title><![CDATA[New comment by saberience in "ARC-AGI-3"]]></title><description><![CDATA[
<p>So this is another ARC-"AGI" benchmark that is again designed around visual perception, for LLMs that are trained to be great at text. What is the point?<p>Yes, we get that LLMs are really bad when you give them contrived visual puzzles or pseudo-games to solve... Well, great, we already knew this.<p>The "hype" around the ARC-AGI benchmarks makes me laugh, especially the idea that we would have AGI when ARC-AGI-1 was solved... then we got 2, and now we're on 3.<p>Shall we start saying that these benchmarks have nothing to do with AGI yet? Are we going to get an ARC-AGI-10 where we have LLMs try to beat Myst or Riven? Will we have AGI then?<p>This isn't the right tool for measuring "AGI", and honestly I'm not sure what it's measuring except the foundation labs benchmaxxing on it.</p>
]]></description><pubDate>Wed, 25 Mar 2026 21:52:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47523782</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47523782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47523782</guid></item><item><title><![CDATA[New comment by saberience in "NumKong: 2'000 Mixed Precision Kernels for All"]]></title><description><![CDATA[
<p>This is one of those things that sounds mind-bogglingly complex and is no doubt impressive... but the article is so long (I tried to read it all) and so complex that I feel only about 5 people in the world can actually understand it, or know when it would actually be useful.<p>You're clearly a brilliant engineer, but I would suggest writing something much, much shorter and easier to understand for the majority of engineers who didn't study low-level chip design. And also include a handful of clear examples of when you would use this.<p>So it's like: nice job, man... but this is so complicated and hard to understand that I have no idea how to use it, or when I would use it. I feel I would need to study for several years to grok this article, and I don't have time for that, sadly.</p>
]]></description><pubDate>Sun, 22 Mar 2026 13:25:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47477297</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47477297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47477297</guid></item><item><title><![CDATA[New comment by saberience in "Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster"]]></title><description><![CDATA[
<p>Have you actually used LLMs for non-trivial tasks? They are still incredibly bad when it comes to actually hard engineering work, and they still lie all the time; it's just gotten harder to notice, especially if you're letting one run all night and generate reams of crap.<p>Most people are optimizing for terrible benchmarks and then don't really understand what the model did anyway; they just assume it did something good. It's the blind leading the blind, basically, and a lot of people with an AI psychosis or delusion.</p>
]]></description><pubDate>Thu, 19 Mar 2026 19:58:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47445014</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47445014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47445014</guid></item><item><title><![CDATA[New comment by saberience in "Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster"]]></title><description><![CDATA[
<p>Wait, "Karpathy's Autoresearch": you mean a loop that prompts the agent to improve a thing given a benchmark?<p>People have been doing this for a year or more; Ralph loops, etc.<p>I hate the strange Twitter world of hero-worship that seems to arise just from large followings.<p>Joe No-Followers did this six months ago and nobody cared. Karpathy writes a really basic loop and it's now a kind of AI miracle, prompting tons of grifters, copy-cats, and weird hype.<p>I do wonder if LLMs have just made everyone seriously, seriously dumber all of a sudden. Most of the "Autoresearch" posts I see are complete rubbish, with AI optimizing for nonsense benchmarks and people failing to understand the graphs they are looking at. So yes, the AI made itself better at a useless benchmark while also making the code worse in 10 other ways you don't actually understand.</p>
]]></description><pubDate>Thu, 19 Mar 2026 19:49:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47444889</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47444889</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47444889</guid></item><item><title><![CDATA[New comment by saberience in "Unsloth Studio"]]></title><description><![CDATA[
<p>Who's the intended user for this?<p>Is it like, for AI hobbyists? I.e. I have a 4090 at home and want to fine-tune models?<p>Is it a competitor to LMStudio?</p>
]]></description><pubDate>Tue, 17 Mar 2026 21:12:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47418399</link><dc:creator>saberience</dc:creator><comments>https://news.ycombinator.com/item?id=47418399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47418399</guid></item></channel></rss>