<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Nav_Panel</title><link>https://news.ycombinator.com/user?id=Nav_Panel</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 20:48:06 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Nav_Panel" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Nav_Panel in "Isaac Asimov: The Last Question (1956)"]]></title><description><![CDATA[
<p>Nah I agree with you, as someone who's read a lot of Asimov. As far as MULTIVAC stories go, I always preferred "All The Troubles of the World" (<a href="https://schools.ednet.ns.ca/avrsb/070/rsbennett/HORTON/shortstories/All%20the%20troubles%20of%20the%20world.pdf" rel="nofollow">https://schools.ednet.ns.ca/avrsb/070/rsbennett/HORTON/short...</a>).</p>
]]></description><pubDate>Fri, 17 Apr 2026 19:15:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47809510</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=47809510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47809510</guid></item><item><title><![CDATA[New comment by Nav_Panel in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>The point is that open weights puts inference on the open market, so if your model is actually good and providers want to serve it, competition will drive costs down and inference speeds up. Like Cerebras running Qwen 3 235B Instruct at 1.4k tps for cheaper than Claude Haiku (let that tps number sink in for a second. For reference, Claude Opus runs at ~30-40 tps and Claude Haiku at ~60, well over an order of magnitude slower). As a company developing models, it means you can't easily capture the inference margins, even though I believe you get a small kickback from the providers.<p>So I understand why they wouldn't want to go open weight, but on the other hand, open weight wins you popularity/sentiment if the model is any good, researchers (both academic and other labs) working on your stuff, etc. Local-first usage is only part of the story here. My guess is Qwen 3.5 was successful enough that now they want to start reaping the profits. Unfortunately, most of Qwen 3.5's success is because it's heavily (and successfully!) optimized for extremely long-context usage on heavily constrained VRAM (i.e. local) systems, as a result of its DeltaNet attention layers.</p>
]]></description><pubDate>Thu, 16 Apr 2026 23:56:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47801000</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=47801000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47801000</guid></item><item><title><![CDATA[New comment by Nav_Panel in "50% of U.S. vinyl buyers don't own a record player"]]></title><description><![CDATA[
<p>It is partly the medium's fault. A lot of the sins of CD/digital mastering won't fly on vinyl, because there are physical constraints on what you can literally press into the record groove.</p>
]]></description><pubDate>Fri, 02 Jan 2026 05:24:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46461701</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=46461701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46461701</guid></item><item><title><![CDATA[New comment by Nav_Panel in "Framework Laptop 13 gets ARM processor with 12 cores via upgrade kit"]]></title><description><![CDATA[
<p>I've had a Framework 13 for several years now, so I'm excited to see this kind of thing start to happen. Praying the next one out is a GPU/tensor workload unit so I'm not stuck at home on my desktop when I want to mess around with local AI models...</p>
]]></description><pubDate>Fri, 05 Dec 2025 19:31:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46166158</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=46166158</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46166158</guid></item><item><title><![CDATA[New comment by Nav_Panel in "ChunkLLM: A Lightweight Pluggable Framework for Accelerating LLMs Inference"]]></title><description><![CDATA[
<p>Love it, they're teaching LLMs how to skim texts properly, which is exactly the right approach for handling long contexts.</p>
]]></description><pubDate>Fri, 24 Oct 2025 17:02:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45696615</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=45696615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45696615</guid></item><item><title><![CDATA[New comment by Nav_Panel in ""Is It Conscious?": a 20 questions-style quiz"]]></title><description><![CDATA[
<p>I see a lot of debate about whether AI is "conscious" or has "consciousness", with people talking past each other without a firm grasp on their own stances, so I made a quick quiz to help you locate where you stand.</p>
]]></description><pubDate>Fri, 26 Sep 2025 13:54:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45386543</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=45386543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45386543</guid></item><item><title><![CDATA["Is It Conscious?": a 20 questions-style quiz]]></title><description><![CDATA[
<p>Article URL: <a href="https://consciousness-quiz.netlify.app/">https://consciousness-quiz.netlify.app/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45386542">https://news.ycombinator.com/item?id=45386542</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 26 Sep 2025 13:54:40 +0000</pubDate><link>https://consciousness-quiz.netlify.app/</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=45386542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45386542</guid></item><item><title><![CDATA[New comment by Nav_Panel in "Built a tiny site that formats Markdown for Substack"]]></title><description><![CDATA[
<p>I prefer to write blog posts in markdown, especially if I'm on the go, and historically I've manually reformatted them in Substack's editor. Figured, why not automate this? So I built a simple utility. Everything happens in your browser with the `marked` library.<p>The most important feature for me is converting ASCII quotes and apostrophes to the "fancy" ones Substack inserts automatically in its editor. Some more advanced features are probably broken, because Substack's editor is a bit wonky, but the basic stuff all works. Give it a try!<p>And here's the source: <a href="https://github.com/simpolism/md-to-substack" rel="nofollow">https://github.com/simpolism/md-to-substack</a></p>
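<p>The quote conversion can be sketched roughly like this. This is a minimal illustration of the idea only, not the site's actual code (which also runs the text through `marked`), and the function name is mine:</p>

```javascript
// Minimal sketch: convert straight ASCII quotes/apostrophes to the
// "fancy" typographic ones Substack's editor inserts automatically.
function fancyQuotes(text) {
  return text
    // apostrophes / right single quotes directly after a word character
    .replace(/(\w)'/g, '$1\u2019')
    // opening single quotes directly before a word character
    .replace(/'(\w)/g, '\u2018$1')
    // double quotes: opening at start of text or after whitespace...
    .replace(/(^|\s)"/g, '$1\u201C')
    // ...and any remaining straight double quote is treated as closing
    .replace(/"/g, '\u201D');
}
```

<p>A real converter would also need to skip code spans and fenced blocks, where straight quotes must be preserved.</p>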
]]></description><pubDate>Thu, 28 Aug 2025 15:36:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45053536</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=45053536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45053536</guid></item><item><title><![CDATA[Built a tiny site that formats Markdown for Substack]]></title><description><![CDATA[
<p>Article URL: <a href="https://md-to-substack.netlify.app/">https://md-to-substack.netlify.app/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45053535">https://news.ycombinator.com/item?id=45053535</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 28 Aug 2025 15:36:51 +0000</pubDate><link>https://md-to-substack.netlify.app/</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=45053535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45053535</guid></item><item><title><![CDATA[New comment by Nav_Panel in "VIN: The 17-character code that runs the automotive world"]]></title><description><![CDATA[
<p>Yeah. I noticed a lot of "It's not just X. It's Y." which is the biggest tell for me.</p>
]]></description><pubDate>Thu, 07 Aug 2025 02:25:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44819949</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=44819949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44819949</guid></item><item><title><![CDATA[New comment by Nav_Panel in "The problem with "vibe coding""]]></title><description><![CDATA[
<p>Copy paste? Use something like desktop commander and just let it edit the files for you directly. It'll even run commands to test it out. Or go further and use Cline/RooCode and if you're building a webapp it'll load your page in a small browser, screenshot the contents, and send that to the model. The copy-paste stuff is beginner mode.</p>
]]></description><pubDate>Tue, 15 Apr 2025 04:11:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=43688904</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43688904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43688904</guid></item><item><title><![CDATA[Show HN: chart2txt, LLM-readable astrology charts]]></title><description><![CDATA[
<p>Working on a larger astrology project, I realized there's no good way to get text descriptions of an astrology chart in a form designed for LLMs. Astro.com can generate descriptions, but it's unintuitive.<p>I decided to build a small js library that could handle the conversion: <a href="https://github.com/simpolism/chart2txt">https://github.com/simpolism/chart2txt</a> -- the chart2txt site was designed as a usage example.<p>In working on the example, I also realized there was no simple API that outputs basic astrology data (i.e. converts a birth time + place into planet + ascendant positions). So, I built that too to support the example app, using the standard Swiss Ephemeris library: <a href="https://github.com/simpolism/simple-astro-api">https://github.com/simpolism/simple-astro-api</a><p>Here's an example prompt once you have your chart text: "In a paragraph or two that doesn't reference specific planets or astrological jargon, synthesize the astrology chart below to explain the personality of the native:"<p>Then, you can ask further questions. It's also fun to try against different LLMs and see how the results change! My overall thesis is that astrology is a symbolic language, so a chart reading is not dissimilar from a translation, and thus we should expect LLMs to be pretty good at it. I don't have much interest in debating whether astrology is "real" or "true", though, as all we're doing here is a play of historical symbols.<p>Hope you enjoy! Let me know about any bugs, issues, feature requests, etc!</p>
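<p>The kind of conversion involved can be sketched like this. To be clear, this is a hypothetical illustration of the idea, not chart2txt's actual API; the function and property names are my assumptions. The only real domain fact used is that each of the 12 zodiac signs spans 30 degrees of ecliptic longitude, starting from Aries at 0:</p>

```javascript
// Hypothetical sketch of an astrology-chart-to-text conversion in the
// spirit of chart2txt (names are illustrative, not the library's API).
const SIGNS = ['Aries', 'Taurus', 'Gemini', 'Cancer', 'Leo', 'Virgo',
  'Libra', 'Scorpio', 'Sagittarius', 'Capricorn', 'Aquarius', 'Pisces'];

function longitudeToSign(lon) {
  const deg = ((lon % 360) + 360) % 360;        // normalize to [0, 360)
  const sign = SIGNS[Math.floor(deg / 30)];     // each sign spans 30 degrees
  const degreeInSign = Math.floor(deg % 30);
  return `${degreeInSign}\u00B0 ${sign}`;
}

// Turn a list of {name, longitude} placements into one line per planet,
// the kind of plain-text chart description an LLM can read directly.
function chartToText(placements) {
  return placements
    .map(p => `${p.name} is at ${longitudeToSign(p.longitude)}`)
    .join('\n');
}
```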
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43646233">https://news.ycombinator.com/item?id=43646233</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 10 Apr 2025 17:35:12 +0000</pubDate><link>https://chart2txt.com</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43646233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43646233</guid></item><item><title><![CDATA[New comment by Nav_Panel in "The Agent2Agent Protocol (A2A)"]]></title><description><![CDATA[
<p>Much-debated question, but if we run with your definition, then A2A adds communication capabilities alongside tool-calling, which is ultimately a set of programmatic hooks. Like "phone a friend" if you don't know the answer given what you have available directly (via MCP, training data, or context).<p>My assumption is that the initial A2A implementation will be done with MCP, so the LLM can ask your AI directory or marketplace for help with a task via some kind of "phone a friend" tool call, and it'll be able to immediately interop and get the info it needs to complete the task.</p>
]]></description><pubDate>Wed, 09 Apr 2025 16:36:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43634071</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43634071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43634071</guid></item><item><title><![CDATA[New comment by Nav_Panel in "The Agent2Agent Protocol (A2A)"]]></title><description><![CDATA[
<p>A2A isn't a layer of abstraction over MCP, it functions in parallel and they complement each other. MCP addresses the Agent-to-Environment question, how can Agents "do things" on computers. A2A addresses the Agent-to-Agent question, how can Agents learn about other Agents and communicate with them. You need both.<p>You CAN try and build "the one agent that does everything" but in scenarios where there's many simultaneous data streams, a better approach would be to have many stateful agents handling each stream via MCP, coupled with a single "executive" agent that calls on each of the stateful agents via A2A to get the high-level info it needs to make decisions on behalf of its user.</p>
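<p>The topology described above can be sketched as follows. The class and method names here are illustrative only, they're not part of either protocol, and the actual MCP/A2A transport is stubbed out as plain method calls:</p>

```javascript
// One stateful "worker" agent per data stream (in practice it would pull
// state from its environment via MCP tools), plus an "executive" agent
// that queries workers over an A2A-style interface before deciding.
class StreamAgent {
  constructor(streamName) {
    this.streamName = streamName;
    this.events = [];                          // state built up from the stream
  }
  ingest(event) { this.events.push(event); }   // stands in for MCP tool calls
  // A2A-style entry point: answer a high-level question about this stream.
  summarize() {
    return `${this.streamName}: ${this.events.length} events seen`;
  }
}

class ExecutiveAgent {
  constructor(workers) { this.workers = workers; }
  // Fan out to every worker and aggregate their answers into one view.
  briefing() {
    return this.workers.map(w => w.summarize()).join('; ');
  }
}
```

<p>The point of the split is that each worker keeps the full context of its own stream, while the executive only ever handles short summaries.</p>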
]]></description><pubDate>Wed, 09 Apr 2025 16:12:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=43633716</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43633716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43633716</guid></item><item><title><![CDATA[New comment by Nav_Panel in "The Agent2Agent Protocol (A2A)"]]></title><description><![CDATA[
<p>This is insanely cynical. The optimistic version is that many teams were already home-rolling protocols like A2A for "swarm" logic. For example, aggregation of financial data across many different streams, where a single "executive" agent would interface with many "worker" high-context agents that know a single stream.<p>I had been working on some personal projects over the last few months that would've benefitted enormously from having this kind of standard A2A protocol available. My colleagues and I identified it months ago as a major need, but one that would require a lot of effort to get buy-in across the industry, and I'm happy to see that Google hopped in to do it.</p>
]]></description><pubDate>Wed, 09 Apr 2025 16:05:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=43633639</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43633639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43633639</guid></item><item><title><![CDATA[New comment by Nav_Panel in "How to Be Good at Dating"]]></title><description><![CDATA[
<p>"The social double-bind game can be phrased in several ways:<p>The first rule of this game is that it is not a game.<p>Everyone must play.<p>You must love us.<p>You must go on living.<p>Be yourself, but play a consistent and acceptable role.<p>Control yourself and be natural.<p>Try to be sincere.<p>Essentially, this game is a demand for spontaneous behavior of certain kinds. Living, loving, being natural or sincere—all these are spontaneous forms of behavior: they happen "of themselves" like digesting food or growing hair. As soon as they are forced they acquire that unnatural, contrived, and phony atmosphere which everyone deplores—weak and scentless like forced flowers and tasteless like forced fruit. Life and love generate effort, but effort will not generate them. Faith—in life, in other people, and in oneself—is the attitude of allowing the spontaneous to be spontaneous, in its own way and in its own time."<p>- Alan Watts</p>
]]></description><pubDate>Thu, 20 Mar 2025 20:47:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43428749</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43428749</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43428749</guid></item><item><title><![CDATA[New comment by Nav_Panel in "How to Be Good at Dating"]]></title><description><![CDATA[
<p>Agreed -- the thing that frustrated me in the post is the idea that not going through a hookup phase before finding a serious partner is "like running a marathon without doing any training", as though the skills involved in sustaining a relationship were the same skills involved in hookups, as opposed to an amplification of regular friendship skills.<p>Abundance mindset doesn't need to come from a sense of mastery over a game sold to you by a corporate product. IMO it's better to have abundance from a rich life filled with solid friendships that let you feel supported in taking risks, which may involve getting hurt, grieving, pulling yourself together, and trying again.</p>
]]></description><pubDate>Thu, 20 Mar 2025 20:29:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43428525</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43428525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43428525</guid></item><item><title><![CDATA[New comment by Nav_Panel in "How to Be Good at Dating"]]></title><description><![CDATA[
<p>I imagine this is completely serious, and isn't that different from what I've seen in a big city (NYC). Back when I was permitting myself to use these apps (2017, before I truly assessed the cost-benefit analysis), I met a girl who apparently swiped right on every guy. She showed me her Tinder and she had over 5,000 matches (and for some reason was meeting up with me, although it didn't last very long; I think she did get married a few years ago, though). That experience makes me think that the 20,000 number is legitimately a reasonable estimate.</p>
]]></description><pubDate>Thu, 20 Mar 2025 20:11:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43428320</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43428320</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43428320</guid></item><item><title><![CDATA[Show HN: DIY LLM "infinite backrooms" in the browser]]></title><description><![CDATA[
<p>Github: <a href="https://github.com/simpolism/backrooms.directory" rel="nofollow">https://github.com/simpolism/backrooms.directory</a><p>After playing with Universal Backrooms last week (<a href="https://github.com/scottviteri/UniversalBackrooms" rel="nofollow">https://github.com/scottviteri/UniversalBackrooms</a>), a python implementation of two LLMs having a conversation forever, I wanted to make it available in the browser, so I built <a href="https://backrooms.directory" rel="nofollow">https://backrooms.directory</a>, a static site that does the same thing.<p>Basically: you put in your OpenRouter (or Hyperbolic) key, select a template and the language models that you want to have a conversation, and they start talking, and keep talking, often indefinitely. They can say some strange and entertaining things, and it's helped give me a deeper intuition about how the models work.<p>I also added a feature not present in Universal Backrooms, which is "explore mode". It's sort of like how Loom (<a href="https://github.com/socketteer/loom" rel="nofollow">https://github.com/socketteer/loom</a>) lets you select branching conversations, but more oriented toward guiding the current conversation rather than exploring branching possibilities.<p>I hope you have fun with it!</p>
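<p>The core loop is simple: two models take turns, and each one sees the transcript from its own perspective, with its own messages as "assistant" and the other model's as "user". A minimal sketch, where `callModel` is a stand-in for a real API request (e.g. to OpenRouter), not the site's actual code:</p>

```javascript
// Minimal sketch of the two-LLM "backrooms" loop. `callModel(speaker,
// messages)` stands in for a real chat-completion API call and returns
// the speaker's next reply as a string.
function runBackrooms(callModel, modelA, modelB, opener, turns) {
  const transcript = [{ speaker: modelA, text: opener }];
  for (let i = 0; i < turns; i++) {
    const speaker = i % 2 === 0 ? modelB : modelA;  // alternate speakers
    // Re-frame the shared transcript from the current speaker's view:
    // its own past messages become "assistant", the other model's "user".
    const messages = transcript.map(m => ({
      role: m.speaker === speaker ? 'assistant' : 'user',
      content: m.text,
    }));
    transcript.push({ speaker, text: callModel(speaker, messages) });
  }
  return transcript;
}
```

<p>With `turns` unbounded the conversation runs indefinitely, which is exactly what the site does; "explore mode" would hook in where the reply is pushed, letting you pick among candidate continuations.</p>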
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43400746">https://news.ycombinator.com/item?id=43400746</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 18 Mar 2025 15:38:52 +0000</pubDate><link>https://backrooms.directory/</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=43400746</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43400746</guid></item><item><title><![CDATA[New comment by Nav_Panel in "My Time Working at Stripe"]]></title><description><![CDATA[
<p>This is wild to me. As someone who's done some eng management and has also read a lot of psychoanalytic theory, the only things I would actively ask my reports to "open up" about emotionally are what they like and dislike about tasks and processes, so I can better distribute work that satisfies them and remove frictions. That said if they want to tell me more, I'm happy to listen.<p>Otoh when dealing with peers at director or C level, things tend to get significantly more psychological, likely because their facility of judgment, which is ultimately psychological and moral, is functioning with far higher leverage due to their organizational authority.</p>
]]></description><pubDate>Sat, 02 Nov 2024 21:09:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42029241</link><dc:creator>Nav_Panel</dc:creator><comments>https://news.ycombinator.com/item?id=42029241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42029241</guid></item></channel></rss>