<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vibe42</title><link>https://news.ycombinator.com/user?id=vibe42</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 07:34:29 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vibe42" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vibe42 in "Artemis II will use laser beams to live-stream 4K moon footage at 260 Mbps"]]></title><description><![CDATA[
<p>NASA's rendering of the flyby:<p><a href="https://svs.gsfc.nasa.gov/vis/a000000/a005500/a005536/a2_flyby_1min_720p30.mp4" rel="nofollow">https://svs.gsfc.nasa.gov/vis/a000000/a005500/a005536/a2_fly...</a><p>Hope we get to see something like this in 4K!</p>
]]></description><pubDate>Thu, 02 Apr 2026 15:42:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47615983</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47615983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47615983</guid></item><item><title><![CDATA[New comment by vibe42 in "NASA Artemis II moon mission live launch broadcast"]]></title><description><![CDATA[
<p>Mild Space Weather: <a href="https://www.swpc.noaa.gov/" rel="nofollow">https://www.swpc.noaa.gov/</a><p>Moderate geomagnetic storm watch until April 2.</p>
]]></description><pubDate>Wed, 01 Apr 2026 21:15:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47606656</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47606656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47606656</guid></item><item><title><![CDATA[New comment by vibe42 in "Artemis II Launch Day Updates"]]></title><description><![CDATA[
<p>They can move around after they switch from launch to spaceflight config. Apparently they also have some exercise gear for the journey.</p>
]]></description><pubDate>Wed, 01 Apr 2026 20:53:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47606420</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47606420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47606420</guid></item><item><title><![CDATA[New comment by vibe42 in "The Habitable Zone"]]></title><description><![CDATA[
<p>Here's a habitable zone in a different star system:
<a href="https://en.wikipedia.org/wiki/TRAPPIST-1#Habitable_zone" rel="nofollow">https://en.wikipedia.org/wiki/TRAPPIST-1#Habitable_zone</a></p>
]]></description><pubDate>Wed, 01 Apr 2026 20:08:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47605873</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47605873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47605873</guid></item><item><title><![CDATA[New comment by vibe42 in "Darce – AI coding agent for your terminal, any model, 14 kB"]]></title><description><![CDATA[
<p>How does this compare to the pi-mono coding agent?<p><a href="https://github.com/badlogic/pi-mono/tree/main/packages/coding-agent" rel="nofollow">https://github.com/badlogic/pi-mono/tree/main/packages/codin...</a></p>
]]></description><pubDate>Wed, 01 Apr 2026 15:53:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47602543</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47602543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47602543</guid></item><item><title><![CDATA[New comment by vibe42 in "If You Need a Laptop, Buy It Now"]]></title><description><![CDATA[
<p>With 16 GB VRAM one can run a decent quant (Q4-Q8) of the newer, smaller dense models, with room left over for e.g. a 32-256k context.<p>That might not be enough to chew through a large code base, but for smaller projects it can easily fit most if not all of the code base to drive a good coding agent.<p>I don't recommend specific models or model providers given how much hype and BS there is around benchmarks etc. Easiest is to check the latest generation of open models and look for a dense one where a decent quant fits within the VRAM.<p>Some models run fast enough that part of the weights can spill over from VRAM to RAM while maintaining a usable prompt/token-gen speed.</p>
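<p>A rough back-of-the-envelope check for whether a quant fits (a sketch only; the bits-per-weight and KV-cache numbers are illustrative approximations, not from any specific model card):</p>

```python
def model_vram_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate VRAM for the weights: params (billions) * bits / 8."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def fits(params_b, bits, ctx_kv_gb, vram_gb=16.0, overhead_gb=1.0):
    """Weights + KV cache + runtime overhead must fit inside VRAM."""
    return model_vram_gb(params_b, bits) + ctx_kv_gb + overhead_gb <= vram_gb

# A 14B dense model at roughly Q4 (~4.5 bits/weight, ~7.9 GB of weights)
# with ~4 GB reserved for KV cache on a 16 GB card:
print(fits(14, 4.5, ctx_kv_gb=4.0))  # True, with ~3 GB of headroom
```

<p>Trimming the context (and hence the KV reservation) is the usual lever when a slightly larger model almost fits.</p>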
]]></description><pubDate>Wed, 01 Apr 2026 15:48:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47602481</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47602481</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47602481</guid></item><item><title><![CDATA[New comment by vibe42 in "If You Need a Laptop, Buy It Now"]]></title><description><![CDATA[
<p>Higher-end gaming laptops are still decently priced and work well for local AI inference.<p>And Linux runs better than ever on them; I'm running Debian 13 with almost no driver issues.<p>For $2k you can get 32 GB of DDR5 RAM and 16 GB of fast VRAM. Bump the RAM to 64 GB and you're still below $3k.</p>
]]></description><pubDate>Wed, 01 Apr 2026 13:48:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47600831</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47600831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47600831</guid></item><item><title><![CDATA[New comment by vibe42 in "Ask HN: What are you building with AI coding agents / tooling?"]]></title><description><![CDATA[
<p>The main server runs 3x RTX PRO 6000 (288 GB VRAM combined), power-limited to 280 W each (I could crank that up since temps are fine, but I'm about to add some more fans first, as the cards are stacked).<p>The second server is 2x Radeon RX 7900 XTX (48 GB VRAM combined), a fairly recent gaming PC being repurposed. The idea is to power-limit those cards too and run some overnight jobs with small/medium-sized models.<p>Intel just released some 32 GB VRAM cards, but support across AI tooling sounds a bit rough at the moment.</p>
]]></description><pubDate>Tue, 31 Mar 2026 21:56:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593999</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47593999</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593999</guid></item><item><title><![CDATA[New comment by vibe42 in "Securing Elliptic Curve Cryptocurrencies Against Quantum Vulnerabilities [pdf]"]]></title><description><![CDATA[
<p>Ethereum has a new site for PQ research: <a href="https://pq.ethereum.org/" rel="nofollow">https://pq.ethereum.org/</a></p>
]]></description><pubDate>Tue, 31 Mar 2026 20:51:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593311</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47593311</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593311</guid></item><item><title><![CDATA[New comment by vibe42 in "Is AI work starting to feel addictive to anyone else?"]]></title><description><![CDATA[
<p>I've got a light version of this with local models: just one coding agent, and one task takes 1-5 minutes. Running everything locally on constrained hardware helps; I can't really run a ton of agents in parallel at good speeds.<p>During each task I context-switch to some other work, emails, chores etc.<p>It's important to take breaks and assess before starting a new session.</p>
]]></description><pubDate>Tue, 31 Mar 2026 20:23:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47592969</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47592969</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47592969</guid></item><item><title><![CDATA[New comment by vibe42 in "Securing Elliptic Curve Cryptocurrencies Against Quantum Vulnerabilities [pdf]"]]></title><description><![CDATA[
<p>Will be pretty wild when the mass migration of accounts begins.<p>Imagine the analytics of thousands of accounts sending tokens to new accounts. Better use a VPN and migrate at an unusual hour in your time zone :D</p>
]]></description><pubDate>Tue, 31 Mar 2026 19:45:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47592446</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47592446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47592446</guid></item><item><title><![CDATA[New comment by vibe42 in "Do LLMs Break the Sapir-Whorf Hypothesis?"]]></title><description><![CDATA[
<p>One thing to benchmark is whether LLMs are better at solving complex problems when they're described in one natural language versus another.<p>There's SWE-bench Multilingual, for example, but translating a problem into multiple natural languages before passing it to the LLM has not been benchmarked afaik.<p>If some residue of the natural language remains when the middle layers execute, that would in part validate Sapir-Whorf.</p>
]]></description><pubDate>Tue, 31 Mar 2026 19:29:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47592262</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47592262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47592262</guid></item><item><title><![CDATA[New comment by vibe42 in "Ask HN: How to Break into AI Engineering (Revisited)"]]></title><description><![CDATA[
<p>Would recommend reading <a href="https://dnhkng.github.io/posts/rys/" rel="nofollow">https://dnhkng.github.io/posts/rys/</a> and checking his GitHub code to reproduce the findings.<p>It's an easy way to start hacking LLMs; there's much of value there, and it's a fun way to get into the field before tackling heavy math / CS topics.</p>
]]></description><pubDate>Tue, 31 Mar 2026 19:18:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47592129</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47592129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47592129</guid></item><item><title><![CDATA[New comment by vibe42 in "You Can't Escape the AI Tax"]]></title><description><![CDATA[
<p>Decided on DDR4 for a new local AI server; it works well, as I try to keep models in VRAM anyway.<p>For browsers and general apps, devs have blown up memory usage like crazy over the past two decades... there's so much low-hanging fruit in optimizing for reduced RAM usage.<p>As with many things, it was cheaper to just use more memory; now it may become worth it to spend some time thinking really hard about how to get your Electron messaging app to use a few GB less.</p>
]]></description><pubDate>Tue, 31 Mar 2026 17:03:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47590351</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47590351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47590351</guid></item><item><title><![CDATA[New comment by vibe42 in "Ask HN: What are you building with AI coding agents / tooling?"]]></title><description><![CDATA[
<p>Building my own home lab for local AI inference and general-purpose servers. The purpose is to learn more about hardware, Linux, networking, and open-source AI tools.<p>I decided, as a constraint, to exclusively use local AI! This was fun in that the first step became assembling the first server able to run a small local model, which would then assist with everything else.<p>After I got the first one running it was used for almost everything, except it could not assemble the 42U steel server rack... (shoulders hurt a bit now, probably good exercise!)<p>The first thing I tried on the new servers after the first boot of Debian was feeding in the entire dmesg log with one simple instruction: "Check all dmesg entries and provide recommendations for any errors, issues or other considerations".<p>This was very helpful even with smaller local models, as a complement to just searching for various errors (drivers etc.). I learned a lot of new things, like BMC network configs.<p>Home lab networking in general was incredible to work through using local AI. Being a bit rusty on things like firewalls and local DNS, it was refreshing to ask questions so dumb that one might not want them in the logs of hosted AI providers, given a history as a SWE... lol<p>And more complex things, like how packets flow in MikroTik RouterOS.<p>Some general findings:<p>* The latest generation of local AI models is _way_ better than even just 6 months ago. 
In particular, dense models 7B+ are surprisingly useful for anything Linux, network configs, and small to medium-sized scripts.<p>* Latest-gen open models from small AI labs generally beat last-gen models of the same size from larger labs.<p>* Don't trust recommendations for any specific model - try it on real stuff and get messy with it: feed it system/app logs, mad half-spelled ramblings late at night, along with clearer, well-written instructions the next day...<p>* Larger open models at a decent quant (Q5 and up) are now good enough that the bottleneck for many use cases is no longer the model, but your workflow.<p>* Simpler workflows beat complex prompts, skills, AGENT.md etc. I run most things with the pi-mono coding agent with no extensions.<p>* Have the same model verify a finding/claim in a fresh context. This drastically reduces false positives and improves the correctness of findings. Going further, run a third verification with a different model.<p>* If you grew up with the sounds of floppy disks, 56k modems etc., you might just like the coil whine of local GPUs... it's oddly comforting, and different models sound different when working on the same tasks.</p>
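<p>The fresh-context verification step can be sketched roughly like this (a sketch, assuming a local OpenAI-compatible endpoint such as the one llama.cpp's server exposes; the model names and localhost URL are placeholders, not real deployments):</p>

```python
import json

def verification_request(claim: str, model: str) -> dict:
    """Build a chat request that re-checks a claim with no prior context."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Verify the following claim independently. "
                        "Answer VALID or INVALID with a short reason."},
            {"role": "user", "content": claim},
        ],
        "temperature": 0,  # keep the verification pass as deterministic as possible
    }

claim = "The dmesg log shows a failing NVMe drive on nvme1n1."
# First pass: same model in a fresh context; second pass: a different model.
for model in ("local-model-a", "local-model-b"):
    payload = json.dumps(verification_request(claim, model))
    # POST payload to e.g. http://localhost:8080/v1/chat/completions
```

<p>The key point is that the verifier sees only the claim, never the conversation that produced it.</p>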
]]></description><pubDate>Tue, 31 Mar 2026 16:50:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47590158</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47590158</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47590158</guid></item><item><title><![CDATA[New comment by vibe42 in "Vulnerability research is cooked"]]></title><description><![CDATA[
<p>If everyone is running the same models, does this not favour white hat / defense?<p>Since many exploits consist of several vulnerabilities used in a chain, if an LLM finds one in the middle and it's fixed, that can downgrade a zero day to something of more moderate severity.<p>E.g. someone finds a zero day that uses three vulns through different layers. The first and third are super hard to find, but the second is of moderate difficulty.<p>Automated checks by not-even-SOTA models could very well find the moderate-difficulty vuln in the middle, breaking the chain.</p>
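<p>A toy way to quantify this (the per-link detection probabilities are made-up illustrative numbers, not from any study): if defenders' automated scans find each link independently, the chain breaks as soon as any one link is found and fixed.</p>

```python
def chain_break_probability(find_probs):
    """Probability that at least one link in the chain is found and fixed."""
    survive = 1.0
    for p in find_probs:
        survive *= (1.0 - p)  # chain survives only if every link goes unfound
    return 1.0 - survive

# Two hard-to-find links (5% each) around one moderate link (60%):
print(round(chain_break_probability([0.05, 0.60, 0.05]), 3))  # 0.639
```

<p>So even when the hard links are essentially safe from scanning, one moderately findable link makes the whole chain more likely broken than not.</p>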
]]></description><pubDate>Mon, 30 Mar 2026 21:45:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580108</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47580108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580108</guid></item><item><title><![CDATA[New comment by vibe42 in "TurboQuant: Redefining AI efficiency with extreme compression"]]></title><description><![CDATA[
<p>The pace of development in llama.cpp is really high; I could see an implementation being merged in 4-6 weeks.</p>
]]></description><pubDate>Wed, 25 Mar 2026 16:25:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47519537</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47519537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47519537</guid></item><item><title><![CDATA[New comment by vibe42 in "LLM Neuroanatomy II: Modern LLM Hacking and Hints of a Universal Language?"]]></title><description><![CDATA[
<p>Just learned about it the other day from this thread from Feb 2024: <a href="https://old.reddit.com/r/LocalLLaMA/comments/1aqrd7t/i_made_an_inference_sever_that_supports_repeating/" rel="nofollow">https://old.reddit.com/r/LocalLLaMA/comments/1aqrd7t/i_made_...</a><p>It has some interesting GitHub links.</p>
]]></description><pubDate>Tue, 24 Mar 2026 17:26:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47506181</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47506181</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47506181</guid></item><item><title><![CDATA[New comment by vibe42 in "LLM Neuroanatomy II: Modern LLM Hacking and Hints of a Universal Language?"]]></title><description><![CDATA[
<p>This is orthogonal to quantisation. It could have a big impact on smaller models in the 4B-14B range, where people often try specific quants and context sizes to fit into the VRAM of a laptop/desktop GPU.</p>
]]></description><pubDate>Tue, 24 Mar 2026 16:27:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47505180</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47505180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47505180</guid></item><item><title><![CDATA[New comment by vibe42 in "LLM Neuroanatomy II: Modern LLM Hacking and Hints of a Universal Language?"]]></title><description><![CDATA[
<p>Perhaps not widely known, but certainly known in LLM research. A bunch of these experiments were done 2 years ago, and what's interesting is that it still seems to work on the latest models.<p>Though beware that the increased score on math and EQ could lead to other areas scoring less well; I'd love to see how these models score across all open benchmarks.</p>
]]></description><pubDate>Tue, 24 Mar 2026 16:22:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47505083</link><dc:creator>vibe42</dc:creator><comments>https://news.ycombinator.com/item?id=47505083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47505083</guid></item></channel></rss>