<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: schmorptron</title><link>https://news.ycombinator.com/user?id=schmorptron</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 00:22:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=schmorptron" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by schmorptron in "Bitcoin miners are losing on every coin produced as difficulty drops"]]></title><description><![CDATA[
<p>In the gap between costs coming down and profitability returning, is there not an increased risk of Sybil attacks?</p>
]]></description><pubDate>Sat, 11 Apr 2026 17:57:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47732610</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=47732610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47732610</guid></item><item><title><![CDATA[Fixing AMDGPU's VRAM management for low-end GPUs]]></title><description><![CDATA[
<p>Article URL: <a href="https://pixelcluster.github.io/VRAM-Mgmt-fixed/">https://pixelcluster.github.io/VRAM-Mgmt-fixed/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47706024">https://news.ycombinator.com/item?id=47706024</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 09 Apr 2026 16:51:46 +0000</pubDate><link>https://pixelcluster.github.io/VRAM-Mgmt-fixed/</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=47706024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47706024</guid></item><item><title><![CDATA[New comment by schmorptron in "StackOverflow: Retiring the Beta Site"]]></title><description><![CDATA[
<p>Oh, I was thinking more of: user enters a question into SO -> LLM answers on SO -> user evaluates whether the LLM answer was sufficient (or the system itself judges whether the answer is also interesting to other users?) -> the question + answer combo is made public and judged by other users.<p>There are of course several huge issues with this, but that's why I prefaced it with "ideal world" hahaha<p>The biggest is why most users would want their questions publicized if a ChatGPT answer off the Stack Overflow platform would be enough, or even better.<p>Or how existing users and question-answering volunteers would feel about being reduced to cleanup and training data after LLMs</p>
]]></description><pubDate>Sun, 05 Apr 2026 22:01:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47654355</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=47654355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47654355</guid></item><item><title><![CDATA[New comment by schmorptron in "StackOverflow: Retiring the Beta Site"]]></title><description><![CDATA[
<p>That's a hard one. SO's hostility toward newbies, like in any expert community, comes from the longstanding users having seen the basic questions thousands of times and understandably not wanting to answer variations of them over and over, while for the newbies those questions genuinely are new, and they don't yet have the routine knowledge of where to look, or how to even look for solutions in the first place.<p>In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO the harder ones that are still general enough to be applicable to others. LLMs seem to be getting pretty good at those as well though, so I don't know where that leaves us.<p>SO for discussions of taste? "I have these two options to build this, how should I approach it?"
They tried to sell their own GPT wrapper for a while, didn't they? The use case I can see for that is:
User asks a question -> LLM answers it -> user is unsure about the answer -> it gets posted as an SO thread and the rest of the userbase can nitpick or correct the LLM response.<p>Edit: I also seem to remember they had a job portal in the sidebar for a while; what happened to that? Seems like a reasonable revenue stream that is also useful to users.</p>
]]></description><pubDate>Sun, 05 Apr 2026 17:15:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47651561</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=47651561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47651561</guid></item><item><title><![CDATA[New comment by schmorptron in "Caveman Mode Save Token?"]]></title><description><![CDATA[
<p>I used a system prompt similar to this, where I just dumped the entirety of <a href="https://grugbrain.dev/" rel="nofollow">https://grugbrain.dev/</a> into it and prefaced it with the assistant having to emulate grug.<p>Didn't find it particularly useful, but it is funny!</p>
]]></description><pubDate>Sat, 04 Apr 2026 14:53:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47639574</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=47639574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47639574</guid></item><item><title><![CDATA[New comment by schmorptron in "Open Letter to Google on Mandatory Developer Registration for App Distribution"]]></title><description><![CDATA[
<p>Before this LLM age the solution would've been to make the user solve a leetcode problem to access a developer mode.</p>
]]></description><pubDate>Wed, 25 Feb 2026 11:56:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47150361</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=47150361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47150361</guid></item><item><title><![CDATA[Understanding FSR 4]]></title><description><![CDATA[
<p>Article URL: <a href="https://woti.substack.com/p/understanding-fsr-4">https://woti.substack.com/p/understanding-fsr-4</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46746874">https://news.ycombinator.com/item?id=46746874</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 24 Jan 2026 19:39:19 +0000</pubDate><link>https://woti.substack.com/p/understanding-fsr-4</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=46746874</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46746874</guid></item><item><title><![CDATA[New comment by schmorptron in "Mozilla right now (Digital Painting)"]]></title><description><![CDATA[
<p>I actually feel like these integrations are fine, as long as they are opt-in or easy to opt out of permanently. For now, I don't see the harm in adding another default search engine; it's much less obtrusive than the home page sponsored links. And if it gets them a little more independent from Google by siphoning off Perplexity's seemingly infinite VC investment money, so be it.</p>
]]></description><pubDate>Sun, 21 Dec 2025 16:54:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46346187</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=46346187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46346187</guid></item><item><title><![CDATA[New comment by schmorptron in "Bikeshedding, or why I want to build a laptop"]]></title><description><![CDATA[
<p>I wonder if the rigidity could be improved while staying modular, maybe just use many more screws? I don't mind undoing more than 5 screws for the bottom to come off, make it 20 and it's still totally fine.</p>
]]></description><pubDate>Tue, 09 Dec 2025 00:54:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46199965</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=46199965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46199965</guid></item><item><title><![CDATA[New comment by schmorptron in "Electron vs. Tauri"]]></title><description><![CDATA[
<p>What is the implementation difference between using the system WebView (fragmented, and especially bad under Linux) and using one shared Tauri base runtime that only gets breaking-change updates every 2 years or so, so that there aren't twenty different ones running at the same time and it ends up like Electron?<p>Would bundling one extended-support release of Chromium's or Firefox's backend, shared between all Tauri apps, not suffice?</p>
]]></description><pubDate>Sat, 29 Nov 2025 13:56:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46087555</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=46087555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46087555</guid></item><item><title><![CDATA[New comment by schmorptron in "Steam Machine"]]></title><description><![CDATA[
<p>the GabeCube pun practically makes itself</p>
]]></description><pubDate>Thu, 13 Nov 2025 00:19:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45908810</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45908810</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45908810</guid></item><item><title><![CDATA[New comment by schmorptron in "Steam Machine"]]></title><description><![CDATA[
<p>They mention FSR specifically in the trailer, but this ships with RDNA3, meaning no FSR 4 currently. Does this mean the INT8 path for FSR 4 is going to become official to support this and the PS5 Pro?</p>
]]></description><pubDate>Wed, 12 Nov 2025 23:17:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45908280</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45908280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45908280</guid></item><item><title><![CDATA[New comment by schmorptron in "Australia has so much solar that it's offering everyone free electricity"]]></title><description><![CDATA[
<p>It's probably because Germany decided to sort of give up on it, and all of the production and further research moved to China?</p>
]]></description><pubDate>Thu, 06 Nov 2025 20:04:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45839700</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45839700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45839700</guid></item><item><title><![CDATA[New comment by schmorptron in "AMD could enter ARM market with Sound Wave APU built on TSMC 3nm process"]]></title><description><![CDATA[
<p>Now for speculation on top of speculation on top of speculation: Valve's next VR headset, the Deckard / Steam Frame, is also rumored to be using an ARM chip. With Valve being quite close to AMD since the Steam Deck's custom APU (although that one was apparently just something originally intended for Magic Leap before that fell apart), this chip could be in there and be powerful enough to run standalone VR.</p>
]]></description><pubDate>Fri, 31 Oct 2025 11:15:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45770752</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45770752</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45770752</guid></item><item><title><![CDATA[New comment by schmorptron in "Meta Superintelligence Labs' first paper is about RAG"]]></title><description><![CDATA[
<p>Okay yeah that makes sense, thanks!</p>
]]></description><pubDate>Wed, 15 Oct 2025 07:56:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45589348</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45589348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45589348</guid></item><item><title><![CDATA[New comment by schmorptron in "Intel Announces Inference-Optimized Xe3P Graphics Card with 160GB VRAM"]]></title><description><![CDATA[
<p>Rumor has it (according to MLID, so no one knows whether it's accurate) that AMD is also looking to use regular LPDDR memory for some of its lower-end next-gen GPUs, so it doesn't have to contend with Nvidia over limited, cartel-like GDDR7 supply. Maybe they're going to increase parallel bandwidth to compensate? Or they have wholly different tricks up their sleeve.</p>
]]></description><pubDate>Wed, 15 Oct 2025 07:33:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45589190</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45589190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45589190</guid></item><item><title><![CDATA[New comment by schmorptron in "Intel Announces Inference-Optimized Xe3P Graphics Card with 160GB VRAM"]]></title><description><![CDATA[
<p>Maybe not that low, but given it's using LPDDR5 instead of GDDR7, at least the RAM should be a lot cheaper.</p>
]]></description><pubDate>Tue, 14 Oct 2025 19:44:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45583927</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45583927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45583927</guid></item><item><title><![CDATA[New comment by schmorptron in "Intel Announces Inference-Optimized Xe3P Graphics Card with 160GB VRAM"]]></title><description><![CDATA[
<p>Xe3P, as far as I remember, is built in Intel's own fabs, as opposed to Xe3 at TSMC. This could give them a huge advantage as possibly the only competitor not fighting over the same TSMC wafers</p>
]]></description><pubDate>Tue, 14 Oct 2025 19:40:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45583895</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45583895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45583895</guid></item><item><title><![CDATA[New comment by schmorptron in "Meta Superintelligence Labs' first paper is about RAG"]]></title><description><![CDATA[
<p>One thing I don't get about the ever-recurring RAG discussions, and the hype men proclaiming "RAG is dead", is that people seem to be talking about wholly different things?
My mental model is that what is called RAG can be either:<p>- a predefined document store / document-chunk store where every chunk gets a vector embedding, and a similarity lookup decides what gets pulled into context, so you don't have to pull in whole classes of documents and fill it up<p>- or the web-search-like features in LLM chat interfaces, where they do keyword search and pull relevant documents into context, but somehow only ephemerally, with the full documents not taking up context later in the thread (unsure about this, did I understand it right?).<p>With the new models with million-plus-token context windows, some were arguing that we can just throw whole books into the context non-ephemerally, but doesn't that significantly reduce the diversity of possible sources we can include at once if we hard-commit to everything staying in context forever? I guess it might help with consistency? But isn't the mechanism by which we decide what to keep in context still some kind of RAG, just with larger chunks of whole documents instead of only parts?<p>I'd be ecstatic if someone who really knows their stuff could clear this up for me</p>
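<p>A minimal sketch of the first flavor, for illustration only: all names here are hypothetical, and a toy bag-of-words overlap stands in for the learned dense embeddings a real RAG system would use. The point is just the shape of it: embed the query, score every stored chunk, pull only the top hits into context.</p>

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    # Real systems use a learned dense embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank all stored chunks by similarity to the query,
    # and return only the top-k to be placed into context.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "GPUs schedule work in warps of 32 threads",
    "RAG pulls only relevant chunks into the model's context",
    "Sourdough needs a long, slow fermentation",
]
print(retrieve("what chunks does RAG pull into context", chunks, k=1))
```

<p>The ephemeral web-search flavor differs mainly in when retrieval runs (per turn, against a live index) and in whether the fetched text is kept in the conversation afterwards; the ranking step itself looks much the same.</p>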
]]></description><pubDate>Sun, 12 Oct 2025 06:53:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45555908</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45555908</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45555908</guid></item><item><title><![CDATA[New comment by schmorptron in "Show HN: The Little Book of C"]]></title><description><![CDATA[
<p>Oh wow, I would not have caught that. I had a look at the first couple of pages, and as not-a-C-expert, it looked pretty solid to me. Readjusting our heuristics to generated slop (or even non-slop?) is gonna take so much more energy than before.<p>Although I've also been thinking about the overall role of effort in products, art, or any output, really. The effort necessary to produce something is, or at least was, some indicator of quality: it meant the author spent a certain amount of time with the material, and probably didn't want to release something bad if they had to put in a certain threshold of effort anyway. With that gone, of course some people are gonna get their productivity enhanced and use this tool to make even better things, more often. But having to expend even more energy as a consumer to find out whether something is worth it is incredibly hard.</p>
]]></description><pubDate>Sun, 05 Oct 2025 06:41:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45479353</link><dc:creator>schmorptron</dc:creator><comments>https://news.ycombinator.com/item?id=45479353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45479353</guid></item></channel></rss>