<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sn0wr8ven</title><link>https://news.ycombinator.com/user?id=sn0wr8ven</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 20 Apr 2026 19:27:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sn0wr8ven" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sn0wr8ven in "A Pascal's Wager for AI Doomers"]]></title><description><![CDATA[
<p>Not nothing, but nothing compared to the amount being ordered and invested. I think Nvidia has enough orders to carry it through 2027, so production is way behind demand. A lot of companies, though, aren't even using the limited amount of hardware being produced now, and that includes Microsoft, Meta, etc. The hardware side is certainly lagging far behind the orders. The software side is fairly clear to most people: none of the companies are really making returns on $100B investments, which is evident from recent estimates and project shutdowns, Sora in particular. So when the $100B, or now I think $1 trillion, being quoted around is just floating, producing limited goods and limited value from those goods, it becomes worrying. The extra valuation isn't resulting in extra value, only limited if not negative value.</p>
]]></description><pubDate>Mon, 20 Apr 2026 14:08:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47834609</link><dc:creator>sn0wr8ven</dc:creator><comments>https://news.ycombinator.com/item?id=47834609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47834609</guid></item><item><title><![CDATA[New comment by sn0wr8ven in "A Pascal's Wager for AI Doomers"]]></title><description><![CDATA[
<p>They are not convinced, simply worried. If you look at Nvidia, Microsoft, OpenAI, Oracle, etc., passing around $100B USD without it actually resulting in anything being produced, it becomes worrying. I don't think the author is convinced, simply worried.<p>Specifically, it is the arrangement of "I will invest $100 billion in you, and you will use that money to buy $100 billion worth of goods from me. Both our balance sheets look good, and neither of us spent anything." As I understand it, this kind of deal isn't uncommon in finance, but it has never happened on this scale across this many companies.</p>
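<p>To make the round trip concrete, here is a toy sketch; the figures and both parties are hypothetical stand-ins, not actual company financials:</p><pre><code>
# Toy sketch of the circular deal described above. All figures
# are in $B; "a" and "b" are hypothetical firms.
a = {"cash": 100, "stake_in_b": 0, "revenue": 0}  # the investor/vendor
b = {"cash": 0, "goods": 0}                       # the startup

# Step 1: A invests $100B in B.
a["cash"] -= 100
a["stake_in_b"] += 100  # A books an equity stake, so its balance sheet holds
b["cash"] += 100

# Step 2: B spends that $100B buying goods (chips, compute) from A.
b["cash"] -= 100
b["goods"] += 100
a["cash"] += 100
a["revenue"] += 100     # A also books $100B of revenue

print(a)  # {'cash': 100, 'stake_in_b': 100, 'revenue': 100}
print(b)  # {'cash': 0, 'goods': 100}
# A's cash is back where it started, yet it now shows $100B of revenue
# plus a $100B stake, and B shows $100B of goods. Both look bigger,
# but no outside money ever entered the loop.
</code></pre>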
]]></description><pubDate>Mon, 20 Apr 2026 13:59:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47834469</link><dc:creator>sn0wr8ven</dc:creator><comments>https://news.ycombinator.com/item?id=47834469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47834469</guid></item><item><title><![CDATA[New comment by sn0wr8ven in "Prism"]]></title><description><![CDATA[
<p>It is nice for academics, but I would ask why. These aren't tasks you can't do yourself. Yes, it's all in one place, but it's not as if doing the exact same things before was ridiculous to set up.<p>A comparison that comes to mind is the n8n-style workflow product they put out before. n8n takes setup. Proofreading, asking for more relevant papers, converting pictures to LaTeX code, etc., take no setup at all. People do these things almost identically with or without this tool.</p>
]]></description><pubDate>Wed, 28 Jan 2026 11:55:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46794180</link><dc:creator>sn0wr8ven</dc:creator><comments>https://news.ycombinator.com/item?id=46794180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46794180</guid></item><item><title><![CDATA[New comment by sn0wr8ven in "OpenAI O3 breakthrough high score on ARC-AGI-PUB"]]></title><description><![CDATA[
<p>Incredibly impressive. I still can't shake the feeling that this is o3 gaming the benchmark more than actually reasoning. If the reasoning capability were there, there should be no reason for it to score 90% on one version and 30% on the next. If a human maintains the same performance across the two versions, an AI that can reason should too.</p>
]]></description><pubDate>Sat, 21 Dec 2024 15:55:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42480358</link><dc:creator>sn0wr8ven</dc:creator><comments>https://news.ycombinator.com/item?id=42480358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42480358</guid></item><item><title><![CDATA[New comment by sn0wr8ven in "Ask HN: What is the role of data scientists in the age of LLMs?"]]></title><description><![CDATA[
<p>The same data science work that data scientists have done before, plus integrating LLMs into their workflow, I guess. Databases are not being fed into LLMs. An LLM takes a text input and gives a text output; it is not taking a text input and handing back a pile of data. It might be able to refer to the data, but it does not replace data scientists, yet. (The token limit sees to that.)</p>
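<p>A back-of-the-envelope sketch of that token limit; the ~4 characters per token ratio and the 128k window are rule-of-thumb assumptions, not figures for any particular model:</p><pre><code>
# Why a whole database doesn't fit in a prompt. The chars/token
# ratio and window size below are assumptions, not exact specs.
CONTEXT_WINDOW_TOKENS = 128_000
CHARS_PER_TOKEN = 4  # crude average for English text

# Pretend this is one modest table: 1M rows, ~80 characters each.
rows, avg_row_chars = 1_000_000, 80
table_tokens = rows * avg_row_chars // CHARS_PER_TOKEN

print(table_tokens)                           # 20,000,000 tokens
print(table_tokens // CONTEXT_WINDOW_TOKENS)  # ~156 full context windows
</code></pre>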
]]></description><pubDate>Wed, 22 May 2024 17:47:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=40443851</link><dc:creator>sn0wr8ven</dc:creator><comments>https://news.ycombinator.com/item?id=40443851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40443851</guid></item><item><title><![CDATA[New comment by sn0wr8ven in "Ask HN: Which LLMs can run locally on most consumer computers"]]></title><description><![CDATA[
<p>There definitely are smaller LLMs that can run on consumer computers, but as for their performance... you would be lucky to get a coherent sentence out of them. On the other hand, sending and receiving responses as text is probably the fastest and most realistic way to implement these things in games.</p>
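<p>A minimal sketch of that text-in/text-out approach; it assumes an Ollama-style local server on localhost:11434 and a model tag of "llama3", both placeholders for whatever local runtime and model you actually have:</p><pre><code>
# Blocking request to a local model server; the endpoint and
# model name are assumed, swap in your own local runtime.
import requests

def npc_reply(player_line: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": f"You are a terse shopkeeper NPC. Player says: {player_line}\nNPC:",
            "stream": False,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

# In a game you would call this from a worker thread so a slow
# local model never stalls the frame loop.
print(npc_reply("Do you buy rusty swords?"))
</code></pre>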
]]></description><pubDate>Wed, 22 May 2024 17:38:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=40443755</link><dc:creator>sn0wr8ven</dc:creator><comments>https://news.ycombinator.com/item?id=40443755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40443755</guid></item></channel></rss>