<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: grokblah</title><link>https://news.ycombinator.com/user?id=grokblah</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 05 May 2026 08:30:21 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=grokblah" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by grokblah in "DeepWiki: Understand Any Codebase"]]></title><description><![CDATA[
<p>That’s a very intriguing observation.<p>(I haven’t read how it works but…)
I wonder if removing file sizes, commit counts, and other numerical metadata would significantly change the output, or whether all of the files are simply glommed into one large input with path+filename markers?</p>
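The glomming idea is easy to sketch. A minimal, purely hypothetical version (this is not DeepWiki's actual pipeline, just one plausible shape for such an input):

```python
# Hypothetical sketch of "glomming" a repo into one LLM input with
# path+filename markers, deliberately omitting file sizes, commit
# counts, and other numerical metadata.
from pathlib import Path

def glom_repo(root: str, exts: tuple[str, ...] = (".py", ".md")) -> str:
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            rel = path.relative_to(root)
            # A path marker precedes each file, so the model sees only
            # paths and contents -- no numeric metadata at all.
            parts.append(f"=== {rel} ===\n{path.read_text(errors='replace')}")
    return "\n\n".join(parts)
```

Comparing model output on this against a variant that re-adds the metadata would be one way to test the question above.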
]]></description><pubDate>Tue, 26 Aug 2025 01:02:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45021077</link><dc:creator>grokblah</dc:creator><comments>https://news.ycombinator.com/item?id=45021077</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45021077</guid></item><item><title><![CDATA[New comment by grokblah in "Ask HN: Which laptop can run the largest LLM model?"]]></title><description><![CDATA[
<p>Interesting, I found it on Amazon for $5k:
<a href="https://a.co/d/h085rvP" rel="nofollow">https://a.co/d/h085rvP</a><p>That’s the same price as an M4 Max MBP with the same RAM and storage. Any idea how they compare in performance?</p>
]]></description><pubDate>Thu, 14 Aug 2025 19:01:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44904261</link><dc:creator>grokblah</dc:creator><comments>https://news.ycombinator.com/item?id=44904261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44904261</guid></item><item><title><![CDATA[New comment by grokblah in "Ask HN: Which laptop can run the largest LLM model?"]]></title><description><![CDATA[
<p>Looks like the MacBook Pro might be more cost effective? I like the support for larger models. Thanks!</p>
]]></description><pubDate>Thu, 14 Aug 2025 18:54:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44904191</link><dc:creator>grokblah</dc:creator><comments>https://news.ycombinator.com/item?id=44904191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44904191</guid></item><item><title><![CDATA[Ask HN: Which laptop can run the largest LLM model?]]></title><description><![CDATA[
<p>I’d like to experiment with LLMs locally and understand their infrastructure better.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44902452">https://news.ycombinator.com/item?id=44902452</a></p>
<p>Points: 3</p>
<p># Comments: 4</p>
]]></description><pubDate>Thu, 14 Aug 2025 16:30:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44902452</link><dc:creator>grokblah</dc:creator><comments>https://news.ycombinator.com/item?id=44902452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44902452</guid></item><item><title><![CDATA[New comment by grokblah in "DoubleClickjacking: A New type of web hacking technique"]]></title><description><![CDATA[
<p>This could be mitigated by solving a longstanding UX issue: UI elements changing just before you click or tap.<p>Why not, by default, prevent interactions with newly visible UI elements, or ones newly moved to that location? I find it incredibly annoying when a page is loading and things appear or move as I’m clicking or tapping. A nice improvement would be feedback that your action was ineffective or blocked.</p>
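The timing rule proposed above can be stated language-agnostically: honor an interaction only if the target has been both visible and stationary for some minimum time. A sketch of that core check (the threshold and names are hypothetical; a real browser implementation would feed it from visibility/position observers and a capture-phase click handler):

```python
# Hypothetical stability window: an element must have been visible and
# unmoved for this long before clicks on it are honored.
STABLE_MS = 500

def is_click_allowed(appeared_at: float, last_moved_at: float,
                     now: float, stable_ms: float = STABLE_MS) -> bool:
    # Block the click if the element appeared, or last moved, too
    # recently -- the case where the page shifted under the user.
    return (now - appeared_at >= stable_ms
            and now - last_moved_at >= stable_ms)
```

When the check fails, the handler would swallow the event and show the "your action was blocked" feedback instead of dispatching the click.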
]]></description><pubDate>Sat, 18 Jan 2025 19:42:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=42750834</link><dc:creator>grokblah</dc:creator><comments>https://news.ycombinator.com/item?id=42750834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42750834</guid></item><item><title><![CDATA[New comment by grokblah in "Leaked OpenAI Docs Show Sam Altman Clearly Aware of Silencing Former Employees"]]></title><description><![CDATA[
<p>OpenAI sounds like a wolf in sheep's clothing.</p>
]]></description><pubDate>Tue, 28 May 2024 15:49:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=40501956</link><dc:creator>grokblah</dc:creator><comments>https://news.ycombinator.com/item?id=40501956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40501956</guid></item><item><title><![CDATA[New comment by grokblah in "Ways to teach kids to code (2016)"]]></title><description><![CDATA[
<p>They forgot Scratch!<p><a href="https://scratch.mit.edu" rel="nofollow noreferrer">https://scratch.mit.edu</a></p>
]]></description><pubDate>Fri, 04 Aug 2023 20:20:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=37005338</link><dc:creator>grokblah</dc:creator><comments>https://news.ycombinator.com/item?id=37005338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37005338</guid></item><item><title><![CDATA[New comment by grokblah in "We don't show typing status"]]></title><description><![CDATA[
<p>I miss chatting with that! It was so interactive. I wonder if it could be translated to communication between more than two parties. It sure would be interesting to see a prototype of something like that.</p>
]]></description><pubDate>Wed, 01 Jun 2022 21:54:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=31588644</link><dc:creator>grokblah</dc:creator><comments>https://news.ycombinator.com/item?id=31588644</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31588644</guid></item></channel></rss>