<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: imranq</title><link>https://news.ycombinator.com/user?id=imranq</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 16:15:57 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=imranq" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by imranq in "Your phone is an entire computer"]]></title><description><![CDATA[
<p>Okay, so the OP is saying that since the MacBook Neo has the same hardware as the iPhone but isn't locked down, why is the iPhone locked down? They say it's because of App Store profits.<p>Sure, the App Store revenue is not to be understated, but I'd add that our phones hold far more personal information than a laptop: NFC for credit cards, personal photos, and all our biometric and contact information. Not to mention the cellular network connection; a phone generally serves as a soft form of identity. None of this applies to a laptop, so form factor does matter.<p>BUT even if we unlocked the iPhone, the desire for 'macOS on iPhone' is actually the wrong thing to ask for. Pete Steinberger argued in this interview (<a href="https://www.youtube.com/watch?v=AcwK1Uuwc0U&t=1182" rel="nofollow">https://www.youtube.com/watch?v=AcwK1Uuwc0U&t=1182</a>) that UI is basically the wrong paradigm in a world where agents should do tasks for us in milliseconds. We should be able to run any local service from our phone.<p>The good news is we already have this via terminal apps on Android. What's left is the ability for agents to run on your device and accomplish tasks for you</p>
]]></description><pubDate>Fri, 13 Mar 2026 21:16:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47370041</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=47370041</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47370041</guid></item><item><title><![CDATA[New comment by imranq in "Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs"]]></title><description><![CDATA[
<p>Amazing write-up, and I wish more people showed the process of discovery, which is often even more interesting than the result itself.<p>Still, the result is really interesting: being able to stack abstract reasoning layers and get better performance, with heat maps to show the results.<p>The academic literature seems to be catching up:<p>- *[SOLAR / DUS (Kim et al., 2023)](<a href="https://arxiv.org/abs/2312.15166" rel="nofollow">https://arxiv.org/abs/2312.15166</a>)* — duplicated transformer layers to build a 10.7B model that outperformed 30B-parameter baselines.<p>- *[The Curse of Depth (2025)](<a href="https://arxiv.org/abs/2502.05795" rel="nofollow">https://arxiv.org/abs/2502.05795</a>)* — explains <i>why</i> this works: Pre-LN causes deep transformer layers to converge toward identity functions, meaning middle layers are where real computation happens, and duplicating them concentrates that capacity.<p>- *[Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach (Geiping et al., NeurIPS 2025)](<a href="https://arxiv.org/abs/2502.05171" rel="nofollow">https://arxiv.org/abs/2502.05171</a>)* — takes the idea to its logical conclusion: a model trained with a <i>single</i> recurrent block repeated at inference time, scaling reasoning depth without adding parameters.</p>
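<p>For readers curious what "duplicated transformer layers" means concretely, here is a minimal sketch of the SOLAR/DUS index arithmetic in plain Python (layer indices only, no actual weights; the 32-layer base and overlap of 8 are the configuration reported in the SOLAR paper):

```python
def depth_up_scale(layers, overlap):
    """Depth up-scaling (DUS): stack two copies of the model,
    dropping the last `overlap` layers from the first copy and
    the first `overlap` layers from the second copy."""
    first_copy = layers[:len(layers) - overlap]   # e.g. layers 0..23
    second_copy = layers[overlap:]                # e.g. layers 8..31
    return first_copy + second_copy

base = list(range(32))            # layer indices of a 32-layer base model
scaled = depth_up_scale(base, 8)  # 48 layers, as in SOLAR 10.7B
print(len(scaled))
```

Note that the middle span (layers 8..23) appears twice in the result, which is exactly the region where, per the Curse of Depth observation above, most of the real computation happens.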
]]></description><pubDate>Tue, 10 Mar 2026 20:05:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47328151</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=47328151</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47328151</guid></item><item><title><![CDATA[New comment by imranq in "MacBook Neo"]]></title><description><![CDATA[
<p>I think it might be 2020, when the M1 was released, since I remember I bought a MacBook in 2019 and it was still Intel</p>
]]></description><pubDate>Wed, 04 Mar 2026 14:35:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47247976</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=47247976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47247976</guid></item><item><title><![CDATA[New comment by imranq in "Clawdbot - open source personal AI assistant"]]></title><description><![CDATA[
<p>I really like Clawdbot's gloves-off approach to safety: no handholding, no just saying yes to every permission.<p>I set it up on an old MacBook Pro I had with a broken screen, and it works great. Now I just message my server using Telegram and it does research for me, organizes my notes, and builds small apps on the fly to help with learning.<p>However, security is a real concern. I need to understand how to create a comprehensive set of allowlists before expanding into anything more serious like bill payments or messaging people</p>
]]></description><pubDate>Mon, 26 Jan 2026 01:10:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46760555</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=46760555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46760555</guid></item><item><title><![CDATA[New comment by imranq in "I'm 34. Here's 34 things I wish I knew at 21"]]></title><description><![CDATA[
<p>Some great life lessons here, but also some I don't agree with:<p>- "The lazy person works twice as hard."
Often I've found you can save a lot of time by doing the minimum possible, and you gain a lot of insight into why something is minimal or not.<p>- "The opinion of the person who rarely offers it is listened to more closely."
I've found the opposite to be true: those who don't offer their thoughts frequently are often dismissed when they do want to share something.<p>Anyway, many of the points are great. I would also add: keep a journal and write down what was meaningful throughout the day. You will find time passing with more quality, since you know what to take and what to avoid</p>
]]></description><pubDate>Thu, 22 Jan 2026 12:44:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46718511</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=46718511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46718511</guid></item><item><title><![CDATA[New comment by imranq in "Flux 2 Klein pure C inference"]]></title><description><![CDATA[
<p>Just because it's in C doesn't mean you'll get C-like performance. Just look at the benchmarks: it's 8x slower than just using PyTorch. While I get that it's cool to use LLMs to generate code at this level, producing highly optimized, high-performance code is very much out of the domain of current frontier LLMs</p>
]]></description><pubDate>Mon, 19 Jan 2026 04:44:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46675135</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=46675135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46675135</guid></item><item><title><![CDATA[New comment by imranq in "Reading across books with Claude Code"]]></title><description><![CDATA[
<p>I really liked the approach of getting new topics to research via embeddings, trails, and Claude Code, but what will this often give you beyond novelty?</p>
]]></description><pubDate>Sun, 18 Jan 2026 04:41:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46664832</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=46664832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46664832</guid></item><item><title><![CDATA[New comment by imranq in "Show HN: NeurIPS 2025 Explorer – 5000+ papers, 20+ interactive explainers"]]></title><description><![CDATA[
<p>Hey HN! I built an explorer for all NeurIPS 2025 Main Conference and workshop papers, with reviews, scores, and code links.<p>But the unique feature: AI-generated "explainers" that break down complex papers with interactive visualizations. Example: <a href="https://neurips2025.pages.dev/explainers/linear_attention/" rel="nofollow">https://neurips2025.pages.dev/explainers/linear_attention/</a><p>It explains why attention is hard to optimize, shows the math with interactive demos, and includes critical analysis of limitations.<p>The explainers are generated using Gemini 3 to parse papers and create:<p>- Interactive visualizations<p>- Step-by-step mathematical walkthroughs<p>- Critical analysis sections<p>- "What would convince me?" sections<p>Tech stack: OpenReview API, Gemini API for explainer generation, static hosting on Cloudflare Pages for speed.<p>I'm planning to generate explainers for more papers based on what people find interesting, so any feedback would be amazing</p>
]]></description><pubDate>Thu, 27 Nov 2025 04:16:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46065573</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=46065573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46065573</guid></item><item><title><![CDATA[Show HN: NeurIPS 2025 Explorer – 5000+ papers, 20+ interactive explainers]]></title><description><![CDATA[
<p>Article URL: <a href="https://neurips2025.pages.dev/">https://neurips2025.pages.dev/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46065572">https://news.ycombinator.com/item?id=46065572</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 27 Nov 2025 04:16:48 +0000</pubDate><link>https://neurips2025.pages.dev/</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=46065572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46065572</guid></item><item><title><![CDATA[New comment by imranq in "LLM-Deflate: Extracting LLMs into Datasets"]]></title><description><![CDATA[
<p>The claims in this paper don't make sense. There is no proof that anything has been decompressed.</p>
]]></description><pubDate>Sat, 20 Sep 2025 12:53:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45312978</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=45312978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45312978</guid></item><item><title><![CDATA[New comment by imranq in "DeepSeek-v3.1"]]></title><description><![CDATA[
<p>There is one: <a href="https://pricepertoken.com/" rel="nofollow">https://pricepertoken.com/</a></p>
]]></description><pubDate>Thu, 21 Aug 2025 22:35:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44978949</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44978949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44978949</guid></item><item><title><![CDATA[New comment by imranq in "New treatment eliminates bladder cancer in 82% of patients"]]></title><description><![CDATA[
<p>Turning it off and then on again works in a lot of surprising places</p>
]]></description><pubDate>Wed, 13 Aug 2025 16:20:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44890461</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44890461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44890461</guid></item><item><title><![CDATA[New comment by imranq in "I tried every todo app and ended up with a .txt file"]]></title><description><![CDATA[
<p>People basically want a life coach, someone by their side who can tell them what the best next thing to do is at any given moment. Everything else is just an approximation of that ideal.<p>The author's .txt file works because its simplicity forces a daily ritual of self-coaching. The tool demands that the user manually review, prioritize, and decide what matters. There are no features to hide behind, only the discipline of the process itself.<p>The impulse to use complex apps or build custom scripts is the attempt to engineer a better coach. We try to automate the prioritization and reminders, hoping the system can do the coaching for us.<p>The great trap, of course, is when we fall in love with engineering the system instead of doing the work. This turns productivity into a sophisticated form of procrastination.<p>Ultimately, the best system is the one that removes the most friction between decision and action. For the author, that meant stripping away everything but the list itself.</p>
]]></description><pubDate>Tue, 12 Aug 2025 15:44:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44877820</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44877820</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44877820</guid></item><item><title><![CDATA[New comment by imranq in "Building better AI tools"]]></title><description><![CDATA[
<p>While I agree with the author's vision for a more human-centric AI, I think we're closer to that than the article suggests. The core issue is that the default behavior is what's being criticized. The instruction-following capabilities of modern models mean we can already build these Socratic, guiding systems by creating specific system prompts and tools (like MCP servers). The real challenge isn't technical feasibility, but rather shifting the product design philosophy away from 'magic button' solutions toward these more collaborative, and ultimately more effective, workflows</p>
]]></description><pubDate>Wed, 23 Jul 2025 23:14:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44665041</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44665041</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44665041</guid></item><item><title><![CDATA[New comment by imranq in "Andrew Ng: Building Faster with AI [video]"]]></title><description><![CDATA[
<p>My two takeaways:
1) Having a precise vision of what you want to achieve
2) Being able to control / steer AI toward that vision<p>Teams that can do both of these things, especially #1, will move much faster. Even if they are wrong, it's better than vague ideas that get applause but not customers</p>
]]></description><pubDate>Fri, 11 Jul 2025 21:15:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44536852</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44536852</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44536852</guid></item><item><title><![CDATA[New comment by imranq in "We accidentally solved robotics by watching 1M hours of YouTube"]]></title><description><![CDATA[
<p>This was a bit hard to read. It would be good to have a narrative structure and a clearer explanation of the concepts.</p>
]]></description><pubDate>Sun, 29 Jun 2025 16:35:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44414365</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44414365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44414365</guid></item><item><title><![CDATA[New comment by imranq in "MCP in LM Studio"]]></title><description><![CDATA[
<p>I'd love to host my own LLMs, but I keep getting held back by the quality and affordability of cloud LLMs. Why go local unless there's private data involved?</p>
]]></description><pubDate>Wed, 25 Jun 2025 19:35:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44381075</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44381075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44381075</guid></item><item><title><![CDATA[New comment by imranq in "Breaking Quadratic Barriers: A Non-Attention LLM for Ultra-Long Context Horizons"]]></title><description><![CDATA[
<p>I like the idea of removing quadratic scaling for attention, but this paper has thin experimental support. No real tasks are tested beyond perplexity: nothing on reasoning, retrieval QA, or summarization quality. Even on perplexity the gains are marginal.<p>However, it does remove attention, so I think the space of non-attention models is worth watching</p>
]]></description><pubDate>Mon, 16 Jun 2025 20:57:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44293327</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44293327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44293327</guid></item><item><title><![CDATA[New comment by imranq in "USDA Pomological Watercolors"]]></title><description><![CDATA[
<p>The powers of kerning are great indeed</p>
]]></description><pubDate>Mon, 16 Jun 2025 20:25:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44293078</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44293078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44293078</guid></item><item><title><![CDATA[New comment by imranq in "Show HN: I built a more productive way to manage AI chats"]]></title><description><![CDATA[
<p>Nice idea!<p>I think it would be better if it was just context and not connected to any model. Think of one place where you can hook in your drive folder, GitHub, etc. and have it produce the best context for the task you want to achieve. Then users can copy that to their model or workflow of choice</p>
]]></description><pubDate>Fri, 23 May 2025 22:00:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44077011</link><dc:creator>imranq</dc:creator><comments>https://news.ycombinator.com/item?id=44077011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44077011</guid></item></channel></rss>