<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hislaziness</title><link>https://news.ycombinator.com/user?id=hislaziness</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 14:28:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hislaziness" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hislaziness in "The current state of LLM-driven development"]]></title><description><![CDATA[
<p>Would it be more appropriate to compare LLMs to Auto-Tune rather than to pianos?</p>
]]></description><pubDate>Tue, 12 Aug 2025 02:10:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44871577</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=44871577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44871577</guid></item><item><title><![CDATA[New comment by hislaziness in "How to break the 'AI hype cycle'"]]></title><description><![CDATA[
<p>Akamai CTO Robert Blumofe offers four tips for business leaders striving to foster AI fluency by empowering employees with the right tools and best use cases.<p>Useful insights, but definitely a PR article.</p>
]]></description><pubDate>Mon, 21 Jul 2025 23:54:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44641750</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=44641750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44641750</guid></item><item><title><![CDATA[New comment by hislaziness in "What the Fuck Python"]]></title><description><![CDATA[
<p>Great. I enjoy these kinds of articles. My all-time favorite book for C is Expert C Programming: Deep C Secrets.</p>
]]></description><pubDate>Sun, 20 Jul 2025 01:56:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44621230</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=44621230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44621230</guid></item><item><title><![CDATA[New comment by hislaziness in "OpenAI claims gold-medal performance at IMO 2025"]]></title><description><![CDATA[
<p>Terence Tao on the matter - <a href="https://imgur.com/a/terence-tao-on-supposed-gold-imo-sMKP0bm" rel="nofollow">https://imgur.com/a/terence-tao-on-supposed-gold-imo-sMKP0bm</a></p>
]]></description><pubDate>Sun, 20 Jul 2025 01:45:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44621180</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=44621180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44621180</guid></item><item><title><![CDATA[New comment by hislaziness in "Controllable Fast and Slow Thinking by Learning with Randomized Reasoning Traces"]]></title><description><![CDATA[
<p>As I understand it, the LLM uses the techniques of Searchformer - <a href="https://arxiv.org/abs/2402.14083" rel="nofollow">https://arxiv.org/abs/2402.14083</a> - doing "slow thinking" by performing an A* search with a transformer.</p>
]]></description><pubDate>Wed, 23 Oct 2024 05:03:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=41921983</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41921983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41921983</guid></item><item><title><![CDATA[Controllable Fast and Slow Thinking by Learning with Randomized Reasoning Traces]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2410.09918">https://arxiv.org/abs/2410.09918</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41921635">https://news.ycombinator.com/item?id=41921635</a></p>
<p>Points: 44</p>
<p># Comments: 15</p>
]]></description><pubDate>Wed, 23 Oct 2024 03:51:45 +0000</pubDate><link>https://arxiv.org/abs/2410.09918</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41921635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41921635</guid></item><item><title><![CDATA[New comment by hislaziness in "$2 H100s: How the GPU Rental Bubble Burst"]]></title><description><![CDATA[
<p>It is not just MSRP; management and operations costs matter too. The article goes into the details of this.</p>
]]></description><pubDate>Fri, 11 Oct 2024 04:03:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=41805991</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41805991</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41805991</guid></item><item><title><![CDATA[New comment by hislaziness in "$2 H100s: How the GPU Rental Bubble Burst"]]></title><description><![CDATA[
<p>The details are in the article. They have done the math.</p>
]]></description><pubDate>Fri, 11 Oct 2024 03:57:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=41805949</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41805949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41805949</guid></item><item><title><![CDATA[New comment by hislaziness in "$2 H100s: How the GPU Rental Bubble Burst"]]></title><description><![CDATA[
<p>TLDR: Don’t buy H100s. The market has flipped from shortage ($8/hr) to oversupply ($2/hr) because of reserved-compute resales, open-model finetuning, and a decline in new foundation-model companies. Rent instead.<p>Is the AI infra bubble already bursting?</p>
]]></description><pubDate>Fri, 11 Oct 2024 02:49:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=41805550</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41805550</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41805550</guid></item><item><title><![CDATA[New comment by hislaziness in "Task-Switching Experiment (2015)"]]></title><description><![CDATA[
<p>The screen seems to be stuck at "Wait a second..." for me.</p>
]]></description><pubDate>Mon, 07 Oct 2024 09:43:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=41764325</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41764325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41764325</guid></item><item><title><![CDATA[New comment by hislaziness in "Reflection 70B, the top open-source model"]]></title><description><![CDATA[
<p>I tried a few local LLMs. None of them could give me the right answer to "How many 'r's in strawberry?" All the models were 8-27B.</p>
]]></description><pubDate>Sat, 07 Sep 2024 01:20:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=41470901</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41470901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41470901</guid></item><item><title><![CDATA[New comment by hislaziness in "Looming Liability Machines (LLMs)"]]></title><description><![CDATA[
<p>I know you mean this in jest, but we are much closer to this than we would imagine; the use of LLMs to process communication/translation is becoming ubiquitous. We are one bad translation away from a disaster.</p>
]]></description><pubDate>Sun, 25 Aug 2024 01:42:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=41343607</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41343607</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41343607</guid></item><item><title><![CDATA[New comment by hislaziness in "Psssst! Your date of birth can be a random number!"]]></title><description><![CDATA[
<p>I also use some email providers' ability to append +xyz to the username. So for a registration I would use user.name+sitea@domain.com. It has helped me track spam and leaks in the past.</p>
]]></description><pubDate>Thu, 15 Aug 2024 07:14:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=41253755</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41253755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41253755</guid></item><item><title><![CDATA[New comment by hislaziness in "Llama 3.1 in C"]]></title><description><![CDATA[
<p>Cool. I will try it out. I tried the same with Ollama; the non-English part needs a lot more polish. Do you see the outcome being any different?</p>
]]></description><pubDate>Wed, 24 Jul 2024 05:19:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41053839</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41053839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41053839</guid></item><item><title><![CDATA[New comment by hislaziness in "CLISP Port to WASM"]]></title><description><![CDATA[
<p>This is pretty awesome. How did you do it? Any blog on the details?</p>
]]></description><pubDate>Wed, 24 Jul 2024 02:23:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=41053081</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41053081</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41053081</guid></item><item><title><![CDATA[New comment by hislaziness in "Mistral NeMo"]]></title><description><![CDATA[
<p>The model description on huggingface says - Model size - 12.2B params, Tensor type - BF16. Is the Tensor type different from the training param size?</p>
]]></description><pubDate>Fri, 19 Jul 2024 00:04:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=41000896</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41000896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41000896</guid></item><item><title><![CDATA[New comment by hislaziness in "Mistral NeMo"]]></title><description><![CDATA[
<p>I just checked Hugging Face and the model files download is about 25GB, but in a comment below someone mentioned it is an fp8-quantized model. I am trying to understand how the quantization affects the model (and RAM) size. Can someone please enlighten me?</p>
]]></description><pubDate>Fri, 19 Jul 2024 00:01:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=41000885</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=41000885</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41000885</guid></item><item><title><![CDATA[New comment by hislaziness in "Mistral NeMo"]]></title><description><![CDATA[
<p>Isn't it 2 bytes (fp16) per param? So 7B = 14 GB, plus some overhead for inference?</p>
]]></description><pubDate>Thu, 18 Jul 2024 16:03:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=40997003</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=40997003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40997003</guid></item><item><title><![CDATA[New comment by hislaziness in "[dead]"]]></title><description><![CDATA[
<p>The title does not do justice to the article. It talks about a new classification system that OpenAI has introduced for LLMs.</p>
]]></description><pubDate>Mon, 15 Jul 2024 14:29:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=40968186</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=40968186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40968186</guid></item><item><title><![CDATA[New comment by hislaziness in "ThankYouHN: 14 Years"]]></title><description><![CDATA[
<p>Same here. Coincidentally I joined in June 2009 and this is the only social media I indulge in.</p>
]]></description><pubDate>Fri, 07 Jun 2024 04:11:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=40605266</link><dc:creator>hislaziness</dc:creator><comments>https://news.ycombinator.com/item?id=40605266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40605266</guid></item></channel></rss>