<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News - Newest: &#34;LLM&#34;</title><link>https://news.ycombinator.com/newest</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 05 May 2026 08:59:59 +0000</lastBuildDate><atom:link href="https://hnrss.org/newest?q=LLM" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Train Your Own LLM from Scratch]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/angelos-p/llm-from-scratch">https://github.com/angelos-p/llm-from-scratch</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48017948">https://news.ycombinator.com/item?id=48017948</a></p>
<p>Points: 214</p>
<p># Comments: 22</p>
]]></description><pubDate>Tue, 05 May 2026 04:09:17 +0000</pubDate><link>https://github.com/angelos-p/llm-from-scratch</link><dc:creator>kristianpaul</dc:creator><comments>https://news.ycombinator.com/item?id=48017948</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48017948</guid></item><item><title><![CDATA[Show HN: A tiny C program where an LLM rewires its DAG while running]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/kouhxp/liteflow">https://github.com/kouhxp/liteflow</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48017118">https://news.ycombinator.com/item?id=48017118</a></p>
<p>Points: 9</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 05 May 2026 01:43:05 +0000</pubDate><link>https://github.com/kouhxp/liteflow</link><dc:creator>mrkn1</dc:creator><comments>https://news.ycombinator.com/item?id=48017118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48017118</guid></item><item><title><![CDATA[Show HN: Smile-Serve – Inference Server for ML, ONNX, and LLM]]></title><description><![CDATA[
<p>SMILE Serve is a production-ready inference server built on <a href="https://quarkus.io/" rel="nofollow">Quarkus</a>
that brings together three complementary inference capabilities on the JVM:</p><pre><code>  - Classic ML: /api/v1/models for serialized SMILE models (.sml)
  - ONNX Runtime: /api/v1/onnx for any model in the ONNX open format (.onnx)
  - LLM Chat: /api/v1/chat for Llama 3 chat completions
</code></pre>
<p>A React-based web UI is bundled and served from the same process.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48015597">https://news.ycombinator.com/item?id=48015597</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 May 2026 22:02:55 +0000</pubDate><link>https://github.com/haifengl/smile/tree/master/serve</link><dc:creator>haifeng</dc:creator><comments>https://news.ycombinator.com/item?id=48015597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48015597</guid></item><item><title><![CDATA[LLM anomaly detectors are not a cause for concern despite Mythos]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.magonia.io/research/why-a-decade-of-writing-detection-logic-makes-the-mythos-exploit-numbers-less-scary/">https://www.magonia.io/research/why-a-decade-of-writing-detection-logic-makes-the-mythos-exploit-numbers-less-scary/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48013928">https://news.ycombinator.com/item?id=48013928</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 May 2026 19:43:33 +0000</pubDate><link>https://www.magonia.io/research/why-a-decade-of-writing-detection-logic-makes-the-mythos-exploit-numbers-less-scary/</link><dc:creator>badcryptobitch</dc:creator><comments>https://news.ycombinator.com/item?id=48013928</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48013928</guid></item><item><title><![CDATA[Exploring LLM biases to manipulate AI search overview]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2605.00012">https://arxiv.org/abs/2605.00012</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48012208">https://news.ycombinator.com/item?id=48012208</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 May 2026 17:49:05 +0000</pubDate><link>https://arxiv.org/abs/2605.00012</link><dc:creator>Brajeshwar</dc:creator><comments>https://news.ycombinator.com/item?id=48012208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48012208</guid></item><item><title><![CDATA[A thermodynamic trust layer cutting LLM hallucinations by 52%]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/Dan23RR/snc-core">https://github.com/Dan23RR/snc-core</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48012163">https://news.ycombinator.com/item?id=48012163</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 May 2026 17:46:40 +0000</pubDate><link>https://github.com/Dan23RR/snc-core</link><dc:creator>Dan23RR</dc:creator><comments>https://news.ycombinator.com/item?id=48012163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48012163</guid></item><item><title><![CDATA[LLM-first document AI is missing a 50-year-old CS technique]]></title><description><![CDATA[
<p>Article URL: <a href="https://bhavyagupta.dev/posts/llm-document-extractors-fixed-point">https://bhavyagupta.dev/posts/llm-document-extractors-fixed-point</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48010017">https://news.ycombinator.com/item?id=48010017</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 May 2026 15:34:23 +0000</pubDate><link>https://bhavyagupta.dev/posts/llm-document-extractors-fixed-point</link><dc:creator>bhavya2k03</dc:creator><comments>https://news.ycombinator.com/item?id=48010017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48010017</guid></item><item><title><![CDATA[Show HN: BibCrit – LLM analysis grounded in real manuscript corpus data]]></title><description><![CDATA[
<p>Article URL: <a href="https://bibcrit.app/">https://bibcrit.app/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48009524">https://news.ycombinator.com/item?id=48009524</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 04 May 2026 14:55:54 +0000</pubDate><link>https://bibcrit.app/</link><dc:creator>jossifresben</dc:creator><comments>https://news.ycombinator.com/item?id=48009524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48009524</guid></item><item><title><![CDATA[Show HN: The Cat Is Under Mayonnaise – Modifying LLM Behavior Without Retraining]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/andycufari/the-cat-is-under-mayonnaise-experiment">https://github.com/andycufari/the-cat-is-under-mayonnaise-experiment</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48008612">https://news.ycombinator.com/item?id=48008612</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 04 May 2026 13:38:14 +0000</pubDate><link>https://github.com/andycufari/the-cat-is-under-mayonnaise-experiment</link><dc:creator>andycufari</dc:creator><comments>https://news.ycombinator.com/item?id=48008612</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48008612</guid></item><item><title><![CDATA[Show HN: Aurra – Bi-temporal memory for AI agents (with LLM auto-supersede)]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.aurra.us/blog/level-2-auto-supersede-beta">https://www.aurra.us/blog/level-2-auto-supersede-beta</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48008086">https://news.ycombinator.com/item?id=48008086</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 May 2026 12:53:49 +0000</pubDate><link>https://www.aurra.us/blog/level-2-auto-supersede-beta</link><dc:creator>akshayt2012</dc:creator><comments>https://news.ycombinator.com/item?id=48008086</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48008086</guid></item><item><title><![CDATA[Fluctuating Accuracy in LLM Responses]]></title><description><![CDATA[
<p>Dear HN community, I’m brand new here and already feel right at home after just 5 minutes. I have a question for you about my theory:</p><p>I’m sure you’ve all experienced the wildly fluctuating quality of LLM responses. My theory: during peak times, the operators gradually reduce the depth of processing to take some of the load off the servers. I’ve noticed this a lot with Claude over the past few months.
What do you think?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48008015">https://news.ycombinator.com/item?id=48008015</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 May 2026 12:46:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=48008015</link><dc:creator>chris_explicare</dc:creator><comments>https://news.ycombinator.com/item?id=48008015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48008015</guid></item><item><title><![CDATA[Train an LLM from Scratch]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/raiyanyahya/how-to-train-your-gpt">https://github.com/raiyanyahya/how-to-train-your-gpt</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48007926">https://news.ycombinator.com/item?id=48007926</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 04 May 2026 12:36:45 +0000</pubDate><link>https://github.com/raiyanyahya/how-to-train-your-gpt</link><dc:creator>linhns</dc:creator><comments>https://news.ycombinator.com/item?id=48007926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48007926</guid></item><item><title><![CDATA[Eight LLM agents wrote 1.7M words; two refused, even when ordered]]></title><description><![CDATA[
<p>Article URL: <a href="https://zenodo.org/records/20020017">https://zenodo.org/records/20020017</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48005939">https://news.ycombinator.com/item?id=48005939</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 04 May 2026 08:06:08 +0000</pubDate><link>https://zenodo.org/records/20020017</link><dc:creator>norikaoda</dc:creator><comments>https://news.ycombinator.com/item?id=48005939</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48005939</guid></item><item><title><![CDATA[Show HN: My "home rig" for iterative attribute-weighted LLM benchmarking]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/yuvhaim-gif/LLM_InSight">https://github.com/yuvhaim-gif/LLM_InSight</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48005800">https://news.ycombinator.com/item?id=48005800</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 04 May 2026 07:44:21 +0000</pubDate><link>https://github.com/yuvhaim-gif/LLM_InSight</link><dc:creator>yuvalhaim</dc:creator><comments>https://news.ycombinator.com/item?id=48005800</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48005800</guid></item><item><title><![CDATA[The Engineering Constraints of Distributed LLM Inference over the Open Internet]]></title><description><![CDATA[
<p>Article URL: <a href="https://siliconandsoul.substack.com/p/the-engineering-constraints-of-distributed">https://siliconandsoul.substack.com/p/the-engineering-constraints-of-distributed</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48004553">https://news.ycombinator.com/item?id=48004553</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 May 2026 04:15:24 +0000</pubDate><link>https://siliconandsoul.substack.com/p/the-engineering-constraints-of-distributed</link><dc:creator>essenceX</dc:creator><comments>https://news.ycombinator.com/item?id=48004553</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48004553</guid></item><item><title><![CDATA[Know thyself: LLM schema for personal memory]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/parrik/know-thyself">https://github.com/parrik/know-thyself</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48003019">https://news.ycombinator.com/item?id=48003019</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 04 May 2026 00:09:33 +0000</pubDate><link>https://github.com/parrik/know-thyself</link><dc:creator>parrik</dc:creator><comments>https://news.ycombinator.com/item?id=48003019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48003019</guid></item><item><title><![CDATA[Duralang – decorator makes every LangChain LLM/tool/MCP call a Temporal Activity]]></title><description><![CDATA[
<p>Article URL: <a href="https://temporal.io/code-exchange/duralang-durable-stochastic-ai-agents-with-one-decorator">https://temporal.io/code-exchange/duralang-durable-stochastic-ai-agents-with-one-decorator</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48001123">https://news.ycombinator.com/item?id=48001123</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 03 May 2026 20:30:41 +0000</pubDate><link>https://temporal.io/code-exchange/duralang-durable-stochastic-ai-agents-with-one-decorator</link><dc:creator>deepanshsaxena</dc:creator><comments>https://news.ycombinator.com/item?id=48001123</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48001123</guid></item><item><title><![CDATA[Public Runtime for Convera for LLMs]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/cjparadise79/CONVERA-PUBLIC">https://github.com/cjparadise79/CONVERA-PUBLIC</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47999666">https://news.ycombinator.com/item?id=47999666</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 03 May 2026 18:06:25 +0000</pubDate><link>https://github.com/cjparadise79/CONVERA-PUBLIC</link><dc:creator>cjparadise</dc:creator><comments>https://news.ycombinator.com/item?id=47999666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47999666</guid></item><item><title><![CDATA[Show HN: MegaLLM – Universal LLM client for any OpenAI-compatible API]]></title><description><![CDATA[
<p>Article URL: <a href="https://megallm.netlify.app/">https://megallm.netlify.app/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47999114">https://news.ycombinator.com/item?id=47999114</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 03 May 2026 17:15:46 +0000</pubDate><link>https://megallm.netlify.app/</link><dc:creator>heliskyr2</dc:creator><comments>https://news.ycombinator.com/item?id=47999114</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47999114</guid></item><item><title><![CDATA[Show HN: Llmconfig – configfile and CLI for local LLM]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/kiliczsh/llmconfig">https://github.com/kiliczsh/llmconfig</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47997944">https://news.ycombinator.com/item?id=47997944</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 03 May 2026 15:31:44 +0000</pubDate><link>https://github.com/kiliczsh/llmconfig</link><dc:creator>kilic</dc:creator><comments>https://news.ycombinator.com/item?id=47997944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47997944</guid></item></channel></rss>