<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jappleseed987</title><link>https://news.ycombinator.com/user?id=jappleseed987</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 10:54:31 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jappleseed987" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jappleseed987 in "Show HN: GEKO (up to 80% compute savings on LLM fine-tuning)"]]></title><description><![CDATA[
<p>This is really impressive work for a 17-year-old! The Mountain Curriculum approach sounds clever: dynamically adjusting the curriculum based on model confidence is exactly the kind of smart optimization the LLM space needs.<p>One thing you might want to consider as you build out the UI is observability into your actual cost savings across different scenarios. Teams I've worked with on LLM optimization often struggle to quantify their improvements across providers or to track cost trends over time.<p>Have you thought about how you'll measure and display the real-world cost impact of your optimizations? It could be powerful for users to see not just compute-reduction percentages but actual dollar savings and trends.<p>Speaking of cost observability: I recently came across zenllm.io, which is doing interesting work in this space, focused on tracking LLM costs across providers. It might be worth checking out for inspiration on which metrics and visualizations help users optimize their LLM spend.<p>Keep up the great work; this kind of innovation is exactly what the community needs!</p>
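<p>For what it's worth, here's a rough sketch of what that dollar-savings computation could look like. Everything in it is hypothetical: the price table, model names, and request fields are placeholders, not any provider's real pricing or GEKO's actual data.</p><pre><code># Hypothetical sketch: turn logged token usage into dollar savings.
# PRICING is an assumed per-1M-token price table with made-up numbers.

PRICING = {
    "provider-a/model-x": {"input": 3.00, "output": 15.00},
    "provider-b/model-y": {"input": 0.50, "output": 1.50},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under the assumed price table."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def savings(baseline: list[dict], optimized: list[dict]) -> float:
    """Total dollars saved: baseline spend minus optimized spend."""
    def total(reqs: list[dict]) -> float:
        return sum(
            request_cost(r["model"], r["input_tokens"], r["output_tokens"])
            for r in reqs
        )
    return total(baseline) - total(optimized)
</code></pre><p>Even something this simple, fed from your training logs, would let the UI show "you saved $X this run" next to the compute-reduction percentage.</p>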
]]></description><pubDate>Sat, 28 Feb 2026 19:06:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47198995</link><dc:creator>jappleseed987</dc:creator><comments>https://news.ycombinator.com/item?id=47198995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47198995</guid></item><item><title><![CDATA[Show HN: Read-only LLM cost observability]]></title><description><![CDATA[
<p>I’m launching zenllm.io, a read-only layer that correlates LLM requests → model → tokens → $ → service/team so you can quickly answer “why did our bill spike?”. It detects patterns such as prompts growing longer over time, retries, and agent/tool loops, then suggests cost/quality tradeoffs (e.g., switching models or trimming context).
No proxy/gateway required. Link: www.zenllm.io</p>
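<p>To make the request → model → tokens → $ → service/team correlation concrete, here is a minimal sketch of the idea. The log schema, field names, and prices are illustrative assumptions, not zenllm.io's actual data model:</p><pre><code># Illustrative sketch: correlate raw request logs into per-team dollar spend
# and flag the "long prompts growing over time" pattern. The schema and the
# price table are assumptions, not zenllm.io's real format.

from collections import defaultdict
from statistics import mean

PRICE_PER_1M = {"model-x": 3.00, "model-y": 0.50}  # USD per 1M tokens, made up

def spend_by_team(requests: list[dict]) -> dict[str, float]:
    """Aggregate dollar spend per team from raw request logs."""
    totals: dict[str, float] = defaultdict(float)
    for r in requests:
        totals[r["team"]] += r["tokens"] * PRICE_PER_1M[r["model"]] / 1_000_000
    return dict(totals)

def prompts_growing(requests: list[dict], window: int = 50, ratio: float = 1.5) -> bool:
    """Flag prompt-length growth: mean prompt tokens in the most recent
    window exceeds `ratio` times the mean of the earliest window.
    Assumes `requests` is in chronological order."""
    if len(requests) < 2 * window:
        return False
    lengths = [r["prompt_tokens"] for r in requests]
    return mean(lengths[-window:]) > ratio * mean(lengths[:window])
</code></pre><p>Grouping spend this way is what lets a "why did our bill spike?" question resolve to a specific team, model, or usage pattern; the growth check above mirrors the long-prompt detection mentioned in the post.</p>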
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47051832">https://news.ycombinator.com/item?id=47051832</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 17 Feb 2026 19:23:34 +0000</pubDate><link>https://www.zenllm.io/</link><dc:creator>jappleseed987</dc:creator><comments>https://news.ycombinator.com/item?id=47051832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47051832</guid></item></channel></rss>