<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: usernametaken29</title><link>https://news.ycombinator.com/user?id=usernametaken29</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 09:29:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=usernametaken29" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by usernametaken29 in "I run multiple $10K MRR companies on a $20/month tech stack"]]></title><description><![CDATA[
<p>I have used SQLite with extensions in extreme-throughput scenarios. We’re talking pushing millions of documents per second through it to do disambiguation.
I won’t say this wouldn’t have been possible with a remote server, but it would have been a significant technical challenge.
Instead we packed the database up on S3, and each instance got a fresh copy and hammered away at the task. SQLite is the time-tested alternative for when you need performance, not features.</p>
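<p>Roughly that pattern, as a minimal sketch (the bucket, key, and query here are made up for illustration; assumes boto3 and the standard sqlite3 module):</p>
<pre><code># Pull a read-only SQLite snapshot from S3, then query it locally.
import sqlite3
import boto3  # assumes AWS credentials are already configured

s3 = boto3.client("s3")
# Hypothetical bucket and key.
s3.download_file("my-bucket", "snapshots/docs.db", "/tmp/docs.db")

# Open read-only: every worker owns a private copy, so there is no
# write contention and no network round-trip per query.
conn = sqlite3.connect("file:/tmp/docs.db?mode=ro", uri=True)
rows = conn.execute(
    "SELECT id, canonical_name FROM entities WHERE surface_form = ?",
    ("Springfield",),  # hypothetical disambiguation lookup
).fetchall()
</code></pre>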
]]></description><pubDate>Sun, 12 Apr 2026 09:21:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47737594</link><dc:creator>usernametaken29</dc:creator><comments>https://news.ycombinator.com/item?id=47737594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47737594</guid></item><item><title><![CDATA[New comment by usernametaken29 in "Six (and a half) intuitions for KL divergence"]]></title><description><![CDATA[
<p>Personally, I think the unit you measure divergence in just doesn’t matter. Yes, nats are technically superior, but as long as you’re consistent, all you really want is a measure of how similar A is to B.
In that sense, many explanations of KL divergence are needlessly convoluted.</p>
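<p>For what it’s worth, the “unit” is just the base of the logarithm; a quick Python sketch showing that KL in bits is KL in nats divided by ln 2:</p>
<pre><code>import math

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

kl_nats = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
kl_bits = sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))

# Same quantity, different unit: the two differ by a constant factor.
assert abs(kl_bits - kl_nats / math.log(2)) < 1e-12
</code></pre>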
]]></description><pubDate>Thu, 09 Apr 2026 21:23:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47710354</link><dc:creator>usernametaken29</dc:creator><comments>https://news.ycombinator.com/item?id=47710354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47710354</guid></item><item><title><![CDATA[New comment by usernametaken29 in "Netflix Prices Went Up Again – I Bought a DVD Player Instead"]]></title><description><![CDATA[
<p>Just say you’re training a multimodal language model, and in this weird parallel universe we live in, suddenly you’re not breaking copyright.
Bonus points if your model reproduces the original 1:1.
Definitely not a copy, though.</p>
]]></description><pubDate>Thu, 09 Apr 2026 21:17:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47710270</link><dc:creator>usernametaken29</dc:creator><comments>https://news.ycombinator.com/item?id=47710270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47710270</guid></item><item><title><![CDATA[New comment by usernametaken29 in "Claude mixes up who said what and that's not OK"]]></title><description><![CDATA[
<p>Only that one is built to be deterministic and the other is built to be probabilistic. Sure, you can technically force determinism, but it is going to be very hard; even verifying that your GPU is actually doing what it should is hard. It’s much like debugging a CPU, except that, again, one is built for determinism and the other for concurrency.</p>
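<p>To give a sense of how much you have to pin down, here is a rough sketch of the knobs in PyTorch alone (assuming PyTorch on CUDA; even then, reproducibility only holds on the same hardware and software stack):</p>
<pre><code>import os
import torch

# cuBLAS needs this set for deterministic matrix multiplies.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

torch.manual_seed(0)                      # pin the RNG state
torch.use_deterministic_algorithms(True)  # raise on nondeterministic kernels
torch.backends.cudnn.benchmark = False    # stop cuDNN from autotuning kernels
</code></pre>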
]]></description><pubDate>Thu, 09 Apr 2026 12:36:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702905</link><dc:creator>usernametaken29</dc:creator><comments>https://news.ycombinator.com/item?id=47702905</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702905</guid></item><item><title><![CDATA[New comment by usernametaken29 in "Six (and a half) intuitions for KL divergence"]]></title><description><![CDATA[
<p>If you ask me, the quickest way to explain KL divergence is this:
If two distributions are identical, the KL divergence is 0.
Otherwise, it quantifies how many extra nats it costs, on average, to encode samples from the target using a code built for the source.
It’s always good to read through the original information-theoretic work. Most of AI is copycats with more compute anyway.</p>
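<p>In symbols (the standard discrete definition; natural log gives nats, log base 2 gives bits):</p>
<pre><code>D_{\mathrm{KL}}(P \,\|\, Q) = \sum_x P(x) \ln \frac{P(x)}{Q(x)}
                            = \mathbb{E}_{x \sim P}\!\left[ \ln P(x) - \ln Q(x) \right]
</code></pre>
<p>It is zero exactly when P = Q, and it is asymmetric: D_KL(P||Q) and D_KL(Q||P) generally differ.</p>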
]]></description><pubDate>Thu, 09 Apr 2026 12:21:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702747</link><dc:creator>usernametaken29</dc:creator><comments>https://news.ycombinator.com/item?id=47702747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702747</guid></item><item><title><![CDATA[New comment by usernametaken29 in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>Actually, at the hardware level, floating-point operations are not associative, so even with a temperature of 0 you’re not mathematically guaranteed the same response. Hence: not deterministic.</p>
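<p>You can see the non-associativity in two lines of Python, no GPU required:</p>
<pre><code>a, b, c = 0.1, 1e20, -1e20
print((a + b) + c)  # 0.0 -- the 0.1 is absorbed by 1e20's rounding
print(a + (b + c))  # 0.1
# Summation order changes the result, so parallel reductions,
# whose order can vary run to run on a GPU, need not match bit-for-bit.
</code></pre>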
]]></description><pubDate>Thu, 09 Apr 2026 12:15:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702685</link><dc:creator>usernametaken29</dc:creator><comments>https://news.ycombinator.com/item?id=47702685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702685</guid></item></channel></rss>