<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: nikkindev</title><link>https://news.ycombinator.com/user?id=nikkindev</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 06:08:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=nikkindev" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by nikkindev in "Using uv as your shebang line"]]></title><description><![CDATA[
<p>Trey Hunner’s article “Lazy self-installing Python scripts with uv” [1] has more details and examples.<p>[1] <a href="https://treyhunner.com/2024/12/lazy-self-installing-python-scripts-with-uv/" rel="nofollow">https://treyhunner.com/2024/12/lazy-self-installing-python-s...</a></p>
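A minimal sketch of the pattern the article describes, assuming uv is installed and on PATH. The `# /// script` block is PEP 723 inline metadata that uv reads to build a cached environment before running the script; dependencies are left empty here so the sketch also runs under a plain Python interpreter:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# dependencies = []
# ///
# When executed directly, uv reads the inline metadata above,
# creates (and caches) an environment with the listed dependencies,
# and runs the script with a matching interpreter. A real script
# would list packages such as "requests" in the dependencies array.
import platform

message = f"running under Python {platform.python_version()}"
print(message)
```

Make the file executable (`chmod +x`) and it becomes a self-installing script: the first run resolves and caches the environment, later runs start near-instantly.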
]]></description><pubDate>Tue, 28 Jan 2025 18:45:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=42856213</link><dc:creator>nikkindev</dc:creator><comments>https://news.ycombinator.com/item?id=42856213</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42856213</guid></item><item><title><![CDATA[New comment by nikkindev in "Entropy of a Large Language Model output"]]></title><description><![CDATA[
<p>Author here: Wholeheartedly agree with your comment on hallucination. I initially set out to answer the question “Will entropy help identify hallucination?” and soon realised that it doesn’t, for the reasons you mention. So I pivoted to writing about the entropy measure itself, which is why the post starts with hallucination and then quickly veers away from it. I’ll be more careful in future posts and conversations. Thanks!</p>
]]></description><pubDate>Tue, 14 Jan 2025 07:54:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=42694826</link><dc:creator>nikkindev</dc:creator><comments>https://news.ycombinator.com/item?id=42694826</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42694826</guid></item><item><title><![CDATA[New comment by nikkindev in "Entropy of a Large Language Model output"]]></title><description><![CDATA[
<p>Author here: Thanks for the explanation. Intuitively it makes sense that anything done during post-training (RLHF in our case) to make the model adhere to a certain set of characteristics would bring the entropy down.<p>It is indeed alarming that future 'base' models would start with sharper, lower-entropy logits as the de facto baseline. I personally believe that once this enshittification is widely recognised (it may already be happening, just not recognised as such), more "original" training data will become more important. And the cycle repeats! Or I wonder whether there is a better post-training method that would still preserve the "creativity"?<p>Thanks for the RLHF explanation in terms of BPE. The concept is definitely easier to grasp this way!</p>
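The entropy drop being discussed is easy to see numerically. A small sketch with hypothetical next-token distributions (the vocabulary and probabilities are made up for illustration): a broad distribution carries maximum entropy, while a post-training-style sharpened one carries far less:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token distributions over a 4-token vocabulary.
base_model = [0.25, 0.25, 0.25, 0.25]   # broad: every token equally likely
rlhf_model = [0.90, 0.05, 0.03, 0.02]   # sharpened, RLHF-style

print(entropy(base_model))  # 2.0 bits, the maximum for 4 outcomes
print(entropy(rlhf_model))  # well under 1 bit
```

If sharpened outputs feed back into training data, the "base" distribution the next generation starts from is already closer to the second line.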
]]></description><pubDate>Mon, 13 Jan 2025 18:52:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=42687072</link><dc:creator>nikkindev</dc:creator><comments>https://news.ycombinator.com/item?id=42687072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42687072</guid></item><item><title><![CDATA[New comment by nikkindev in "Entropy of a Large Language Model output"]]></title><description><![CDATA[
<p>Author here: Really interesting work. Updated original post to include link to the paper. Thanks!</p>
]]></description><pubDate>Mon, 13 Jan 2025 18:21:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=42686654</link><dc:creator>nikkindev</dc:creator><comments>https://news.ycombinator.com/item?id=42686654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42686654</guid></item><item><title><![CDATA[New comment by nikkindev in "Entropy of a Large Language Model output"]]></title><description><![CDATA[
<p>Author here: Yes, you are right. I meant to paint the picture that the next token doesn't appear magically; it is sampled from a probability distribution. The notion of determinism could have been explained differently. Thanks for pointing it out!</p>
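The sampling step can be sketched in a few lines. This is a toy illustration, not any particular model's decoding loop; the vocabulary and logits are invented, and a temperature parameter controls how sharp the distribution is before sampling:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw logits to a probability distribution;
    # subtracting the max keeps exp() numerically stable.
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits over a tiny 4-token vocabulary.
vocab = ["the", "a", "cat", "dog"]
logits = [2.0, 1.0, 0.5, 0.1]

probs = softmax(logits)
# The next token is drawn at random according to probs,
# so repeated runs can produce different tokens.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)
```

Greedy decoding (always taking `max(probs)`) is the deterministic special case; sampling with temperature &gt; 0 is what makes the output stochastic.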
]]></description><pubDate>Mon, 13 Jan 2025 18:08:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=42686430</link><dc:creator>nikkindev</dc:creator><comments>https://news.ycombinator.com/item?id=42686430</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42686430</guid></item><item><title><![CDATA[New comment by nikkindev in "Show HN: I built Wuf, mobile notifications for all your needs"]]></title><description><![CDATA[
<p>Not for self-hosting. Self-hosting [1] is still free since the code is open source.<p>[1] <a href="https://ntfy.sh/install/" rel="nofollow noreferrer">https://ntfy.sh/install/</a></p>
]]></description><pubDate>Tue, 12 Sep 2023 13:43:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=37481132</link><dc:creator>nikkindev</dc:creator><comments>https://news.ycombinator.com/item?id=37481132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37481132</guid></item></channel></rss>