<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: msvana</title><link>https://news.ycombinator.com/user?id=msvana</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 18:25:11 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=msvana" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by msvana in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>I think there is one problem with defining acceptance criteria first: sometimes you don't know ahead of time what those criteria are. You need to poke around first to figure out what's possible and what matters. And sometimes the criteria are subjective, abstract, and cannot be formally specified.<p>Of course, this problem is more general than just improving the output of LLM coding tools.</p>
]]></description><pubDate>Sat, 07 Mar 2026 18:29:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47290171</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=47290171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47290171</guid></item><item><title><![CDATA[New comment by msvana in "Labor market impacts of AI: A new measure and early evidence"]]></title><description><![CDATA[
<p>I work as an ML engineer/researcher. When I implement a change to an experiment, it usually takes at least an hour to get the results. I can use that time to implement a different experiment. It doesn't matter whether I do it by hand or let an agent do it for me; I have enough time. Code isn't the bottleneck.<p>I've also heard the opinion that, since writing code is now cheap, people implement things that have no economic value without really thinking them through.</p>
]]></description><pubDate>Fri, 06 Mar 2026 07:13:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47271954</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=47271954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47271954</guid></item><item><title><![CDATA[I built an RGB controller with Arduino]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2026/02/i-built-an-rgb-controller-with-arduino/">https://svana.name/2026/02/i-built-an-rgb-controller-with-arduino/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47195936">https://news.ycombinator.com/item?id=47195936</a></p>
<p>Points: 9</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 28 Feb 2026 14:44:23 +0000</pubDate><link>https://svana.name/2026/02/i-built-an-rgb-controller-with-arduino/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=47195936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47195936</guid></item><item><title><![CDATA[Do LLMs hallucinate more in Czech than in English?]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2026/02/do-llms-hallucinate-more-in-czech-than-in-english/">https://svana.name/2026/02/do-llms-hallucinate-more-in-czech-than-in-english/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47032358">https://news.ycombinator.com/item?id=47032358</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 16 Feb 2026 08:22:53 +0000</pubDate><link>https://svana.name/2026/02/do-llms-hallucinate-more-in-czech-than-in-english/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=47032358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47032358</guid></item><item><title><![CDATA[Hallucinations in LLMs: What are they and what causes them]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2026/01/hallucinations-in-llms-what-are-they-and-what-causes-them/">https://svana.name/2026/01/hallucinations-in-llms-what-are-they-and-what-causes-them/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46539050">https://news.ycombinator.com/item?id=46539050</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 08 Jan 2026 09:26:45 +0000</pubDate><link>https://svana.name/2026/01/hallucinations-in-llms-what-are-they-and-what-causes-them/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=46539050</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46539050</guid></item><item><title><![CDATA[Managing GPU Rentals with Rsync: Workflow for Volatile Cloud Resources]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2025/10/managing-gpu-rentals-with-rsync-workflow-for-volatile-cloud-resources/">https://svana.name/2025/10/managing-gpu-rentals-with-rsync-workflow-for-volatile-cloud-resources/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45758372">https://news.ycombinator.com/item?id=45758372</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 30 Oct 2025 10:32:01 +0000</pubDate><link>https://svana.name/2025/10/managing-gpu-rentals-with-rsync-workflow-for-volatile-cloud-resources/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=45758372</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45758372</guid></item><item><title><![CDATA[Implementing a local AI coding agent is hard]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2025/09/implementing-a-fully-local-ai-coding-agent-is-hard/">https://svana.name/2025/09/implementing-a-fully-local-ai-coding-agent-is-hard/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45404023">https://news.ycombinator.com/item?id=45404023</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 28 Sep 2025 13:00:23 +0000</pubDate><link>https://svana.name/2025/09/implementing-a-fully-local-ai-coding-agent-is-hard/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=45404023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45404023</guid></item><item><title><![CDATA[Show HN: Read-only AI coding assistant]]></title><description><![CDATA[
<p>I am a bit conservative when it comes to letting AI write code for me. However, I do want to use AI to chat about my code: something like ChatGPT that can see my local files. So I started working on FileChat.<p>FileChat is still quite new, and any feedback is appreciated. Please file an issue if you find a bug or have a feature suggestion.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45357827">https://news.ycombinator.com/item?id=45357827</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 24 Sep 2025 08:47:56 +0000</pubDate><link>https://github.com/msvana/filechat</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=45357827</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45357827</guid></item><item><title><![CDATA[How I solved PyTorch's cross-platform nightmare]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2025/09/how-i-solved-pytorchs-cross-platform-nightmare/">https://svana.name/2025/09/how-i-solved-pytorchs-cross-platform-nightmare/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45164762">https://news.ycombinator.com/item?id=45164762</a></p>
<p>Points: 73</p>
<p># Comments: 29</p>
]]></description><pubDate>Mon, 08 Sep 2025 04:59:35 +0000</pubDate><link>https://svana.name/2025/09/how-i-solved-pytorchs-cross-platform-nightmare/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=45164762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45164762</guid></item><item><title><![CDATA[Lessons from AI Safety for Businesses]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2025/08/reading-club-lessons-from-ai-safety-for-businesses/">https://svana.name/2025/08/reading-club-lessons-from-ai-safety-for-businesses/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44982531">https://news.ycombinator.com/item?id=44982531</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 22 Aug 2025 09:31:46 +0000</pubDate><link>https://svana.name/2025/08/reading-club-lessons-from-ai-safety-for-businesses/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44982531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44982531</guid></item><item><title><![CDATA[Ask HN: How do you imagine an AI-driven utopia?]]></title><description><![CDATA[
<p>Imagine that the development of artificial superintelligence goes extremely well and we overcome every possible obstacle. How do you imagine the ideal future, say in 50 to 100 years?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44708709">https://news.ycombinator.com/item?id=44708709</a></p>
<p>Points: 1</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 28 Jul 2025 08:45:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44708709</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44708709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44708709</guid></item><item><title><![CDATA[New comment by msvana in "A non-anthropomorphized view of LLMs"]]></title><description><![CDATA[
<p>This reminds me of the idea that LLMs are simulators. Given the current state (the prompt plus the previously generated text), they generate the next state (the next token) using rules derived from the training data.<p>As simulators, LLMs can simulate many things, including agents that exhibit human-like properties. But LLMs themselves are not agents.<p>More on this idea here:
<a href="https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/agi-safety-from-first-principles" rel="nofollow">https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/agi-s...</a><p>This perspective makes a lot of sense to me. Still, I wouldn't avoid anthropomorphization altogether. First, in some cases it can be a useful mental tool for understanding certain aspects of LLMs. Second, there is a lot of uncertainty about how LLMs work, so I would stay epistemically humble. That argument cuts both ways: for example, it's equally bad to claim that LLMs are definitely conscious.<p>On the other hand, if someone argues against anthropomorphizing LLMs, I would avoid phrasing it as "it's just matrix multiplication." The article demonstrates pretty well why that is a bad idea.</p>
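<p>The simulator view above can be sketched as a toy state-transition loop. This is only an illustration of the idea, not a real language model: the hypothetical bigram table <code>rules</code> stands in for the transition rules a trained model would have learned from data.</p>

```python
import random

def next_state(state, rules):
    """Given the current state (the token sequence so far),
    append a next token sampled from the transition rules."""
    candidates = rules.get(state[-1], ["<eos>"])
    return state + [random.choice(candidates)]

# Toy transition rules standing in for a trained model.
rules = {"the": ["cat", "dog"], "cat": ["sat"], "dog": ["ran"]}

# Autoregressive generation: repeatedly map state -> next state.
state = ["the"]
while state[-1] not in ("sat", "ran", "<eos>"):
    state = next_state(state, rules)
```

<p>The point of the sketch is that the "model" never acts; it only defines how one state follows another. Anything agent-like lives in the trajectory of states, not in the transition function itself.</p>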
]]></description><pubDate>Tue, 08 Jul 2025 04:40:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=44497129</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44497129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44497129</guid></item><item><title><![CDATA[LLMs Are Simulators]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators">https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44373676">https://news.ycombinator.com/item?id=44373676</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 25 Jun 2025 04:43:26 +0000</pubDate><link>https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44373676</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44373676</guid></item><item><title><![CDATA[New comment by msvana in "Ask HN: Why study anything if AGI is (supposedly) coming?"]]></title><description><![CDATA[
<p>I had this exact analogy in mind when writing point no. 3.</p>
]]></description><pubDate>Wed, 18 Jun 2025 15:15:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44310634</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44310634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44310634</guid></item><item><title><![CDATA[New comment by msvana in "Ask HN: Why study anything if AGI is (supposedly) coming?"]]></title><description><![CDATA[
<p>Thanks, sounds kinda similar to my first point.</p>
]]></description><pubDate>Wed, 18 Jun 2025 15:08:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44310572</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44310572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44310572</guid></item><item><title><![CDATA[Ask HN: Why study anything if AGI is (supposedly) coming?]]></title><description><![CDATA[
<p>I've had this question in my head for a few days. So far, I've come up with four reasons:<p>1. It’s about optimizing for the worst case. Maybe AGI isn’t coming that soon; we don’t know. So it might be worth being prepared for multiple scenarios.<p>2. AGI might replace average humans, but we might still need the top performers to be human. If that's the case, things still need to change: it might make less sense for most people to pursue careers in, say, software development unless they are exceptionally talented, motivated, or passionate.<p>3. We can learn new things just because they are fun. Most people who learn to play an instrument will never make money from it. In the future, programmers and other mostly white-collar workers might end up in a similar situation: it will be a hobby, not a job.<p>4. Related to the previous point, learning is also a social activity. It’s about doing something together, meeting new people with similar interests, making new friends, and possibly even finding a spouse.<p>Would you add anything else to this list? Or do you think studying anything is pointless?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44310369">https://news.ycombinator.com/item?id=44310369</a></p>
<p>Points: 5</p>
<p># Comments: 10</p>
]]></description><pubDate>Wed, 18 Jun 2025 14:46:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44310369</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44310369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44310369</guid></item><item><title><![CDATA[The value of commercial coding courses]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2025/05/on-the-value-of-commercial-coding-courses/">https://svana.name/2025/05/on-the-value-of-commercial-coding-courses/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44262167">https://news.ycombinator.com/item?id=44262167</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 12 Jun 2025 19:31:38 +0000</pubDate><link>https://svana.name/2025/05/on-the-value-of-commercial-coding-courses/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44262167</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44262167</guid></item><item><title><![CDATA[New comment by msvana in "Getting AI to write good SQL"]]></title><description><![CDATA[
<p>Problem no. 2 (understanding user intent) is relevant not only to writing SQL but to software development in general. Follow-up questions are something I've had in mind for a long time; I wonder why they aren't the default for LLMs.</p>
]]></description><pubDate>Sat, 17 May 2025 06:44:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44012476</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44012476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44012476</guid></item><item><title><![CDATA[Would I do a PhD again?]]></title><description><![CDATA[
<p>Article URL: <a href="https://svana.name/2025/01/would-i-do-a-phd-again/">https://svana.name/2025/01/would-i-do-a-phd-again/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44002356">https://news.ycombinator.com/item?id=44002356</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 16 May 2025 06:29:15 +0000</pubDate><link>https://svana.name/2025/01/would-i-do-a-phd-again/</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=44002356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44002356</guid></item><item><title><![CDATA[AI-LieDar: Examine the Trade-Off Between Utility and Truthfulness in LLM Agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2409.09013">https://arxiv.org/abs/2409.09013</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43873925">https://news.ycombinator.com/item?id=43873925</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 02 May 2025 19:45:09 +0000</pubDate><link>https://arxiv.org/abs/2409.09013</link><dc:creator>msvana</dc:creator><comments>https://news.ycombinator.com/item?id=43873925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43873925</guid></item></channel></rss>