<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sminchev</title><link>https://news.ycombinator.com/user?id=sminchev</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 05 Apr 2026 12:57:29 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sminchev" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sminchev in "Fear of Missing Code"]]></title><description><![CDATA[
<p>I usually try to minimize the risk as much as possible, let it run unsupervised, and always, always validate the result the next day. Just as we can't blindly trust fellow developers, and we need code review sessions, QA sessions, and automated testing sessions, the same applies here. Don't fully trust the AI. Nothing is perfect.<p>We had the ability to write code while eating dinner before. Now it just takes less effort. Thinking up a good prompt is easier than thinking through a good nested loop. And the whole vibe and enthusiasm are bigger, pushing people to try something new.</p>
]]></description><pubDate>Sun, 29 Mar 2026 20:42:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47567094</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47567094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47567094</guid></item><item><title><![CDATA[New comment by sminchev in "Ask HN: What do you use for normative specs to drive AI agents?"]]></title><description><![CDATA[
<p>I use BMAD, and I don't care how the markdown files look. I don't read them. I ask agents to create and read the files. One agent creates the technical specification as a markdown file, and a dev agent reads and processes it. ;)<p>If I need something to distribute or read myself, I ask for a PDF copy :)</p>
]]></description><pubDate>Sun, 29 Mar 2026 19:26:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47566325</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47566325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47566325</guid></item><item><title><![CDATA[New comment by sminchev in "Ask HN: How are you keeping AI coding agents from burning money?"]]></title><description><![CDATA[
<p>Some things that I know how to do, I just run myself. If starting the tests is a bash command, I ask the AI to create a bash script that does this, and then I run it myself. Same with the build, deploy, and other similar tasks.
For some not-so-important tasks, I use a different model, like GLM, which is cheaper. Then I save the result of the, let's say, bug analysis or code review, and ask my main model (Opus) to read the document and execute the task. This way I use my expensive model to execute the tasks, but the cheaper one to do the analysis.</p>
]]></description><pubDate>Sun, 29 Mar 2026 19:22:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47566274</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47566274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47566274</guid></item><item><title><![CDATA[New comment by sminchev in "Ask HN: Best stack for building a tiny game with an 11-year-old?"]]></title><description><![CDATA[
<p>You have Claude Code. Use it ;) It will be a really good exercise for her to train how she expresses herself: to cleanly and correctly explain what she wants. And you can guide her, give her some tips and tricks. Once you have something really simple, you can try to show her some pieces of the code, just to show her how things look, without going into the details of how it works and what it does. If at some point she gets curious, she might come and ask on her own, and this is where the magic and love of code will happen ;)<p>I would propose a simple program with a few steps; it can be a console one: draw a circle, draw a rectangle, draw a pink unicorn ;)<p>Happily for me, I don't have such problems. My girl wants to be a football player.</p>
]]></description><pubDate>Sun, 29 Mar 2026 19:18:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47566238</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47566238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47566238</guid></item><item><title><![CDATA[New comment by sminchev in "Ask HN: Is it just me?"]]></title><description><![CDATA[
<p>True, true, true. I don't argue with that. But we can make a good comparison and analogy to explain the behavior easily, with fewer technical terms.<p>AI can start hallucinating if it deals with a lot of data, and/or complex data ;) If I dealt with that much, I would start hallucinating myself :D<p>That was the point :)</p>
]]></description><pubDate>Sun, 29 Mar 2026 19:02:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47566076</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47566076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47566076</guid></item><item><title><![CDATA[New comment by sminchev in "Ask HN: Is it just me?"]]></title><description><![CDATA[
<p>Why do they hallucinate? :)</p>
]]></description><pubDate>Sun, 29 Mar 2026 06:29:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47560849</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47560849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47560849</guid></item><item><title><![CDATA[New comment by sminchev in "Ask HN: How do you deal with obvious AI assistant usage in interviews"]]></title><description><![CDATA[
<p>When I do interviews, I don't ask many technical questions. They can all read. If they don't know it now, they can read it later: from a book, from a Google search result, or from AI. What I usually look for is a good basis and the proper mindset: analytical thinking, curiosity, good communication skills. People who like challenges.
I read it once in an article: AI amplifies. It amplifies the success of good teams/people/companies. It amplifies the failure of bad teams/people/companies.<p>If you can detect a good basis, their skill in knowing how to use AI will help them be more productive and deliver better quality.</p>
]]></description><pubDate>Sat, 28 Mar 2026 23:20:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47558964</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47558964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47558964</guid></item><item><title><![CDATA[New comment by sminchev in "Ask HN: Is it just me?"]]></title><description><![CDATA[
<p>Well, this is how it is with real humans as well. The moment a human gets tired, or the information they need to process is too much, they produce errors.<p>It is the same here: the moment things get to be too much, it starts hallucinating and missing important things. It also depends on which model you are using. I read that Gemini 3 Pro, which has a limit of 1 million tokens, can see its productivity decrease to 25% as it gets close to its limit. Not BY 25%, but TO 25%. It becomes extremely dumb.<p>Other models just ask too many questions...<p>There are some tips and tricks you can follow, and they are similar to how people work: keep the tasks small, save what the model learned during the session somewhere, and re-use that knowledge in the next session by explicitly telling it to read that information before it starts.</p>
]]></description><pubDate>Sat, 28 Mar 2026 20:33:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47557929</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47557929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47557929</guid></item><item><title><![CDATA[New comment by sminchev in "PolicyGen – free privacy policy generator, no account needed"]]></title><description><![CDATA[
<p>Looks good. A lot of solo developers and startups need something like this. Is it validated in any way that it is correct and will really protect you? Would it hold up in court?</p>
]]></description><pubDate>Thu, 26 Mar 2026 13:12:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47530036</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47530036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47530036</guid></item><item><title><![CDATA[New comment by sminchev in "AI Isn't Bad. Your Prompts Are"]]></title><description><![CDATA[
<p>When working with Claude Code, I usually save my prompts as slash commands. Instead of writing the prompts, I call the slash command. How does your extension fit into tools like Cursor and Antigravity? Can I just install it and use it in any of those VSCode-based AI assistants?</p>
]]></description><pubDate>Tue, 24 Mar 2026 20:18:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47508529</link><dc:creator>sminchev</dc:creator><comments>https://news.ycombinator.com/item?id=47508529</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47508529</guid></item></channel></rss>