<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hamasho</title><link>https://news.ycombinator.com/user?id=hamasho</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 12:59:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hamasho" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hamasho in "In Japan, the robot isn't coming for your job; it's filling the one nobody wants"]]></title><description><![CDATA[
<p>I think that nihilistic sentiment arises only when you are materially satisfied, as people maybe were in the 90s and 00s (like the office workers in Fight Club or Office Space). Many of us are in survival mode now. We just need money to keep up with inflation. We don't have time to think about the deep meaning of life.</p>
]]></description><pubDate>Mon, 06 Apr 2026 01:06:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47655711</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47655711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47655711</guid></item><item><title><![CDATA[New comment by hamasho in "Musician says AI company is cloning her music, filing claims against her"]]></title><description><![CDATA[
<p>AI-generated content should get the same amount of copyright protection as the prompt text. You can't claim copyright for the text "Rock music with angry vocals and 160 bpm with a guitar solo".<p>But I think training models on copyrighted content is stealing in the first place. It's not fair use, so it should be banned entirely.</p>
]]></description><pubDate>Sun, 05 Apr 2026 23:29:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47655024</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47655024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47655024</guid></item><item><title><![CDATA[New comment by hamasho in "How many products does Microsoft have named 'Copilot'?"]]></title><description><![CDATA[
<p>It makes sense. And Google has gone its own way by naming all its AI products “Gemini”.</p>
]]></description><pubDate>Sat, 04 Apr 2026 21:30:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47643648</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47643648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47643648</guid></item><item><title><![CDATA[New comment by hamasho in "I put my whole life into a single database"]]></title><description><![CDATA[
<p>Why don’t you just query the Palantir DB by your human ID? It has your entire life's data and much more.</p>
]]></description><pubDate>Tue, 10 Mar 2026 13:23:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47322940</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47322940</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47322940</guid></item><item><title><![CDATA[New comment by hamasho in "Launch HN: Terminal Use (YC W26) – Vercel for filesystem-based agents"]]></title><description><![CDATA[
<p>Hmm, so this isn't in the same category as computer use or browser use. I love the idea. A well-defined and controlled sandbox is really useful.
Off topic, but I was disappointed by computer use and browser use when I tried them three months ago. They couldn't complete many basic tasks. Browser use especially failed easily on slightly unorthodox websites: it couldn't find a select box implemented with divs, got stuck in an infinite loop when the submit button was disabled, and it even failed to complete the demo in its own README! I'm okay with open-source projects being a bit buggy, but a VC-funded company that already has a fancy landing page, provides the service to big corps, and offers paid plans should at least make sure the demo works.</p>
]]></description><pubDate>Mon, 09 Mar 2026 21:54:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47316119</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47316119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47316119</guid></item><item><title><![CDATA[New comment by hamasho in "Yoghurt delivery women combatting loneliness in Japan"]]></title><description><![CDATA[
<p>I grew up in a small village on a small island.
The yogurt lady was an essential part of the community.<p>Many stay-at-home moms (including my mom) seemed to enjoy her visits.
She and my mom talked a lot, sometimes for hours (I still can't figure out how she finished her rounds when she spent so much time with one person).
They chatted about recent events (the fisherman's daughter gave birth, the liquor shop's great-grandpa died of cancer, a newly opened restaurant in the nearest town sucked), and sometimes even shared personal struggles or family matters.
It really helped a lot of people fight the mental strain caused by the isolation of being traditional stay-at-home wives in a super rural area.
The only downside was that anything you shared with her would spread through the entire village before dawn.</p>
]]></description><pubDate>Sun, 08 Mar 2026 01:01:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47293219</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47293219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47293219</guid></item><item><title><![CDATA[New comment by hamasho in "Verification debt: the hidden cost of AI-generated code"]]></title><description><![CDATA[
<p>I've been spending much less time on reviews lately. I used to check whether the code was correct and well written, worked on my local machine as expected, and performed well. But I can't do that anymore. If they can vibe-code, why can't I vibe-review? Maybe something will go wrong in production, but it's not my responsibility. I also stopped volunteering for on-call (well, I shouldn't have in the first place). If I noticed someone reporting a production bug during non-working hours, I would investigate and implement the fix, usually faster than my coworkers. I thought it was my responsibility to contribute to the product if I could, even though it was beyond my job description. Working with AI-generated code has really demoralized me, and I can't love the product I'm working on anymore.</p>
]]></description><pubDate>Sun, 08 Mar 2026 00:07:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47292836</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47292836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47292836</guid></item><item><title><![CDATA[New comment by hamasho in "GPT-5.4"]]></title><description><![CDATA[
<p>I agree, but in general those chat apps have relatively bad user experiences for multibillion-dollar B2C companies. I used to run into a lot of surprises and frustrations while using Claude Code / Desktop, and I still encounter issues, but it's the best among the major LLM services.</p>
]]></description><pubDate>Fri, 06 Mar 2026 02:43:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47270186</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47270186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47270186</guid></item><item><title><![CDATA[New comment by hamasho in "Google Workspace CLI"]]></title><description><![CDATA[
<p>lol but it’s definitely happening. Some services are solely for LLM consumption, and humans are not welcome customers.</p>
]]></description><pubDate>Thu, 05 Mar 2026 14:08:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47261709</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47261709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47261709</guid></item><item><title><![CDATA[New comment by hamasho in "Statement from Dario Amodei on our discussions with the Department of War"]]></title><description><![CDATA[
<p>But in the stock market, it is almost impossible for companies like Anthropic, or any successful startup, not to become villains (profit first, no matter what). Anthropic especially needs to burn a huge amount of money, so it needs a lot of funding. The only way to keep the founders' idealism is probably to copy Zuckerberg: split the stock into voting and non-voting shares and trade only the non-voting ones.</p>
]]></description><pubDate>Fri, 27 Feb 2026 14:31:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47180894</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=47180894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47180894</guid></item><item><title><![CDATA[New comment by hamasho in "Vouch"]]></title><description><![CDATA[
<p>But one way to get better at communication is trial and error. This solution makes trying much harder, and eventually leads to fewer good communicators.</p>
]]></description><pubDate>Mon, 09 Feb 2026 00:38:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46940174</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46940174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46940174</guid></item><item><title><![CDATA[New comment by hamasho in "The Codex App"]]></title><description><![CDATA[
<p>A simpler, similar app: vibe-kanban<p><a href="https://www.vibekanban.com/">https://www.vibekanban.com/</a></p>
]]></description><pubDate>Mon, 02 Feb 2026 21:35:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46861927</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46861927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46861927</guid></item><item><title><![CDATA[New comment by hamasho in "Outsourcing thinking"]]></title><description><![CDATA[
<p><p><pre><code>  > Surely they know the risks, and surely people will be just as responsible with AI
</code></pre>
I can't imagine that even half of students understand the short- and long-term risks of using social media and AI intensively.
At least I didn't when I was a student.</p>
]]></description><pubDate>Sun, 01 Feb 2026 00:28:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46842393</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46842393</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46842393</guid></item><item><title><![CDATA[New comment by hamasho in "Outsourcing thinking"]]></title><description><![CDATA[
<p><p><pre><code>  > The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true.
</code></pre>
This really resonates with me.
If calculators returned correct answers only 99.9% of the time, it would be impossible to reliably build even small buildings with them.
We are using AI for a lot of small tasks inside big systems, or even for designing the entire architecture, and we still need to validate the answers ourselves, at least for the foreseeable future.
But outsourcing thinking drains much of the brainpower needed to do that, because validation often requires understanding a problem's detailed structure and the internal path of reasoning.<p>In the current situation, by vibing and YOLOing through most problems, we are losing the very ability we still need and can't replace with AI or other tools.</p>
]]></description><pubDate>Sun, 01 Feb 2026 00:08:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46842232</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46842232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46842232</guid></item><item><title><![CDATA[New comment by hamasho in "AI Slop Report: The Global Rise of Low-Quality AI Videos"]]></title><description><![CDATA[
<p>I see a lot of educational animal videos that copy BBC Earth's content and just replace David Attenborough's voice with AI, and it irritates me unreasonably.</p>
]]></description><pubDate>Sun, 28 Dec 2025 08:49:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46409518</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46409518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46409518</guid></item><item><title><![CDATA[New comment by hamasho in "AI Slop Report: The Global Rise of Low-Quality AI Videos"]]></title><description><![CDATA[
<p>I feel like the YouTube video results on Google Search are much worse than the results on YouTube itself. It's strange, because Google develops both Google Search and YouTube search. It's like Reddit in reverse: Reddit's own search is so unusable that you have to search on Google with "xxx reddit".</p>
]]></description><pubDate>Sun, 28 Dec 2025 08:45:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46409501</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46409501</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46409501</guid></item><item><title><![CDATA[New comment by hamasho in "What makes you senior"]]></title><description><![CDATA[
<p><p><pre><code>  > The moment you hand them something fuzzy, though, like ...
  > “we should probably think about scaling”,
  > that’s when you see the difference.
</code></pre>
Senior engineers should ask, "But do we need scaling? And if we do, how much is needed now and in the future?"
But I've seen a lot of seniors jump straight to implementing an unnecessarily complicated solution without asking, because they don't think about it much, want to have fun, or just don't have the energy to argue (I'm guilty of this myself).</p>
]]></description><pubDate>Tue, 23 Dec 2025 22:27:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46370208</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46370208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46370208</guid></item><item><title><![CDATA[New comment by hamasho in "Structured outputs create false confidence"]]></title><description><![CDATA[
<p>Story time.<p>I used Python's Instructor[1], a package that forces the model output to match a predefined Pydantic model.
It's used as in the example below, and the output is guaranteed to fit the model.<p><pre><code>    import instructor
    from pydantic import BaseModel

    class Person(BaseModel):
        name: str
        age: int

    client = instructor.from_provider("openai/gpt-5-nano")
    person = client.create(
        response_model=Person,
        messages=[{"role": "user", "content": "Extract: John is a 30-year-old"}]
    )
    print(person)
</code></pre>
I defined a response model for a chain-of-thought prompt, with the answer and its reasoning process, then asked questions.<p><pre><code>    class MathAnswer(BaseModel):
        value: int
        reasoning: str

    answer = client.create(
        response_model=MathAnswer,
        messages=[{"role": "user", "content": "What's the answer to 17*4+1? Think step by step"}]
    )
    print(f"answer={answer.value}, {answer.reasoning}")
</code></pre>
This worked in most cases, but once in a while, it produced very strange results:<p><pre><code>    67, First I calculated 17*4=68, then I added 1 so the answer is 69
</code></pre>
The actual implementation was much more complicated, with many complex properties, a lot of inserted context, and a long, engineered prompt, and it happened only occasionally, so it took me hours to figure out whether it was caused by a programming bug or just the LLM's randomness.<p>It turned out that because I defined MathAnswer in that order, the model output followed the same order and put the `reasoning` after the `value`, so the thinking process didn't influence the answer: `{"value": 67, "reasoning": "..."}` instead of `{"reasoning": "...", "value": 69}`.
I just changed the order of the model's properties and the problem was gone.<p><pre><code>    class MathAnswer(BaseModel):
        reasoning: str
        value: int
</code></pre>
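For what it's worth, the ordering matters because Pydantic preserves field declaration order in the JSON schema that generation is constrained to, so the model literally emits the `reasoning` tokens before the `value` tokens. A minimal sketch checking this (assuming Pydantic v2's `model_json_schema`):

```python
from pydantic import BaseModel

class MathAnswer(BaseModel):
    reasoning: str  # declared first, so the chain of thought is generated before the answer
    value: int

# Declaration order is preserved in the schema's "properties" dict,
# which is what a schema-constrained LLM decodes against.
print(list(MathAnswer.model_json_schema()["properties"].keys()))
# → ['reasoning', 'value']
```

This is also why "put the reasoning field first" is common advice for chain-of-thought with structured outputs: autoregressive models can only condition the answer on text they have already generated.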
[1] <a href="https://python.useinstructor.com/#what-is-instructor" rel="nofollow">https://python.useinstructor.com/#what-is-instructor</a><p>ETA: Codex and Claude Code only told me how shit my prompt and RAG system were, then suggested how to improve them, but that only made the problem worse. They really don't know how they themselves work.</p>
]]></description><pubDate>Sun, 21 Dec 2025 22:37:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46349308</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46349308</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46349308</guid></item><item><title><![CDATA[New comment by hamasho in "History LLMs: Models trained exclusively on pre-1913 texts"]]></title><description><![CDATA[
<p>Meanwhile in Japan, the second-largest bank created an AI pretending to be its president, replying to chats and attending video conferences…<p>[1] AI learns one year's worth of statements by Sumitomo Mitsui Financial Group's president [WBS]
<a href="https://youtu.be/iG0eRF89dsk" rel="nofollow">https://youtu.be/iG0eRF89dsk</a></p>
]]></description><pubDate>Fri, 19 Dec 2025 09:45:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46323977</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46323977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46323977</guid></item><item><title><![CDATA[New comment by hamasho in "Ask HN: Does anyone understand how Hacker News works?"]]></title><description><![CDATA[
<p>If I remember correctly, when Dropbox announced its launch, most replies here were “but I can self-host rsync!!” Well, it turned out most people can't.</p>
]]></description><pubDate>Thu, 18 Dec 2025 07:51:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46309988</link><dc:creator>hamasho</dc:creator><comments>https://news.ycombinator.com/item?id=46309988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46309988</guid></item></channel></rss>