<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ud0</title><link>https://news.ycombinator.com/user?id=ud0</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 11:56:05 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ud0" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ud0 in "Show HN: Turn photos into Wordle puzzles with AI that runs 100% in your browser"]]></title><description><![CDATA[
<p>It uses a very small model, small enough to run in a browser, so it is not very smart: <a href="https://huggingface.co/onnx-community/Florence-2-base-ft" rel="nofollow">https://huggingface.co/onnx-community/Florence-2-base-ft</a></p>
]]></description><pubDate>Mon, 06 Apr 2026 11:12:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47659383</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=47659383</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47659383</guid></item><item><title><![CDATA[Show HN: Turn photos into Wordle puzzles with AI that runs 100% in your browser]]></title><description><![CDATA[
<p>I built Moments to finally get this game idea out of my head. My original goal was to run on-device models specifically in mobile browsers, but local vision models in phone browsers are still too immature, so I focused on desktop.<p>How it works:<p>- You upload a photo.<p>- A local vision model running entirely in your browser captions it and picks a prominent object from the image.<p>- You guess the word, just like Wordle.<p>It uses a very small model, so it is not very smart: <a href="https://huggingface.co/onnx-community/Florence-2-base-ft" rel="nofollow">https://huggingface.co/onnx-community/Florence-2-base-ft</a></p>
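The "guess the word just like Wordle" step described above follows standard Wordle scoring. A minimal sketch in JavaScript (a hypothetical `scoreGuess` helper for illustration, not code from the Moments app): each letter is marked correct (right position), present (elsewhere in the word), or absent, with duplicate letters counted against how many unmatched copies remain in the target.

```javascript
// Wordle-style feedback scoring (illustrative sketch, not the Moments code).
// Returns an array of "correct" / "present" / "absent", one entry per letter.
function scoreGuess(guess, target) {
  const result = new Array(guess.length).fill("absent");
  const remaining = {}; // counts of target letters not matched exactly

  // First pass: exact-position matches; tally the rest of the target's letters.
  for (let i = 0; i < guess.length; i++) {
    if (guess[i] === target[i]) {
      result[i] = "correct";
    } else {
      remaining[target[i]] = (remaining[target[i]] || 0) + 1;
    }
  }

  // Second pass: letters present elsewhere, consuming the remaining counts
  // so duplicated guess letters are not over-marked.
  for (let i = 0; i < guess.length; i++) {
    if (result[i] !== "correct" && remaining[guess[i]] > 0) {
      result[i] = "present";
      remaining[guess[i]] -= 1;
    }
  }
  return result;
}
```

For example, guessing "crane" against the target "cigar" yields correct, present, present, absent, absent.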
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47658954">https://news.ycombinator.com/item?id=47658954</a></p>
<p>Points: 1</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 06 Apr 2026 10:09:36 +0000</pubDate><link>https://momentsgame.com/</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=47658954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47658954</guid></item><item><title><![CDATA[Ask HN: I don't get why Anthropic is limiting usage]]></title><description><![CDATA[
<p>I’m trying to understand the rationale behind Anthropic limiting certain types of third-party usage, e.g., OpenClaw.<p>From a naive perspective, more usage should mean more revenue, since customers pay per token. So why restrict it? If a third party is generating heavy usage, isn’t that ultimately beneficial for revenue and growth?<p>What other factors are they considering that are not immediately obvious? If I sold shoes, I'd be happy to sell more, regardless of how many resellers are in the chain.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47653057">https://news.ycombinator.com/item?id=47653057</a></p>
<p>Points: 5</p>
<p># Comments: 6</p>
]]></description><pubDate>Sun, 05 Apr 2026 19:38:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47653057</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=47653057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47653057</guid></item><item><title><![CDATA[Show HN: ScreenStack – AI-native platform purpose-built for technical interviews]]></title><description><![CDATA[
<p>Article URL: <a href="https://screenstack.tech/">https://screenstack.tech/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47286098">https://news.ycombinator.com/item?id=47286098</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 07 Mar 2026 09:49:46 +0000</pubDate><link>https://screenstack.tech/</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=47286098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47286098</guid></item><item><title><![CDATA[Using LLMs to evaluate technical interview performance]]></title><description><![CDATA[
<p>Article URL: <a href="https://dokasto.com/blog/we-are-letting-llms-decide/">https://dokasto.com/blog/we-are-letting-llms-decide/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47071202">https://news.ycombinator.com/item?id=47071202</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 19 Feb 2026 08:10:00 +0000</pubDate><link>https://dokasto.com/blog/we-are-letting-llms-decide/</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=47071202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47071202</guid></item><item><title><![CDATA[We Are Letting LLMs Decide Who Gets Hired and Doing It Wrong]]></title><description><![CDATA[
<p>Article URL: <a href="https://dokasto.com/blog/we-are-letting-llms-decide/">https://dokasto.com/blog/we-are-letting-llms-decide/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46773112">https://news.ycombinator.com/item?id=46773112</a></p>
<p>Points: 5</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 26 Jan 2026 23:18:29 +0000</pubDate><link>https://dokasto.com/blog/we-are-letting-llms-decide/</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=46773112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46773112</guid></item><item><title><![CDATA[New comment by ud0 in "Apple M5 chip"]]></title><description><![CDATA[
<p>Yes, but where are the production desktop apps using on-device AI right now?</p>
]]></description><pubDate>Thu, 16 Oct 2025 07:33:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45602442</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=45602442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45602442</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: Do You Use Teamblind.com?"]]></title><description><![CDATA[
<p>I'd really love to know about other sources that give you real information from actual people about tech careers, compensation, and all the minutiae of interviews and weird company tips. Do share.</p>
]]></description><pubDate>Thu, 02 Oct 2025 20:37:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45455252</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=45455252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45455252</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: Do You Use Teamblind.com?"]]></title><description><![CDATA[
<p>I first stumbled on Blind while job hunting, and since then I’ve used it as a resource, not a social media platform the way most people do. I follow specific tags like software engineering and career, and I don’t keep notifications on (in fact, the only app with notifications on my phone is WhatsApp).<p>For interviews, I search company-specific tags to find discussions and tips; ChatGPT makes it even easier now to extract insights from those threads. I also use it for compensation research. Occasionally I’ll check company gossip, and I have to say, Blind has correctly predicted layoffs at my employer twice. Beneath all the noise, some people really do share valuable inside information.<p>That’s why I treat Blind as a data-gathering tool, not a hangout. I mainly open it for interviews, negotiations, compensation benchmarks, or to gauge the general sentiment around a company. Honestly, I wish they had an API like Reddit’s; it would make pulling insights so much easier.</p>
]]></description><pubDate>Thu, 02 Oct 2025 05:56:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45446704</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=45446704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45446704</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: Do You Use Teamblind.com?"]]></title><description><![CDATA[
<p>I don’t use Teamblind as often as I used to, but it played a huge role in my career growth. Coming from a developing country, it helped me level the playing field. Through the platform, I discovered FAANG (my current employer for six years now), Leetcode, structured ways to prepare for interviews, and even the fact that sign-on bonuses exist, something I had no idea about before.<p>Teamblind has been so impactful for me that I’m more than happy to pay for it. While there’s certainly noise on the platform, I’ve learned to focus on the insightful conversations and resources that matter. If you are in the US it might not be so useful, but for those of us outside, it is gold.<p>I've been on HN for far longer than Blind, but Blind has had the most impact on my career, and I'm grateful for that.</p>
]]></description><pubDate>Wed, 01 Oct 2025 21:46:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45443927</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=45443927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45443927</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: What frustrates you most about video conferencing tools?"]]></title><description><![CDATA[
<p>It sounds like screen sharing is the main pain point. When that popup blocks your demo, what do you usually do in the moment: ignore it, drag it, or stop to fix it?</p>
]]></description><pubDate>Mon, 22 Sep 2025 08:50:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45330718</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=45330718</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45330718</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: What frustrates you most about video conferencing tools?"]]></title><description><![CDATA[
<p>Interesting throwback; it sounds like Intel’s app felt much simpler. When you use modern tools now, which part feels most overcomplicated compared to that old phone-book style experience?</p>
]]></description><pubDate>Mon, 22 Sep 2025 08:49:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45330711</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=45330711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45330711</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: What frustrates you most about video conferencing tools?"]]></title><description><![CDATA[
<p>Makes sense. When you do have to jump on Zoom/Teams/Meet, what slows you down the most: setup, figuring out controls, or the constant prompts?</p>
]]></description><pubDate>Mon, 22 Sep 2025 08:48:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45330704</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=45330704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45330704</guid></item><item><title><![CDATA[Ask HN: What frustrates you most about video conferencing tools?]]></title><description><![CDATA[
<p>I work at a FAANG company and have spent the last 5+ years building video conferencing tools. I hacked together a Chrome extension to fix my video colour. I planned to add more features: local recordings, on-device AI transcripts and summaries, and clean screenshots of screen shares. Then my company shut down our in-house tool and moved to Zoom. Around the same time, I joined a new team and had to onboard, which is when I realised I truly needed these features. In technical meetings, I often want to capture a screenshot of a shared screen or record a brief explanation that transcripts miss, with permission, of course.<p>With Zoom, I'm unable to do any of this because of admin controls. Many AI notetakers exist, but few run locally, and wiring a Chrome extension into Zoom is messy. These features are entirely feasible, so it frustrates me every time I join a call.<p>What do you find painful about video conferencing? And if you could design a Zoom/Teams/Google Meet alternative from scratch, what would it do for you?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45325282">https://news.ycombinator.com/item?id=45325282</a></p>
<p>Points: 4</p>
<p># Comments: 8</p>
]]></description><pubDate>Sun, 21 Sep 2025 18:23:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45325282</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=45325282</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45325282</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: What are you working on? (July 2025)"]]></title><description><![CDATA[
<p>Many teachers in rural parts of the world might have a laptop but are underserved when it comes to AI. I noticed that a lot of them still create questions manually, which is a time-consuming process. I created an offline-first desktop app for this: <a href="https://github.com/dokasto/Saidia">https://github.com/dokasto/Saidia</a></p>
]]></description><pubDate>Sun, 03 Aug 2025 22:53:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44780532</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=44780532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44780532</guid></item><item><title><![CDATA[Show HN: Offline-First AI Assistant for Educators]]></title><description><![CDATA[
<p>Saidia is an offline-first AI assistant tailored for educators, enabling them to generate questions directly from source materials.<p>Built with Electron, Ollama, and Gemma 3n, Saidia works entirely offline and is optimised for basic hardware. It is ideal for areas with unreliable internet and power, giving educators capable teaching tools where cloud-based alternatives are impractical or impossible.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44766230">https://news.ycombinator.com/item?id=44766230</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 02 Aug 2025 10:02:28 +0000</pubDate><link>https://github.com/dokasto/Saidia</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=44766230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44766230</guid></item><item><title><![CDATA[Ask HN: I built a Wordle-like game that uses your photos and offline AI]]></title><description><![CDATA[
<p>Hi HN,<p>I’ve been working on a game idea inspired by Wordle, but with a unique twist: it uses your own photos to generate guessing words. Here’s how it works: the app picks a random picture from your gallery. It uses a small vision-language model, running entirely on your phone, to identify a word from the image. The chosen word could describe an object, the mood, or any notable feature in the picture. You then try to guess the word, just like Wordle.<p>The app is entirely offline, private, and doesn’t require internet access. I’ve always been fascinated by the possibilities of small models on devices, and I have more ideas I’d like to explore in the future.<p>I currently have a rough prototype ready, but developing this further is quite time-consuming, as I also have a full-time job. Before investing more time into refining it, I’d love to know if this concept sounds appealing and if using your own gallery photos is something you’d find engaging.<p>Thanks in advance for your insights!<p>See screenshots here: https://imgur.com/a/Rwsv7Kf</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44596461">https://news.ycombinator.com/item?id=44596461</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 17 Jul 2025 18:26:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44596461</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=44596461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44596461</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: Are LLMs just answering what we want to hear?"]]></title><description><![CDATA[
<p>This is really good; I just tried it.</p>
]]></description><pubDate>Sat, 29 Mar 2025 17:34:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=43517126</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=43517126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43517126</guid></item><item><title><![CDATA[New comment by ud0 in "Andrej Karpathy on the State of Web Development"]]></title><description><![CDATA[
<p>People who say "just use vanilla" have never tried to build a modern web application with a rich UI.</p>
]]></description><pubDate>Sat, 29 Mar 2025 13:03:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=43515201</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=43515201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43515201</guid></item><item><title><![CDATA[New comment by ud0 in "Ask HN: How do you use local LLMs?"]]></title><description><![CDATA[
<p>I use DeepSeek via LM Studio for reading sensitive and non-sensitive docs and contracts, and for searching bank statements and bills.</p>
]]></description><pubDate>Sun, 23 Feb 2025 08:29:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=43147760</link><dc:creator>ud0</dc:creator><comments>https://news.ycombinator.com/item?id=43147760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43147760</guid></item></channel></rss>