<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: zhyder</title><link>https://news.ycombinator.com/user?id=zhyder</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 15:33:29 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=zhyder" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Google's AI Studio now integrates with Firebase for vibe coding production apps]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.google/innovation-and-ai/technology/developers-tools/full-stack-vibe-coding-google-ai-studio/">https://blog.google/innovation-and-ai/technology/developers-tools/full-stack-vibe-coding-google-ai-studio/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47450049">https://news.ycombinator.com/item?id=47450049</a></p>
<p>Points: 2</p>
<p># Comments: 3</p>
]]></description><pubDate>Fri, 20 Mar 2026 03:15:06 +0000</pubDate><link>https://blog.google/innovation-and-ai/technology/developers-tools/full-stack-vibe-coding-google-ai-studio/</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47450049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47450049</guid></item><item><title><![CDATA[New comment by zhyder in "MacBook Neo"]]></title><description><![CDATA[
<p>Looks like the best display you can get in a laptop at this price: 2408x1506 resolution, 500 nits, antireflective coating (!). And bonus points for no silly notch.</p>
]]></description><pubDate>Wed, 04 Mar 2026 16:42:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47250122</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47250122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47250122</guid></item><item><title><![CDATA[New comment by zhyder in "Anthropic Cowork feature creates 10GB VM bundle on macOS without warning"]]></title><description><![CDATA[
<p>I guess it could warn about it, but the VM sandbox is the best part of Cowork. The sandbox is necessary to balance the power you get from generating code (that's hidden from the user) with the security you need for non-technical users. I'd go even further and make the user grant host filesystem access only to specific folders, and warn about anything with write access: I can think of lots of easy-to-use UIs for this.</p>
]]></description><pubDate>Mon, 02 Mar 2026 16:32:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47220221</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47220221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47220221</guid></item><item><title><![CDATA[Netflix drops bid for Warner Bros after Paramount offer]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.theverge.com/streaming/885753/netflix-exit-warner-bros-discovery-deal-paramount">https://www.theverge.com/streaming/885753/netflix-exit-warner-bros-discovery-deal-paramount</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47173710">https://news.ycombinator.com/item?id=47173710</a></p>
<p>Points: 20</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 26 Feb 2026 23:25:20 +0000</pubDate><link>https://www.theverge.com/streaming/885753/netflix-exit-warner-bros-discovery-deal-paramount</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47173710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47173710</guid></item><item><title><![CDATA[New comment by zhyder in "Nano Banana 2: Google's latest AI image generation model"]]></title><description><![CDATA[
<p>Model card: <a href="https://deepmind.google/models/model-cards/gemini-3-1-flash-image/" rel="nofollow">https://deepmind.google/models/model-cards/gemini-3-1-flash-...</a><p>Pretty close to Gemini 3 Pro Image (aka Nano Banana Pro) in most benchmarks, even without thinking+search, and even exceeding it in the two most important ones, 'Overall Preference' and 'Visual Quality'. I'm excited about the big jump in Infographics/Factuality (even without thinking+search; I'm surprised that text+image search grounding doesn't make an even bigger dent).</p>
]]></description><pubDate>Thu, 26 Feb 2026 19:33:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47170915</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47170915</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47170915</guid></item><item><title><![CDATA[More plugin support in Claude Cowork]]></title><description><![CDATA[
<p>Article URL: <a href="https://claude.com/blog/cowork-plugins-across-enterprise">https://claude.com/blog/cowork-plugins-across-enterprise</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47139651">https://news.ycombinator.com/item?id=47139651</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 24 Feb 2026 17:13:47 +0000</pubDate><link>https://claude.com/blog/cowork-plugins-across-enterprise</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47139651</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47139651</guid></item><item><title><![CDATA[New comment by zhyder in "Gemini 3.1 Pro"]]></title><description><![CDATA[
<p>Agree, can't wait for updates to the diffusion model.<p>Could be useful for planning too, given its tendency to think big picture first. Even if it's just an additional subagent to double-check with an "off the top of your head" or "don't think, share your first thought" type of question. More generally, I'd like to see how sequencing autoregressive thinking with diffusion over multiple steps might help with better overall thinking.</p>
]]></description><pubDate>Thu, 19 Feb 2026 17:42:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47076596</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47076596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47076596</guid></item><item><title><![CDATA[New comment by zhyder in "Gemini 3.1 Pro"]]></title><description><![CDATA[
<p>Surprisingly big jump in ARC-AGI-2, from 31% to 77%; I guess there's some RLHF focused on the benchmark, given it was previously far behind the competition and is now ahead.<p>Apart from that, the usual predictable gains in coding. It's still a great sweet spot for performance, speed, and cost. I need to hack Claude Code to keep its agentic logic+prompts but use Gemini models.<p>I wish Google would also update Flash-Lite to 3.0+; I'd like to use that for the Explore subagent (which Claude Code uses Haiku for). These subagents seem to be Claude Code's strength over Gemini CLI, which still has them only in experimental mode and doesn't have read-only ones like Explore.</p>
]]></description><pubDate>Thu, 19 Feb 2026 16:38:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47075664</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47075664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47075664</guid></item><item><title><![CDATA[New comment by zhyder in "The only moat left is money?"]]></title><description><![CDATA[
<p>"the value of a human eyeball" / attention is and always will be the limited resource. But I wish the way the economy worked wasn't that attention is sold for money, which makes money the moat, and sets a floor on how low-priced things can get for customers too. Is this really the best the economy can do? Or is it possible to have a fair LLM-based search engine that matches customer need description with stated product descriptions from providers (while weighing customer reviews, etc)?</p>
]]></description><pubDate>Wed, 18 Feb 2026 17:34:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47063656</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=47063656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47063656</guid></item><item><title><![CDATA[New comment by zhyder in "Ex-GitHub CEO launches a new developer platform for AI agents"]]></title><description><![CDATA[
<p>Hmm, the whole point of checkpoints seems to be to reduce token waste by saving repeat thinking work. But wouldn't pulling N checkpoints into the context of the N+1th task be MUCH more expensive? It's at odds with the current practice of clearing context regularly to save on input tokens. Even subagents (which I think are the real superpower that Claude Code has over Gemini CLI for now) by their nature get spawned with fresh, near-empty context.<p>Token costs aside, arguably fresh context is also better for problem solving. When it was just me coding by hand, I didn't save all my intermediate thinking work anywhere: instead, thinking afresh when a similar problem came up later helped me come up with better solutions. I did occasionally save my thinking in design docs, but the equivalent to that is CLAUDE.md and similar human-reviewed markdown saved at explicit -umm- checkpoints.</p>
]]></description><pubDate>Wed, 11 Feb 2026 04:07:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46970704</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46970704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46970704</guid></item><item><title><![CDATA[New comment by zhyder in "Speed up responses with fast mode"]]></title><description><![CDATA[
<p>So 2.5x the speed at 6x the price [1].<p>Quite a premium for speed. Especially when Gemini 3 Pro is 1.8x the tokens/sec speed (of regular-speed Opus 4.6) at 0.45x the price [2]. Though it's worse at coding, and Gemini CLI doesn't have the agentic strength of Claude Code, yet.<p>[1] - <a href="https://x.com/claudeai/status/2020207322124132504" rel="nofollow">https://x.com/claudeai/status/2020207322124132504</a>
[2] - <a href="https://artificialanalysis.ai/leaderboards/models" rel="nofollow">https://artificialanalysis.ai/leaderboards/models</a></p>
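<p>A quick sketch of that arithmetic (the multipliers are the quoted figures, both relative to regular-speed Opus 4.6; treat them as rough):

```python
# Rough value-for-speed comparison using the multipliers quoted above
# (both relative to regular-speed Opus 4.6; numbers are approximate).
fast_mode = {"speed": 2.5, "price": 6.0}       # Opus 4.6 fast mode
gemini_3_pro = {"speed": 1.8, "price": 0.45}   # Gemini 3 Pro

def speed_per_dollar(m):
    # Relative tokens/sec per relative unit of price: higher = cheaper speed.
    return m["speed"] / m["price"]

print(round(speed_per_dollar(fast_mode), 2))     # 0.42
print(round(speed_per_dollar(gemini_3_pro), 2))  # 4.0
```

On these numbers, Gemini 3 Pro delivers roughly 10x the speed per dollar, which is the premium I mean.</p>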
]]></description><pubDate>Sat, 07 Feb 2026 23:25:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46929408</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46929408</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46929408</guid></item><item><title><![CDATA[New comment by zhyder in "A decentralized peer-to-peer messaging application that operates over Bluetooth"]]></title><description><![CDATA[
<p>Love it. I wonder if it's viable for citizen journalism in warzones and areas of civil unrest, given the larger size of photos (and short videos) and the inherently slow transfer rates and battery-life implications of going through multiple hops before exiting to the Internet from an area that's otherwise offline. What's the back-of-the-envelope math here on viable bandwidth?<p>Wifi obviously has higher bandwidth, but I guess it isn't viable as a mesh, or is there any trick with turning hotspots on phones on/off dynamically that'd make it viable? (Afaik older phones made you pick between being a hotspot or being a regular wifi client, but at least some newer ones seem to allow both simultaneously.)<p>I'm definitely hoping for a future with wider support for C2PA (content credentials on images) in phone cameras so these photos can power citizen journalism. So far the Samsung S25 and Pixel 10 support C2PA in the camera hardware: we need other phone makers (especially Apple) to get on board already... if you're an iPhone user, please help yell at Apple support etc!<p>Aside: I registered a domain and plan to build a citizen-journalism news feed for such photos (and uncut videos). I see it as the antidote to Instagram et al's feeds that're full of AI slop (and plenty of fakery even before AI-generated imagery got big). And it's essential to truth, democracy, and ultimately (maybe I'm too idealistic here) peace. Aside to the aside: I wish some of us techies banded together to build "peace tech" as a new sector in tech; DM me if interested in brainstorming or working together.</p>
]]></description><pubDate>Mon, 19 Jan 2026 15:53:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46680352</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46680352</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46680352</guid></item><item><title><![CDATA[New comment by zhyder in "Don't fall into the anti-AI hype"]]></title><description><![CDATA[
<p>Sounds like antirez, simonw, et al are still advocating reviewing the code output of these agents for now. But presumably soon (within months?) the agents will be good enough that line-by-line review will no longer be necessary, or humanly possible as we crank the agents up to 11.<p>But then how will we review each PR enough to have confidence in it?<p>And how will we understand the overall codebase after it gets much bigger?<p>Are there any better tools here than just asking LLMs to summarize code or flag risky code... any good "code reader" tools (like code editors, but focused on this reading task)?</p>
]]></description><pubDate>Sun, 11 Jan 2026 23:42:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46581786</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46581786</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46581786</guid></item><item><title><![CDATA[Ask HN: How do you review the code from agents?]]></title><description><![CDATA[
<p>Engineers are increasingly setting up coding agents to run continuously, some running multiple agents in parallel. I've struggled to do that because I've struggled to build confidence without full code review.<p>How are y'all reviewing all this massive code output? How can it possibly scale as the agents run faster or as you add agents?<p>I guess we'll have to learn to give up some control; we'll stop reviewing all lines of code and increasingly rely on AI tools to summarize and flag specific lines. Are any tools good at this?<p>More generally, how do you build confidence in the code, both at the PR level and eventually at the codebase level (when 90+% of the code in it will be written by agents)?<p>Am I too worried about code review, or are there bigger bottlenecks in our jobs when using these coding agents?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46557580">https://news.ycombinator.com/item?id=46557580</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 09 Jan 2026 18:55:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46557580</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46557580</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46557580</guid></item><item><title><![CDATA[New comment by zhyder in "VW is bringing physical buttons back to the dashboard with the ID. Polo EV"]]></title><description><![CDATA[
<p>Most car manufacturers made this mistake by mimicking Tesla, then the leader in innovation (and customer satisfaction), too closely.<p>General cautionary tale: just because a company is successful doesn't mean it's doing _everything_ right. Plenty of folks who love their Teslas would prefer a few more buttons (and door handles on the inside, etc.) if given the choice. You could say similar things about some choices Apple has made.</p>
]]></description><pubDate>Tue, 06 Jan 2026 17:54:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46515851</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46515851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46515851</guid></item><item><title><![CDATA[New comment by zhyder in "Your job is to deliver code you have proven to work"]]></title><description><![CDATA[
<p>"Almost anyone can prompt an LLM to generate a thousand-line patch and submit it for code review. That’s no longer valuable. What’s valuable is contributing code that is proven to work."<p>I'd go further: what's valuable is code review. So review the AI agent's code yourself first, ensuring not only that it's proven to work, but also that it's good quality (across various dimensions, but most importantly maintainability). If you're already overwhelmed by that thousand-line patch, try to create a hundred-line patch that accomplishes the same task.<p>I expect code-review tools to also change rapidly, as the lines of code written per person dramatically increase. Any good new tools already?</p>
]]></description><pubDate>Thu, 18 Dec 2025 19:07:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46317081</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46317081</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46317081</guid></item><item><title><![CDATA[New comment by zhyder in "Coursera to combine with Udemy"]]></title><description><![CDATA[
<p>Have you tried providing them with a grounding resource, e.g. attaching a file in ChatGPT or NotebookLM? Yes, you need some human expert to create (or curate) that grounding resource in the first place, but LLMs handle the rest well: presenting info in different ways and at different paces, interacting with the learner like a tutor, etc.</p>
]]></description><pubDate>Wed, 17 Dec 2025 19:36:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46304419</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46304419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46304419</guid></item><item><title><![CDATA[New comment by zhyder in "Coursera to combine with Udemy"]]></title><description><![CDATA[
<p>End of an era: video (with broadband Internet penetration) was the best tool we had for 15+ years. But LLMs are now good enough, including in image+infographic generation and factuality (especially when grounding resources are provided... which is where human experts still matter). I think video is now better only for learning physical, hands-on skills... and those videos tend to be on YouTube rather than on Udemy or Coursera.<p>Coursera's model will still survive for a while, given people's desire for branded credentials (university degree credits or company-branded certificates)... until the university bubble bursts too in 10+ years. Start of the trend: <a href="https://www.nbcnews.com/politics/politics-news/poll-dramatic-shift-americans-no-longer-see-four-year-college-degrees-rcna243672" rel="nofollow">https://www.nbcnews.com/politics/politics-news/poll-dramatic...</a><p>A bit of a plug: we tried building a consumer business with a learning experience built atop these LLMs: <a href="https://uphop.ai/learn" rel="nofollow">https://uphop.ai/learn</a> . It's still offered free to consumers, but we're now succeeding much better in B2B ("you either die a consumer business or live long enough to become B2B" was very true for us).</p>
]]></description><pubDate>Wed, 17 Dec 2025 17:50:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46302925</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46302925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46302925</guid></item><item><title><![CDATA[New comment by zhyder in "Gemini 3 Flash: Frontier intelligence built for speed"]]></title><description><![CDATA[
<p>Glad to see the big improvement in the SimpleQA Verified benchmark (28%->69%), which is meant to measure factuality (built-in, i.e. without adding grounding resources). That's one benchmark where all models seemed to have low scores until recently. Can't wait to see a model go over 90%... then it'll be years of competition over the number of 9s in such a factuality benchmark, but that'd be glorious.</p>
]]></description><pubDate>Wed, 17 Dec 2025 17:19:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46302453</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46302453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46302453</guid></item><item><title><![CDATA[CC: Google Labs AI agent for email+calendar]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.google/technology/google-labs/cc-ai-agent/">https://blog.google/technology/google-labs/cc-ai-agent/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46296338">https://news.ycombinator.com/item?id=46296338</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 16 Dec 2025 23:45:20 +0000</pubDate><link>https://blog.google/technology/google-labs/cc-ai-agent/</link><dc:creator>zhyder</dc:creator><comments>https://news.ycombinator.com/item?id=46296338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46296338</guid></item></channel></rss>