<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: kmod</title><link>https://news.ycombinator.com/user?id=kmod</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 22:17:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=kmod" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by kmod in "OpenCode – Open source AI coding agent"]]></title><description><![CDATA[
<p>FWIW this was changed about a week ago: they changed the logic to match the documentation rather than defaulting to sending your prompts to their servers. This is why so many people have noticed this happening, but if you ask an AI about it right now it will say it's not true.<p>Personally I think it's necessary to run opencode itself inside a sandbox, and if you do that you can see all of the rejected network calls it tries to make even in local mode. I use srt and it was pretty straightforward to set up.</p>
]]></description><pubDate>Sat, 21 Mar 2026 17:07:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47468887</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=47468887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47468887</guid></item><item><title><![CDATA[New comment by kmod in "Claude Opus 4.6"]]></title><description><![CDATA[
<p>I think it's interesting that they dropped the date from the API model name: it's just called "claude-opus-4-6", whereas the previous one was "claude-opus-4-5-20251101". This isn't an alias like "claude-opus-4-5" was; it's the actual model name. I think this means they're comfortable bumping the version number if they want to release a revision.</p>
]]></description><pubDate>Fri, 06 Feb 2026 01:02:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46907680</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=46907680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46907680</guid></item><item><title><![CDATA[New comment by kmod in "Gemini in Chrome"]]></title><description><![CDATA[
<p>They are definitely capable of writing such statements, which you can see in their enterprise products. In my Google Workspace gemini app it says pretty prominently and clearly:<p><pre><code>  Your [ORGNAME] chats aren’t used to improve our models
</code></pre>
The Google Workspace privacy hub is similarly easy to read and clear that they don't train on your data: <a href="https://support.google.com/a/answer/15706919" rel="nofollow">https://support.google.com/a/answer/15706919</a><p>So they definitely understand that people want to hear that their data isn't being used for training, and they know how to say it clearly and reassuringly. Which makes the omission of that in their consumer products more telling in my view.</p>
]]></description><pubDate>Fri, 19 Sep 2025 15:41:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45302874</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=45302874</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45302874</guid></item><item><title><![CDATA[New comment by kmod in "In a first, Google has released data on how much energy an AI prompt uses"]]></title><description><![CDATA[
<p><a href="https://azallianceforgolf.org/wp-content/uploads/2023/01/C-Study_AZ-Golf-Industry-Economic-Contribution.pdf" rel="nofollow">https://azallianceforgolf.org/wp-content/uploads/2023/01/C-S...</a><p>Page 21 says Arizona's 2015 golf course irrigation was 120 million gallons per day, citing the US Geological Survey.<p><a href="https://dgtlinfra.com/data-center-water-usage/" rel="nofollow">https://dgtlinfra.com/data-center-water-usage/</a><p>says Google's datacenter water consumption in 2023 was 5.2 billion gallons, or ~14 million gallons a day. Microsoft was ~4.7, Facebook was 2.6, Apple was 2.3, and AWS didn't seem to disclose. These numbers seem to be pulled from what the companies published.<p>The total for these companies was ~30 million gallons a day. Apply your best guesses as to what fraction of overall datacenter usage these companies represent, what fraction of datacenter usage is AI, and what 2025 usage looks like compared to 2023. My guess is it's unlikely to come out to more than 120 million.<p>I didn't vet this that carefully, so take the numbers with a grain of salt, but the rough comparison does seem to hold: Arizona golf courses are the larger users of water.<p>Agricultural numbers are much higher; the California almond industry uses ~4,000 million gallons of water a day.</p>
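The conversions above are just annual-to-daily arithmetic. A minimal sketch, assuming the company figures are annual totals in billions of gallons (the same unvetted source numbers, not new data):

```python
# Rough back-of-the-envelope check of the water figures quoted above.
# Assumption: the company figures are annual totals in billions of gallons.

def gallons_per_day(billion_gallons_per_year: float) -> float:
    """Convert an annual total (billions of gallons) to gallons per day."""
    return billion_gallons_per_year * 1e9 / 365

google_daily = gallons_per_day(5.2)   # Google's 2023 self-reported total
az_golf_daily = 120e6                 # AZ golf irrigation, USGS-cited 2015
ca_almonds_daily = 4000e6             # CA almond industry, rough figure

print(f"Google datacenters: ~{google_daily / 1e6:.0f}M gal/day")   # ~14M
print(f"AZ golf irrigation: {az_golf_daily / 1e6:.0f}M gal/day")
print(f"CA almond industry: {ca_almonds_daily / 1e6:.0f}M gal/day")
```

On these inputs Google's daily figure comes out well under the Arizona golf number, which is the comparison the comment is making.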
]]></description><pubDate>Thu, 21 Aug 2025 20:46:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44977868</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=44977868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44977868</guid></item><item><title><![CDATA[New comment by kmod in "In a first, Google has released data on how much energy an AI prompt uses"]]></title><description><![CDATA[
<p>I was also surprised when someone asked me about AI's water consumption because I had never heard of it being an issue. But a cursory search shows that datacenters use quite a bit more water than I realized, on the order of 1 liter of water per kWh of electricity. I see a lot of talk about how the hyperscalers are doing better than this and are trying to get to net-positive, but everything I saw was about quantifying and optimizing this number rather than debunking it as some sort of myth.<p>I find "1 liter per kWh" to be a bit hard to visualize, but when they talk about building a gigawatt datacenter, that's 278 L/s. A typical showerhead is 0.16 L/s. The Californian almond industry apparently uses roughly 200 kL/s averaged over the entire year -- 278 L/s is enough for about 4 square miles of almond orchards.<p>So it seems like a real thing but maybe not that drastic, especially since I think the hyperscaler numbers are better than this.</p>
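The gigawatt figure above is pure unit arithmetic; a minimal sketch (the 1 L/kWh input is the rough rule of thumb being discussed, not a measured value):

```python
# Convert "1 liter per kWh" into a flow rate for a 1 GW datacenter.
POWER_KW = 1e6            # 1 gigawatt = 1,000,000 kW, drawn continuously
LITERS_PER_KWH = 1.0      # rough rule of thumb for datacenter water use

kwh_per_second = POWER_KW / 3600          # kWh consumed each second
liters_per_second = kwh_per_second * LITERS_PER_KWH

SHOWERHEAD_L_PER_S = 0.16                 # typical showerhead flow rate
print(f"~{liters_per_second:.0f} L/s")    # ~278 L/s
print(f"~{liters_per_second / SHOWERHEAD_L_PER_S:.0f} showerheads")
```

That is, a 1 GW facility at 1 L/kWh would evaporate water at roughly the rate of ~1,700 showerheads running continuously.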
]]></description><pubDate>Thu, 21 Aug 2025 20:29:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44977675</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=44977675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44977675</guid></item><item><title><![CDATA[Asimov: The Code Research Agent for Engineering Teams]]></title><description><![CDATA[
<p>Article URL: <a href="https://reflection.ai/blog/introducing-asimov/">https://reflection.ai/blog/introducing-asimov/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44588066">https://news.ycombinator.com/item?id=44588066</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 16 Jul 2025 23:41:40 +0000</pubDate><link>https://reflection.ai/blog/introducing-asimov/</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=44588066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44588066</guid></item><item><title><![CDATA[Unless users take action, Android will let Gemini access third-party apps]]></title><description><![CDATA[
<p>Article URL: <a href="https://arstechnica.com/security/2025/07/unless-users-take-action-android-will-let-gemini-access-third-party-apps/">https://arstechnica.com/security/2025/07/unless-users-take-action-android-will-let-gemini-access-third-party-apps/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44502736">https://news.ycombinator.com/item?id=44502736</a></p>
<p>Points: 13</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 08 Jul 2025 18:35:10 +0000</pubDate><link>https://arstechnica.com/security/2025/07/unless-users-take-action-android-will-let-gemini-access-third-party-apps/</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=44502736</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44502736</guid></item><item><title><![CDATA[New comment by kmod in "Gemini CLI"]]></title><description><![CDATA[
<p>I've found a method that gives me a lot more clarity about a company's privacy policy:<p><pre><code>  1. Go to their enterprise site
  2. See what privacy guarantees they advertise above the consumer product
  3. Conclusion: those are things that you do not get in the consumer product
</code></pre>
These companies do understand what privacy people want and how to write it in plain language, and they do that when they actually offer it (to their enterprise clients). You can diff this against what they say to their consumers to see where they are trying to find wiggle room ("finetuning" is not "training", "ever got free credits" means not-"is a paid account", etc.).<p>For Code Assist, here's their enterprise-oriented page vs their consumer-oriented page:<p><a href="https://cloud.google.com/gemini/docs/codeassist/security-privacy-compliance#data-protection-privacy" rel="nofollow">https://cloud.google.com/gemini/docs/codeassist/security-pri...</a><p><a href="https://developers.google.com/gemini-code-assist/resources/privacy-notice-gemini-code-assist-individuals" rel="nofollow">https://developers.google.com/gemini-code-assist/resources/p...</a><p>It seems like these are both incomplete, and one would need to read their overall pages, which would be something more like:<p><a href="https://support.google.com/a/answer/15706919?hl=en" rel="nofollow">https://support.google.com/a/answer/15706919?hl=en</a><p><a href="https://support.google.com/gemini/answer/13594961?hl=en#reviewers" rel="nofollow">https://support.google.com/gemini/answer/13594961?hl=en#revi...</a></p>
]]></description><pubDate>Wed, 25 Jun 2025 21:22:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44381955</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=44381955</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44381955</guid></item><item><title><![CDATA[New comment by kmod in "Building supercomputers for autocrats probably isn't good for democracy"]]></title><description><![CDATA[
<p>I agree in general, but I think some important context here is that the author of this post was previously on the OpenAI board (the board that fired Sam Altman).</p>
]]></description><pubDate>Mon, 09 Jun 2025 11:47:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44223486</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=44223486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44223486</guid></item><item><title><![CDATA[New Amazon EC2 P6-B200 Instances Powered by Nvidia Blackwell GPUs]]></title><description><![CDATA[
<p>Article URL: <a href="https://aws.amazon.com/blogs/aws/new-amazon-ec2-p6-b200-instances-powered-by-nvidia-blackwell-gpus-to-accelerate-ai-innovations/">https://aws.amazon.com/blogs/aws/new-amazon-ec2-p6-b200-instances-powered-by-nvidia-blackwell-gpus-to-accelerate-ai-innovations/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44016748">https://news.ycombinator.com/item?id=44016748</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 17 May 2025 20:26:01 +0000</pubDate><link>https://aws.amazon.com/blogs/aws/new-amazon-ec2-p6-b200-instances-powered-by-nvidia-blackwell-gpus-to-accelerate-ai-innovations/</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=44016748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44016748</guid></item><item><title><![CDATA[New comment by kmod in "Google Gemini has the worst LLM API"]]></title><description><![CDATA[
<p>The worst part to me is the privacy nightmare with AI Studio. It's essentially impossible to tell whether any particular API call will end up being included in their training data, since this depends on properties that are stored elsewhere and are not available to the developer. Even a simple property such as "does this account have billing enabled" is oddly difficult to evaluate: I was told by their support that because I had at one point had free credits on my account, it was a trial account and not a billed account, even though I had a credit card attached and was being charged. I don't know if this is true, and there is no way for me to find out.<p>At some point they updated their privacy policy in this regard, but instead of saying that this will cause them to train on your data, the privacy policy now says both that they will train on this data and that they will not, with no indication of which statement takes precedence over the other.</p>
]]></description><pubDate>Sun, 04 May 2025 16:58:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=43887864</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=43887864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43887864</guid></item><item><title><![CDATA[New comment by kmod in "Google Gemini has the worst LLM API"]]></title><description><![CDATA[
<p>There are a few conditions that take precedence over having-billing-enabled and will cause AI Studio to train on your data. This is why I personally use Vertex.</p>
]]></description><pubDate>Sun, 04 May 2025 16:52:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43887832</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=43887832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43887832</guid></item><item><title><![CDATA[LG and Samsung Are Pioneering the Netflix Model for Home Appliances]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.bloomberg.com/news/newsletters/2025-04-11/lg-and-samsung-are-pioneering-the-netflix-model-for-home-appliances">https://www.bloomberg.com/news/newsletters/2025-04-11/lg-and-samsung-are-pioneering-the-netflix-model-for-home-appliances</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43655679">https://news.ycombinator.com/item?id=43655679</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 11 Apr 2025 16:31:17 +0000</pubDate><link>https://www.bloomberg.com/news/newsletters/2025-04-11/lg-and-samsung-are-pioneering-the-netflix-model-for-home-appliances</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=43655679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43655679</guid></item><item><title><![CDATA[New comment by kmod in "LLMs understand nullability"]]></title><description><![CDATA[
<p>I found this overly handwavy, but I discovered that there is a non-"gentle" version of this page which is more explicit:<p><a href="https://dmodel.ai/nullability/">https://dmodel.ai/nullability/</a></p>
]]></description><pubDate>Mon, 07 Apr 2025 20:46:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=43615764</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=43615764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43615764</guid></item><item><title><![CDATA[New comment by kmod in "Gemini 2.5"]]></title><description><![CDATA[
<p>The benchmark numbers don't really mean anything -- Google says that Gemini 2.5 Pro has an AIME score of 86.7 which beats o3-mini's score of 86.5, but OpenAI's announcement post [1] said that o3-mini-high has a score of 87.3 which Gemini 2.5 would lose to. The chart says "All numbers are sourced from providers' self-reported numbers" but the only mention of o3-mini having a score of 86.5 I could find was from this other source [2]<p>[1] <a href="https://openai.com/index/openai-o3-mini/" rel="nofollow">https://openai.com/index/openai-o3-mini/</a>
[2] <a href="https://www.vals.ai/benchmarks/aime-2025-03-24" rel="nofollow">https://www.vals.ai/benchmarks/aime-2025-03-24</a><p>You just have to use the models yourself and see. In my experience o3-mini is much worse than o1.</p>
]]></description><pubDate>Tue, 25 Mar 2025 18:25:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=43474284</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=43474284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43474284</guid></item><item><title><![CDATA[New comment by kmod in "Gemini 2.5"]]></title><description><![CDATA[
<p>It's "experimental", which means that it is not fully released. In particular, the "experimental" tag means that it is subject to a different privacy policy and that they reserve the right to train on your prompts.<p>2.0 Pro is also still "experimental", so I agree with GP that it's pretty odd that they are "releasing" the next version despite never having fully released the previous one.</p>
]]></description><pubDate>Tue, 25 Mar 2025 18:14:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43474183</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=43474183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43474183</guid></item><item><title><![CDATA[New comment by kmod in "Meta claims torrenting pirated books isn't illegal without proof of seeding"]]></title><description><![CDATA[
<p>I believe that at least in the past the entertainment industry would try to detect someone seeding a file before going after them. The idea being that someone downloading is receiving a copy (not illegal), and the act of making the copy (illegal) was done by the seeder. I'm not sure to what degree this was an established requirement vs them trying to avoid ambiguity, but my point is that this framing by Meta isn't novel. I'm not expressing a judgment on whether it's correct or if it's good.</p>
]]></description><pubDate>Fri, 21 Feb 2025 18:54:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43131410</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=43131410</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43131410</guid></item><item><title><![CDATA[New comment by kmod in "Procrastination and the fear of not being good enough"]]></title><description><![CDATA[
<p>I think people here might like Oliver Burkeman's books, where he talks about this stuff a lot. I loved his book "Four Thousand Weeks", and there is a new follow-up, "Meditations for Mortals", which I have not read yet but seems to be well-received.<p>He's one of the few people I've seen address what I think is the key difficulty with this sort of stuff: that you can think that you're addressing procrastination/perfectionism when actually you are engaging in it (with a target of fixing your procrastination/perfectionism). It's a difficult situation to break out of, because it seems like any effort to break out would necessarily have this sort of grasping, but I think he (and Buddhist meditation) talk a lot about that key challenge.</p>
]]></description><pubDate>Mon, 11 Nov 2024 02:37:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=42104196</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=42104196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42104196</guid></item><item><title><![CDATA[Prompt Caching in the API]]></title><description><![CDATA[
<p>Article URL: <a href="https://openai.com/index/api-prompt-caching/">https://openai.com/index/api-prompt-caching/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41715781">https://news.ycombinator.com/item?id=41715781</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 02 Oct 2024 00:00:22 +0000</pubDate><link>https://openai.com/index/api-prompt-caching/</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=41715781</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41715781</guid></item><item><title><![CDATA[New comment by kmod in "Llms.txt"]]></title><description><![CDATA[
<p>This reminds me of the Semantic Web, which was a movement explicitly about making the web more understandable to machines. I don't agree with the ideas, and I think a lot of other people were also skeptical, but I bring it up to say that some people take the other side of your argument rather seriously and that there's a lot of existing debate on the topic. Here's Tim Berners-Lee talking about this way back in 1999:<p>> I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A "Semantic Web", which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The "intelligent agents" people have touted for ages will finally materialize.<p>I quoted this from <a href="https://en.wikipedia.org/wiki/Semantic_Web" rel="nofollow">https://en.wikipedia.org/wiki/Semantic_Web</a> since the original reference was a book that is not openly accessible. Also, I think it's funny that he's talking about agents in exactly the same way that people do now.</p>
]]></description><pubDate>Wed, 04 Sep 2024 15:43:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=41447110</link><dc:creator>kmod</dc:creator><comments>https://news.ycombinator.com/item?id=41447110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41447110</guid></item></channel></rss>