<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: noahgolmant</title><link>https://news.ycombinator.com/user?id=noahgolmant</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 13:36:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=noahgolmant" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by noahgolmant in "AI Slop Is Killing Online Communities"]]></title><description><![CDATA[
<p>The human has unique context. They may work in a niche domain, or they may have talked to people and observed an unsolved problem. Then they express a potential solution via OSS; it's like product sense. They share that with others who find it interesting. The code is a great way to encapsulate the idea, and it is usually the result of research and back-and-forth, not a single prompt. Even with that context, it would be way harder to think through or build a solution without AI.</p>
]]></description><pubDate>Thu, 07 May 2026 20:29:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=48054506</link><dc:creator>noahgolmant</dc:creator><comments>https://news.ycombinator.com/item?id=48054506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48054506</guid></item><item><title><![CDATA[New comment by noahgolmant in "AI slop is killing online communities"]]></title><description><![CDATA[
<p>I don't really know. Certainly we need a higher bar. The Kafka example in the post may be hyperbolic, but I agree it pollutes the space. We also can't swing the other way and rely completely on out-of-date proxies: if you ban AI code, there will be very little code to see in a year. It'll take time, but we'll arrive at new norms. We built semi-successful ways to filter content farms in the earlier days of the internet. The signal has to shift to "did they think hard about this problem," which has some observable properties, like how they articulate the problem or why it became important to them.</p>
]]></description><pubDate>Thu, 07 May 2026 20:21:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=48054392</link><dc:creator>noahgolmant</dc:creator><comments>https://news.ycombinator.com/item?id=48054392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48054392</guid></item><item><title><![CDATA[New comment by noahgolmant in "AI slop is killing online communities"]]></title><description><![CDATA[
<p>There has to be room for an AI-driven project that expresses a unique idea, even if there's no community around it yet. Someone has to express it, and from now on that idea will largely be implemented with AI.<p>> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.<p>I agree 100% with the novel-contribution aspect, but there's some nuance there.<p>For example, a project might have no active contributors, or it might not be something you can drop directly into your codebase. Neither of those is inherently bad.<p>As AI becomes responsible for more of the higher-level planning decisions, the value of an OSS project becomes less tied to visible community activity like PRs and issues.<p>I notice this in my own work a lot. I might not use a project's code directly, but I think about a problem differently as a result. I often point my agent at existing OSS projects as inspiration for how to solve a problem. The project provides indirect value by supporting architectural decisions, deployment approaches, etc. Unfortunately, OSS activity metrics don't capture this.</p>
]]></description><pubDate>Thu, 07 May 2026 19:38:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=48053845</link><dc:creator>noahgolmant</dc:creator><comments>https://news.ycombinator.com/item?id=48053845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48053845</guid></item><item><title><![CDATA[New comment by noahgolmant in "AI uses less water than the public thinks"]]></title><description><![CDATA[
<p>I would not extrapolate California results to the rest of the country. Other regions, like Northern Virginia or Dallas-Fort Worth, dominate current and future data center load, and they have significantly different water and land use trends. Of course agricultural water usage is higher in California; it's the top agricultural state in the country. Energy prices there are also much higher, resulting in a smaller data center presence. State regulations also determine water usage efficiency requirements, so the picture might differ in less regulated states like Texas.</p>
]]></description><pubDate>Sat, 02 May 2026 00:58:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47982229</link><dc:creator>noahgolmant</dc:creator><comments>https://news.ycombinator.com/item?id=47982229</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47982229</guid></item><item><title><![CDATA[New comment by noahgolmant in "iNaturalist"]]></title><description><![CDATA[
<p>You may want to look into the PMTiles format and tippecanoe, which efficiently produces pyramidal XYZ tile overviews of vector data. Sometimes this is also done server-side via the PostGIS ST_AsMVT function, or Martin.<p>For client-side rendering, deck.gl is quite good, as is a newer library called lonboard from DevelopmentSeed.</p>
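<p>As a rough sketch of the tippecanoe half of that (the filenames are hypothetical, and it assumes a tippecanoe version recent enough to write .pmtiles output directly):</p>
<pre><code># Rough sketch: build a PMTiles archive of pyramidal XYZ vector tile
# overviews from a GeoJSON file by shelling out to tippecanoe.
# Filenames are hypothetical; assumes a tippecanoe recent enough to
# emit .pmtiles directly via -o.
import subprocess

subprocess.run(
    [
        "tippecanoe",
        "-zg",                        # let tippecanoe guess a max zoom
        "--drop-densest-as-needed",   # thin dense features at low zooms
        "-o", "features.pmtiles",     # single-file PMTiles output
        "features.geojson",           # input vector data
    ],
    check=True,
)
</code></pre>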
]]></description><pubDate>Fri, 03 Apr 2026 18:47:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47630482</link><dc:creator>noahgolmant</dc:creator><comments>https://news.ycombinator.com/item?id=47630482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47630482</guid></item><item><title><![CDATA[Show HN: Build ML training datasets from large-scale satellite/aerial imagery]]></title><description><![CDATA[
<p>This is a small tool to label bounding boxes on satellite/aerial imagery and export training datasets for object detection.<p>Web maps like Google Earth work by stitching together lots of small images called tiles (this is why you see square patches as the page loads). They do this by querying a "tile server" API that reads from sharded raster files in cloud storage. In my day job we built infra to efficiently serve imagery through tile servers for map visualization, and I wanted to test out ML applications of that infra. It turns out this standard can also be leveraged to label and fine-tune models on map imagery.<p>This tool lets you point at any tile server URL, draw labeled bounding boxes, and export labels in COCO annotation format, plus download the underlying tile PNGs for training/inference. This can feed directly into standard computer vision frameworks like Ultralytics or PyTorch.<p>The workflow is hotkey-driven: draw a box, press 1-9 to assign a category (or N for negative examples). You can also drag-and-drop local GeoTIFFs.<p>I found this helpful for experimenting with fine-tuning SAM 3 on local aerial imagery. It was nice to zip up the PNGs + COCO file, drag and drop into a Colab notebook, and run inference.<p>There are other interesting applications of this I'd like to explore, like in-browser map-based segmentation/object detection with ONNX.</p>
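<p>To make the tile-server mechanics concrete, here's a minimal sketch of the standard XYZ ("slippy map") addressing scheme. The URL template is hypothetical, but any XYZ-style endpoint follows the same {z}/{x}/{y} pattern:</p>
<pre><code># Minimal sketch of the XYZ/slippy-map tile scheme: convert a lon/lat
# at a zoom level to tile indices, then fetch the tile PNG.
# The tile server URL template below is hypothetical.
import math
import requests

def lonlat_to_tile(lon: float, lat: float, zoom: int) -> tuple[int, int]:
    # Standard Web Mercator tile math: n x n tiles at a given zoom.
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

TEMPLATE = "https://tiles.example.com/{z}/{x}/{y}.png"  # hypothetical endpoint

z = 16
x, y = lonlat_to_tile(-122.27, 37.87, z)  # somewhere around Berkeley
resp = requests.get(TEMPLATE.format(z=z, x=x, y=y), timeout=10)
resp.raise_for_status()
with open(f"{z}_{x}_{y}.png", "wb") as f:
    f.write(resp.content)
</code></pre>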
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46279760">https://news.ycombinator.com/item?id=46279760</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 15 Dec 2025 20:04:41 +0000</pubDate><link>https://github.com/noahgolmant/label-tiles</link><dc:creator>noahgolmant</dc:creator><comments>https://news.ycombinator.com/item?id=46279760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46279760</guid></item><item><title><![CDATA[New comment by noahgolmant in "Ask HN: What Are You Working On? (December 2025)"]]></title><description><![CDATA[
<p>A tool to build labeled object detection training datasets from large-scale satellite/aerial imagery collections: <a href="https://github.com/noahgolmant/label-tiles" rel="nofollow">https://github.com/noahgolmant/label-tiles</a><p>Web maps usually join together lots of small images called tiles (this is why you see square patches as Google Earth/Maps loads). They do this by querying a "tile server" API. It turns out this standard can also be leveraged to label and fine-tune models on map imagery. In my day job we built infra to efficiently serve imagery through tile servers for map visualization, so I wanted to test out ML applications of that infra.</p>
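<p>For a sense of what the export looks like, here's a minimal sketch of a COCO-format annotation file, where each downloaded tile becomes an image entry and each drawn box an annotation (the filename and category are hypothetical):</p>
<pre><code># Rough sketch of a COCO-format export: tile PNGs keyed by z/x/y become
# "images", drawn boxes become "annotations" with [x, y, width, height]
# in pixel coordinates. Filename and category here are hypothetical.
import json

coco = {
    "images": [
        {"id": 1, "file_name": "16_10484_25328.png", "width": 256, "height": 256},
    ],
    "annotations": [
        # bbox is [x_min, y_min, width, height] per the COCO convention
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [32, 48, 64, 40], "area": 64 * 40, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "building"},
    ],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
</code></pre>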
]]></description><pubDate>Sun, 14 Dec 2025 21:00:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46266828</link><dc:creator>noahgolmant</dc:creator><comments>https://news.ycombinator.com/item?id=46266828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46266828</guid></item></channel></rss>