<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: zerojames</title><link>https://news.ycombinator.com/user?id=zerojames</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 22:13:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=zerojames" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Segment Anything 3]]></title><description><![CDATA[
<p>Article URL: <a href="https://aidemos.meta.com/segment-anything/">https://aidemos.meta.com/segment-anything/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45981155">https://news.ycombinator.com/item?id=45981155</a></p>
<p>Points: 9</p>
<p># Comments: 3</p>
]]></description><pubDate>Wed, 19 Nov 2025 15:59:00 +0000</pubDate><link>https://aidemos.meta.com/segment-anything/</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=45981155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45981155</guid></item><item><title><![CDATA[Detect, Track, and Identify Basketball Players with Computer Vision]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.roboflow.com/identify-basketball-players/">https://blog.roboflow.com/identify-basketball-players/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45431324">https://news.ycombinator.com/item?id=45431324</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 30 Sep 2025 21:13:19 +0000</pubDate><link>https://blog.roboflow.com/identify-basketball-players/</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=45431324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45431324</guid></item><item><title><![CDATA[GPT-5 for Vision: Results from 80 Real-World Tests]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.roboflow.com/gpt-5-vision-multimodal-evaluation/">https://blog.roboflow.com/gpt-5-vision-multimodal-evaluation/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44830978">https://news.ycombinator.com/item?id=44830978</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 07 Aug 2025 22:08:24 +0000</pubDate><link>https://blog.roboflow.com/gpt-5-vision-multimodal-evaluation/</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44830978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44830978</guid></item><item><title><![CDATA[How AI is helping advance the science of bioacoustics to save endangered species]]></title><description><![CDATA[
<p>Article URL: <a href="https://deepmind.google/discover/blog/how-ai-is-helping-advance-the-science-of-bioacoustics-to-save-endangered-species/">https://deepmind.google/discover/blog/how-ai-is-helping-advance-the-science-of-bioacoustics-to-save-endangered-species/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44825566">https://news.ycombinator.com/item?id=44825566</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 07 Aug 2025 15:14:25 +0000</pubDate><link>https://deepmind.google/discover/blog/how-ai-is-helping-advance-the-science-of-bioacoustics-to-save-endangered-species/</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44825566</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44825566</guid></item><item><title><![CDATA[Advancing State of the Art Object Detection (Again) with RF-DETR]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.roboflow.com/rf-detr-nano-small-medium/">https://blog.roboflow.com/rf-detr-nano-small-medium/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44669873">https://news.ycombinator.com/item?id=44669873</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 24 Jul 2025 12:31:21 +0000</pubDate><link>https://blog.roboflow.com/rf-detr-nano-small-medium/</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44669873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44669873</guid></item><item><title><![CDATA[New comment by zerojames in "Show HN: NYC Subway Simulator and Route Designer"]]></title><description><![CDATA[
<p>This is a very cool project!</p>
]]></description><pubDate>Tue, 08 Jul 2025 12:20:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44499298</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44499298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44499298</guid></item><item><title><![CDATA[New comment by zerojames in "Ask HN: Who is hiring? (July 2025)"]]></title><description><![CDATA[
<p>Roboflow | Solutions Architects, Implementation Engineer, Developer Advocate, Full Stack Engineers | Full-time (Remote, SF, NYC) | <a href="https://roboflow.com/careers?ref=whoishiring0725">https://roboflow.com/careers?ref=whoishiring0725</a><p>Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform [1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.<p>Over 1 million engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models [2]. We are pushing forward the CV ecosystem with open source projects like RF-DETR [3] and Supervision [4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog [5] and YouTube channel [6].<p>We have several openings available but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping us figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)<p>We're currently hiring for full-stack engineers to help build our product, field engineers that work directly with our customers to solve their business needs with computer vision, and several technical roles on the sales team.<p>We raised our Series B round at the end of last year, led by GV. 
With this round, we will continue building the open source tools, platform, and community so developers and enterprises can deploy computer vision applications to production. [7]<p>[1]: <a href="https://roboflow.com/?ref=whoishiring0725">https://roboflow.com/?ref=whoishiring0725</a><p>[2]: <a href="https://roboflow.com/universe?ref=whoishiring0725">https://roboflow.com/universe?ref=whoishiring0725</a><p>[3]: <a href="https://github.com/roboflow/rf-detr">https://github.com/roboflow/rf-detr</a><p>[4]: <a href="https://github.com/roboflow/supervision">https://github.com/roboflow/supervision</a><p>[5]: <a href="https://blog.roboflow.com/?ref=whoishiring0725">https://blog.roboflow.com/?ref=whoishiring0725</a><p>[6]: <a href="https://www.youtube.com/@Roboflow" rel="nofollow">https://www.youtube.com/@Roboflow</a><p>[7]: <a href="https://blog.roboflow.com/series-b/">https://blog.roboflow.com/series-b/</a></p>
]]></description><pubDate>Tue, 01 Jul 2025 16:58:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44435891</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44435891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44435891</guid></item><item><title><![CDATA[LLM Speedrunner: Eval for frontier models to reproduce scientific findings]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/facebookresearch/llm-speedrunner">https://github.com/facebookresearch/llm-speedrunner</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44396310">https://news.ycombinator.com/item?id=44396310</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 27 Jun 2025 12:34:35 +0000</pubDate><link>https://github.com/facebookresearch/llm-speedrunner</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44396310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44396310</guid></item><item><title><![CDATA[Anthropic Desktop Extensions: One-click local MCP installation in desktop apps]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/anthropics/dxt">https://github.com/anthropics/dxt</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44388727">https://news.ycombinator.com/item?id=44388727</a></p>
<p>Points: 4</p>
<p># Comments: 2</p>
]]></description><pubDate>Thu, 26 Jun 2025 16:07:37 +0000</pubDate><link>https://github.com/anthropics/dxt</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44388727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44388727</guid></item><item><title><![CDATA[GenAI Processors: Modular, Asynchronous, and Composable AI Pipelines]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/google-gemini/genai-processors">https://github.com/google-gemini/genai-processors</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44380822">https://news.ycombinator.com/item?id=44380822</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 25 Jun 2025 19:06:06 +0000</pubDate><link>https://github.com/google-gemini/genai-processors</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44380822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44380822</guid></item><item><title><![CDATA[AlphaGenome]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/google-deepmind/alphagenome">https://github.com/google-deepmind/alphagenome</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44377757">https://news.ycombinator.com/item?id=44377757</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 25 Jun 2025 14:29:38 +0000</pubDate><link>https://github.com/google-deepmind/alphagenome</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44377757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44377757</guid></item><item><title><![CDATA[OpenAI o3-pro: Multimodal and Vision Analysis]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.roboflow.com/openai-o3-pro-review/">https://blog.roboflow.com/openai-o3-pro-review/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44247859">https://news.ycombinator.com/item?id=44247859</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 11 Jun 2025 14:13:39 +0000</pubDate><link>https://blog.roboflow.com/openai-o3-pro-review/</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44247859</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44247859</guid></item><item><title><![CDATA[New comment by zerojames in "Ask HN: Who is hiring? (June 2025)"]]></title><description><![CDATA[
<p>Roboflow | Solutions Architects, Implementation Engineer, Developer Advocate, Full Stack Engineers | Full-time (Remote, SF, NYC) | <a href="https://roboflow.com/careers?ref=whoishiring0625">https://roboflow.com/careers?ref=whoishiring0625</a><p>Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform [1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.<p>Over 1 million engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models [2]. We are pushing forward the CV ecosystem with open source projects like RF-DETR [3] and Supervision [4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog [5] and YouTube channel [6].<p>We have several openings available but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping us figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)<p>We're currently hiring for full-stack engineers to help build our product, field engineers that work directly with our customers to solve their business needs with computer vision, and several technical roles on the sales team.<p>We raised our Series B round at the end of last year, led by GV. 
With this round, we will continue building the open source tools, platform, and community so developers and enterprises can deploy computer vision applications to production. [7]<p>[1]: <a href="https://roboflow.com/?ref=whoishiring0625">https://roboflow.com/?ref=whoishiring0625</a><p>[2]: <a href="https://roboflow.com/universe?ref=whoishiring0625">https://roboflow.com/universe?ref=whoishiring0625</a><p>[3]: <a href="https://github.com/roboflow/rf-detr">https://github.com/roboflow/rf-detr</a><p>[4]: <a href="https://github.com/roboflow/supervision">https://github.com/roboflow/supervision</a><p>[5]: <a href="https://blog.roboflow.com/?ref=whoishiring0625">https://blog.roboflow.com/?ref=whoishiring0625</a><p>[6]: <a href="https://www.youtube.com/@Roboflow" rel="nofollow">https://www.youtube.com/@Roboflow</a><p>[7]: <a href="https://blog.roboflow.com/series-b/">https://blog.roboflow.com/series-b/</a></p>
]]></description><pubDate>Mon, 02 Jun 2025 17:36:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44161192</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44161192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44161192</guid></item><item><title><![CDATA[RF-DETR: SOTA Real-Time Object Detection Model]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/roboflow/rf-detr">https://github.com/roboflow/rf-detr</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44160412">https://news.ycombinator.com/item?id=44160412</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 02 Jun 2025 16:22:37 +0000</pubDate><link>https://github.com/roboflow/rf-detr</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44160412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44160412</guid></item><item><title><![CDATA[Show HN: Subscribe Openly]]></title><description><![CDATA[
<p>Occasionally I run into a site that links directly to an XML file in their RSS/Atom links. Browsers don't intuitively handle these links; my ideal experience is to go directly into my web reader.<p>Subscribe Openly takes a web feed URL and creates a pretty page with direct links to several web readers (suggestions of more to add are welcome) [1].<p>You can then link to this page instead of directly to an XML link. As a fallback, the XML link will appear on the Subscribe Openly page.<p>I hope web readers become more integrated with browsers again, but for now I'm sure there are still things we can do to improve the experience of encountering feed links. [2]<p>The code is open source: <a href="https://github.com/capjamesg/subscribe-openly/" rel="nofollow">https://github.com/capjamesg/subscribe-openly/</a><p>[1]: Example: <a href="https://subscribeopenly.net/subscribe/?url=https://jamesg.blog/feeds/posts.xml" rel="nofollow">https://subscribeopenly.net/subscribe/?url=https://jamesg.bl...</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44160226">https://news.ycombinator.com/item?id=44160226</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 02 Jun 2025 16:01:50 +0000</pubDate><link>https://subscribeopenly.net/</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=44160226</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44160226</guid></item><item><title><![CDATA[Show HN: Vision AI Checkup, an Optometrist for VLMs]]></title><description><![CDATA[
<p>Evaluating the visual capabilities of language models is hard.<p>At one end of the evaluation spectrum, we have vibe checks, which, while useful for building intuition, are time-consuming to run across a dozen or more models. At the other end, we have benchmarks so large that they are intractable for most users.<p>Vision AI Checkup is a new tool for evaluating VLMs. The site is made up of hand-crafted prompts focused on real-world problems: defect detection, understanding how the position of one object relates to another, colour understanding, and more.<p>Our prompts are especially focused on industrial tasks -- serial number reading, assembly line understanding, and more -- although we're excited to add more general prompts.<p>The tool lets you see how models do across categories of prompts, and how different models do on a single prompt.<p>We have open sourced the codebase, with instructions on how to add a prompt to the assessment: <a href="https://github.com/roboflow/vision-ai-checkup">https://github.com/roboflow/vision-ai-checkup</a>. You can also add new models.<p>We'd love feedback and, also, ideas for areas where VLMs struggle that you'd like to see assessed!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43971984">https://news.ycombinator.com/item?id=43971984</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 13 May 2025 12:08:02 +0000</pubDate><link>https://visioncheckup.com</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=43971984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43971984</guid></item><item><title><![CDATA[Extending my edit web page bookmarklet]]></title><description><![CDATA[
<p>Article URL: <a href="https://jamesg.blog/2025/05/09/extending-my-edit-web-page-bookmarklet">https://jamesg.blog/2025/05/09/extending-my-edit-web-page-bookmarklet</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43935085">https://news.ycombinator.com/item?id=43935085</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 09 May 2025 09:15:52 +0000</pubDate><link>https://jamesg.blog/2025/05/09/extending-my-edit-web-page-bookmarklet</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=43935085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43935085</guid></item><item><title><![CDATA[New comment by zerojames in "Ask HN: Who is hiring? (April 2025)"]]></title><description><![CDATA[
<p>Roboflow | Solutions Architects, Implementation Engineer, Recruiter, Full Stack Engineers | Full-time (Remote, SF, NYC) | <a href="https://roboflow.com/careers?ref=whoishiring0425">https://roboflow.com/careers?ref=whoishiring0425</a><p>Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform [1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.<p>Over 1 million engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models [2]. We are pushing forward the CV ecosystem with open source projects like Autodistill [3] and Supervision [4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog [5] and YouTube channel [6].<p>We have several openings available but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping us figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)<p>We're currently hiring for full-stack engineers to help build our product, field engineers that work directly with our customers to solve their business needs with computer vision, and several technical roles on the sales team.<p>We just raised our Series B round, led by GV. 
With this round, we will continue building the open source tools, platform, and community so developers and enterprises can deploy computer vision applications to production. [7]<p>[1]: <a href="https://roboflow.com/?ref=whoishiring0425">https://roboflow.com/?ref=whoishiring0425</a><p>[2]: <a href="https://roboflow.com/universe?ref=whoishiring0425">https://roboflow.com/universe?ref=whoishiring0425</a><p>[3]: <a href="https://github.com/autodistill/autodistill" rel="nofollow">https://github.com/autodistill/autodistill</a><p>[4]: <a href="https://github.com/roboflow/supervision" rel="nofollow">https://github.com/roboflow/supervision</a><p>[5]: <a href="https://blog.roboflow.com/?ref=whoishiring0425">https://blog.roboflow.com/?ref=whoishiring0425</a><p>[6]: <a href="https://www.youtube.com/@Roboflow" rel="nofollow">https://www.youtube.com/@Roboflow</a><p>[7]: <a href="https://blog.roboflow.com/series-b/">https://blog.roboflow.com/series-b/</a></p>
]]></description><pubDate>Thu, 03 Apr 2025 06:35:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43565589</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=43565589</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43565589</guid></item><item><title><![CDATA[New comment by zerojames in "Notetime: Minimalistic notes where everything is timestamped"]]></title><description><![CDATA[
<p>This is a delightful app. Thank you for sharing! I like the user experience. I don't need to do anything except write and press Enter when I want to create a new line of text.</p>
]]></description><pubDate>Fri, 21 Mar 2025 13:09:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43435229</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=43435229</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43435229</guid></item><item><title><![CDATA[Show HN: RF-DETR, SOTA Real-Time Object Detection Model]]></title><description><![CDATA[
<p>The Roboflow ML team has been actively working on RF-DETR, a real-time, transformer-based object detection model architecture. The model architecture is now public and open source (Apache 2.0).<p>RF-DETR-large is the first real-time model to exceed 60 AP on the Microsoft COCO benchmark [1], while the base size achieves strong speeds compared to other models: on an NVIDIA T4 GPU, RF-DETR-Base achieves 160 FPS.<p>RF-DETR is designed to transfer well to real-world objects that aren’t usually found in common training datasets, such as those in industrial environments, wildlife settings, lab images, thermal imagery, and novel research areas. This is measured using the new RF100-VL benchmark [2] developed in partnership with researchers from CMU.<p>If you try out the model, let us know! We have a fine-tuning guide to get you started with training models. [3]<p>[1]: <a href="https://cocodataset.org" rel="nofollow">https://cocodataset.org</a><p>[2]: <a href="https://github.com/roboflow/rf100-vl" rel="nofollow">https://github.com/roboflow/rf100-vl</a><p>[3]: <a href="https://blog.roboflow.com/train-rf-detr-on-a-custom-dataset/">https://blog.roboflow.com/train-rf-detr-on-a-custom-dataset/</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43435194">https://news.ycombinator.com/item?id=43435194</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 21 Mar 2025 13:07:04 +0000</pubDate><link>https://github.com/roboflow/rf-detr</link><dc:creator>zerojames</dc:creator><comments>https://news.ycombinator.com/item?id=43435194</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43435194</guid></item></channel></rss>