<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ctbellmar</title><link>https://news.ycombinator.com/user?id=ctbellmar</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 06 May 2026 18:11:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ctbellmar" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ctbellmar in "Accelerating Gemma 4: faster inference with multi-token prediction drafters"]]></title><description><![CDATA[
<p>I had the same experience with 31B. Runs well on a 4090 too!</p>
]]></description><pubDate>Wed, 06 May 2026 14:36:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=48036791</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=48036791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48036791</guid></item><item><title><![CDATA[New comment by ctbellmar in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>Glad it helps. As for narrow classifiers, it's decision-tree logic as you say, and better honed through trial and error than through over-engineering and theory. Cleverness comes from your own experience :)</p>
]]></description><pubDate>Wed, 15 Apr 2026 10:47:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47777257</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=47777257</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47777257</guid></item><item><title><![CDATA[New comment by ctbellmar in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>That's the idea - hence PlantLab, not CannaLab. Cannabis makes sense as the entry point because it's a cash crop with a big hobbyist scene, so there's enough interest to get real usage data early. But the goal is broader - tomatoes, grapes, whatever grows.<p>One crop at a time though. A so-so classifier across 50 species is way less useful than a really good one for the thing you're actually growing.</p>
]]></description><pubDate>Tue, 14 Apr 2026 05:24:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47761582</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=47761582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47761582</guid></item><item><title><![CDATA[New comment by ctbellmar in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>Thanks! Yeah, the single-species focus does a lot of the work. Under the hood it's not one big model - there's a cannabis verification gate, then routing into disease vs pest vs deficiency, then narrower classifiers from there. Each one has a simpler job so accuracy stays high.<p>Early on the photography thing was a real problem. Training data was mostly decent shots, then inference would come in as some blurry phone photo under purple LEDs. The result: confident misclassifications.<p>The fix wasn't clever - just more data that looks like how people actually take photos of their plants. Messy, badly lit, half the leaf out of frame. Once there was enough of that in the training set the models stopped caring about white balance. About 1.1 million augmented images now and light temperature just isn't a factor. No color card needed.<p>For tissue culture - I'd bet the multi-species part is what's killing you. I'd pick the single highest-value species, collect a probably-uncomfortable amount of well-labeled data for just that one, and see if things change. Right now you might not be able to tell what's a data problem vs a fundamental limitation, because the generalization overhead masks both.</p>
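<p>The staged routing described above can be sketched roughly like this. All type and function names here are hypothetical illustrations, not PlantLab's actual code - the point is just that each stage answers one narrow question before handing off:</p>

```go
package main

import "fmt"

// Prediction is a classifier's top label and its confidence.
// (Hypothetical type, not PlantLab's real API.)
type Prediction struct {
	Label      string
	Confidence float64
}

// routeDiagnosis sketches the staged pipeline: a species gate first,
// then a coarse category router, then a narrow classifier chosen by
// category. Each stage only has to answer one simple question.
func routeDiagnosis(gate, category Prediction) (string, error) {
	// Stage 1: verification gate - bail out early if it isn't
	// confidently the supported species.
	if gate.Label != "cannabis" || gate.Confidence < 0.90 {
		return "", fmt.Errorf("rejected by species gate: %s (%.2f)", gate.Label, gate.Confidence)
	}
	// Stage 2: route to the appropriate narrow classifier.
	switch category.Label {
	case "disease":
		return "disease-classifier", nil
	case "pest":
		return "pest-classifier", nil
	case "deficiency":
		return "nutrient-subclassifier", nil
	default:
		return "", fmt.Errorf("unknown category: %s", category.Label)
	}
}

func main() {
	next, err := routeDiagnosis(
		Prediction{"cannabis", 0.97},
		Prediction{"deficiency", 0.88},
	)
	fmt.Println(next, err)
}
```

<p>The gate threshold (0.90) is made up here; the real system presumably tunes per-stage thresholds against its confusion matrix.</p>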
]]></description><pubDate>Tue, 14 Apr 2026 05:23:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47761573</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=47761573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47761573</guid></item><item><title><![CDATA[New comment by ctbellmar in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>PlantLab (<a href="https://plantlab.ai" rel="nofollow">https://plantlab.ai</a>) - AI plant health diagnosis for cannabis. It's an API, not an app [1]. Photo in, structured JSON out - condition, confidence, growth stage, nutrient lockout analysis. The response is for machines. Light burn at 0.92 confidence? Your controller dims the light. Calcium deficiency with excess potassium flagged as the lockout cause? Dosing pump adjusts.<p>I'm a software dev/data nerd, not a grower. I got interested because cannabis grow rooms are already full of automation - VPD controllers, pH/EC monitoring, dosing pumps, dimmable lights. But nothing was looking at the plant. Every sensor in the room measures the environment, not whether the plant is actually doing well. I wanted to add the eyes. And this seems to be a bounded-domain problem (i.e. a limited number of issues/conditions/pests vs. all plants everywhere).<p>ViT-based multi-stage pipeline that verifies it's cannabis, classifies condition or pest, then runs nutrient subclassification if needed. 30 classes, 18ms inference, Go API, ONNX Runtime. Trained on a little over a million images from grower friends. Classification was 80% of the lift. I also shipped a Home Assistant integration - camera takes a scheduled snapshot, PlantLab diagnoses, HA acts on the result. No human involved.<p>Recently the part that's been the most fun is the autoresearch loop. Between training runs the system looks at its own confusion matrix, finds which classes it's mixing up, audits those training images for bad labels, and tells me what to fix. It's not fully autonomous yet but it's getting there - the model is increasingly debugging its own training data.<p>Solo project, <100 users, free tier is 3/day.<p>[1] I built a simple Android app for those who want to just try it out; it's on the Google Play Store. Probably will make one for iOS too as time allows. 
<a href="https://play.google.com/store/apps/details?id=com.plantlab.plantlab_mobile">https://play.google.com/store/apps/details?id=com.plantlab.p...</a></p>
]]></description><pubDate>Mon, 13 Apr 2026 13:23:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47751614</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=47751614</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47751614</guid></item><item><title><![CDATA[New comment by ctbellmar in "AWS multiple services outage in us-east-1"]]></title><description><![CDATA[
<p>Various AI services (e.g. Perplexity) are down as well</p>
]]></description><pubDate>Mon, 20 Oct 2025 08:11:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45641135</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=45641135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45641135</guid></item><item><title><![CDATA[New comment by ctbellmar in "Augment Code's pricing is changing on October 20"]]></title><description><![CDATA[
<p>They sent out emails to existing customers yesterday, showing whether you're above, below, or at average usage. I'm above (no surprise), and I wonder if anyone on higher plans will find themselves under-utilizing their subscription - probably not.</p>
]]></description><pubDate>Tue, 14 Oct 2025 12:49:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45579409</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=45579409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45579409</guid></item><item><title><![CDATA[New comment by ctbellmar in "Show HN: Augment your dataset with LLM distillation techniques"]]></title><description><![CDATA[
<p>Pawel,<p>This looks promising! Is it for text-based models only at this time (i.e. no vision/image training)?</p>
]]></description><pubDate>Tue, 14 Oct 2025 12:48:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45579399</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=45579399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45579399</guid></item><item><title><![CDATA[New comment by ctbellmar in "Show HN: OWhisper – Ollama for realtime speech-to-text"]]></title><description><![CDATA[
<p>I wrote a tool that may be just the thing for you:<p><a href="https://github.com/bikemazzell/skald-go/" rel="nofollow">https://github.com/bikemazzell/skald-go/</a><p>Just speech to text, CLI only, and it can paste into whatever app you have open.</p>
]]></description><pubDate>Fri, 15 Aug 2025 14:16:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44912719</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=44912719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44912719</guid></item><item><title><![CDATA[New comment by ctbellmar in "Augment Code model comparison: GPT-5 vs. Claude Sonnet 4"]]></title><description><![CDATA[
<p>I wonder how well Augment's system will play with both of these. I recall that for some time, Cursor worked really well with Claude LLMs and less so with OpenAI's offerings like GPT and the o-series. So far, my own testing has hit a few timeouts and slower results on GPT-5. Nothing substantially different - I need to experiment with different languages and projects to pick out the use cases for GPT-5.</p>
]]></description><pubDate>Mon, 11 Aug 2025 16:53:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44866494</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=44866494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44866494</guid></item><item><title><![CDATA[New comment by ctbellmar in "GPT-5 vs. Sonnet: Complex Agentic Coding"]]></title><description><![CDATA[
<p>I know it's been mentioned a few times, but worth repeating: these LLMs tend to do noticeably better in their own native environments. Claude (Opus or Sonnet) in Copilot != Claude in Claude Code. Same applies to Cursor, Windsurf, Augment, etc. This likely has a lot to do with context manipulation (and compression), which affects the resulting output. I imagine GPT-5 will likewise do better in Codex vs a 3rd-party plugin/VS Code fork.</p>
]]></description><pubDate>Mon, 11 Aug 2025 16:50:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44866455</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=44866455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44866455</guid></item><item><title><![CDATA[New comment by ctbellmar in "The Claude party is almost over"]]></title><description><![CDATA[
<p>"Qwen3-Coder ... is the first open-source model I’ve been able to accept patches from. It isn’t by any means a Claude killer, but it feels like Claude 3.7 Sonnet, maybe even better."<p>Has anyone been able to set up Qwen3-Coder to run locally in agentic mode (via LM Studio or similar)? So far, I have only seen it work as chat via the Continue plugin. It gives reasonable suggestions, and it is supposed to be able to call tools; I just haven't figured out how to make that happen yet.</p>
]]></description><pubDate>Mon, 11 Aug 2025 14:22:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44864424</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=44864424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44864424</guid></item><item><title><![CDATA[New comment by ctbellmar in "Ask HN: What's Your Take on Perplexity AI?"]]></title><description><![CDATA[
<p>I had some time for in-depth experiments with it this summer and was disappointed. It gets the surface-level details alright, but falls apart on any detailed work.<p>Examples that failed: 
- opening hours for POI (restaurants, tourist attractions, etc) - mostly made up
- GPS coordinates - produced results that were nearly 100% inaccurate
- finding contact info (e.g. phone, email) for specific government or public bodies - nearly 100% inaccurate<p>The issue with the above was mainly not a lack of results but rather fabricated ones. As in: here are the coordinates (that don't correspond to actual locations) or here are the phone numbers of such-and-such departments (that don't exist), creating more work to discover they are nonsensical vs. just giving a "no results found" message.</p>
]]></description><pubDate>Mon, 11 Aug 2025 14:17:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44864360</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=44864360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44864360</guid></item><item><title><![CDATA[New comment by ctbellmar in "Ask HN: What are you working on? (May 2025)"]]></title><description><![CDATA[
<p>WhatSignal, a WhatsApp <-> Signal relay, written in Go<p><a href="https://github.com/bikemazzell/whatsignal">https://github.com/bikemazzell/whatsignal</a><p>I'm working on a WhatsApp-to-Signal relay. I.e.: whenever someone sends you a WA message, it appears in your Signal. You can reply and it will go back to the original sender.<p>Why? I'm privacy conscious and don't fancy using a Meta product. But some of my friends/associates/family still insist on WhatsApp only. Running the WhatSignal service on my micro server behind a VPN allows me to communicate without having WhatsApp on my mobile.<p>Behind the scenes, it connects WAHA (<a href="https://github.com/devlikeapro/waha">https://github.com/devlikeapro/waha</a>) and signal-cli (<a href="https://github.com/AsamK/signal-cli">https://github.com/AsamK/signal-cli</a>). Still early stages, but getting closer to a workable state.</p>
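<p>The core bookkeeping for a relay like this is keeping the two-way association between a WhatsApp chat and its Signal thread, so a Signal reply can be routed back to the original sender. A minimal in-memory sketch (whatsignal presumably persists this state; these names are illustrative, not its actual types):</p>

```go
package main

import "fmt"

// relayMap keeps the WhatsApp-chat <-> Signal-thread pairing so a
// Signal reply can be delivered back to the original WhatsApp sender.
type relayMap struct {
	waToSignal map[string]string
	signalToWA map[string]string
}

func newRelayMap() *relayMap {
	return &relayMap{
		waToSignal: make(map[string]string),
		signalToWA: make(map[string]string),
	}
}

// link records the pairing the first time a WhatsApp message comes in.
func (r *relayMap) link(waChat, signalThread string) {
	r.waToSignal[waChat] = signalThread
	r.signalToWA[signalThread] = waChat
}

// replyTarget resolves where a Signal reply should go on the WhatsApp side.
func (r *relayMap) replyTarget(signalThread string) (string, bool) {
	wa, ok := r.signalToWA[signalThread]
	return wa, ok
}

func main() {
	m := newRelayMap()
	// Incoming WA message establishes the mapping...
	m.link("wa:+15551234567", "signal:thread-abc")
	// ...so a later Signal reply knows its destination.
	if target, ok := m.replyTarget("signal:thread-abc"); ok {
		fmt.Println("forward reply to", target)
	}
}
```

<p>In the real service this lookup would sit between the WAHA webhook handler and the signal-cli send call, with the map persisted across restarts.</p>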
]]></description><pubDate>Mon, 26 May 2025 12:05:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44096624</link><dc:creator>ctbellmar</dc:creator><comments>https://news.ycombinator.com/item?id=44096624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44096624</guid></item></channel></rss>