<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: brucethemoose2</title><link>https://news.ycombinator.com/user?id=brucethemoose2</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 05 Apr 2026 17:48:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=brucethemoose2" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by brucethemoose2 in "The Reasonable Effectiveness of Using Old Phones as Servers"]]></title><description><![CDATA[
<p>Some phones have limiters to keep the battery at 60%-80%.<p>I believe <i>most</i> can do this with the right software.<p>It's not a fix, but it should extend the life considerably.</p>
]]></description><pubDate>Sat, 30 Mar 2024 17:11:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=39876619</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39876619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39876619</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The Reasonable Effectiveness of Using Old Phones as Servers"]]></title><description><![CDATA[
<p>Another benefit: modern smartphones have large GPUs, large media blocks, and fast RAM.<p><i>With the right software</i>, they can be a surprisingly powerful AI host or transcoding server.</p>
]]></description><pubDate>Sat, 30 Mar 2024 15:31:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=39875777</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39875777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39875777</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "Towards 1-bit Machine Learning Models"]]></title><description><![CDATA[
<p>Real world GPU performance is hugely influenced by hand optimization of the CUDA kernels.</p>
]]></description><pubDate>Fri, 29 Mar 2024 23:05:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=39870001</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39870001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39870001</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "Mazda’s rotary engine in the age of the electric car"]]></title><description><![CDATA[
<p>Power/weight is extremely high. A tiny Wankel will do the job, and weight is everything on cars.<p>It does prefer a narrow RPM band, which is fine.<p>Reliability is the biggest concern TBH, but maybe that's not a <i>huge</i> bummer if it's more of a backup/assistant engine.</p>
]]></description><pubDate>Fri, 29 Mar 2024 21:31:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=39869240</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39869240</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39869240</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "DBRX: A new open LLM"]]></title><description><![CDATA[
<p>Yeah, it's an unspoken but rampant thing in the LLM community. Basically no one respects licenses for training data.<p>I'd say the majority of instruct tunes, for instance, use OpenAI output (which is against their TOS).<p>But it's all just research! So who cares! Or at least, that seems to be the mood.</p>
]]></description><pubDate>Thu, 28 Mar 2024 21:04:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=39857290</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39857290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39857290</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "DBRX: A new open LLM"]]></title><description><![CDATA[
<p>Yeah I know, hence it's odd I found it kind of dumb for personal use. More so with the smaller models, which lost an objective benchmark I have to some Mistral finetunes.<p>And I don't <i>think</i> I was using it wrong. I know, for instance, that the Chinese language models are funny about sampling, since I run Yi all the time.</p>
]]></description><pubDate>Thu, 28 Mar 2024 17:47:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=39855004</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39855004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39855004</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "DBRX: A new open LLM"]]></title><description><![CDATA[
<p>I would note the actual leading models right now (IMO) are:<p>- Miqu 70B (general chat)<p>- Deepseek 33B (coding)<p>- Yi 34B (for chat over 32K context)<p>And of course, there are finetunes of all of these.<p>And there are some others in the 34B-70B range I have not tried (and some I have tried, like Qwen, which I was not impressed with).<p>Point being that Llama 70B, Mixtral, and Grok as seen in the charts are not what I would call SOTA (though Mixtral is excellent for the batch size 1 speed).</p>
]]></description><pubDate>Wed, 27 Mar 2024 21:04:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=39844574</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39844574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39844574</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "JPEG XL exceeds 20% of served images"]]></title><description><![CDATA[
<p>The conspiracy theorist in me says that's a low priority due to perverse incentives (namely selling more storage at a huge markup).<p>Another rationale is that the Apple ecosystem tends to not use JPEG by default anyway, right? It uses HEIC or something.</p>
]]></description><pubDate>Tue, 26 Mar 2024 16:16:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=39829582</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39829582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39829582</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "Show HN: Memories – FOSS Google Photos alternative built for high performance"]]></title><description><![CDATA[
<p>Being a "hero" open source dev for a project like that can require a lot of neuroticism.<p>Sometimes it works, but sometimes the project is just too big, I think.</p>
]]></description><pubDate>Thu, 21 Mar 2024 23:24:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=39785642</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39785642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39785642</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The AMD tinybox is on hold until we can build and run the firmware on our GPUs"]]></title><description><![CDATA[
<p>It's not either/or; you can use different vendors for different tasks.<p>tinygrad isn't in the realm of production-ready though, AFAIK.</p>
]]></description><pubDate>Wed, 20 Mar 2024 19:49:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=39771211</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39771211</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39771211</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The AMD tinybox is on hold until we can build and run the firmware on our GPUs"]]></title><description><![CDATA[
<p>The MI300 is the best accelerator you can buy, for many current workloads.<p>It's technically way more advanced. Not as outrageously priced as an H100 either.</p>
]]></description><pubDate>Wed, 20 Mar 2024 19:47:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=39771190</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39771190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39771190</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The AMD tinybox is on hold until we can build and run the firmware on our GPUs"]]></title><description><![CDATA[
<p>I think you are preaching to the choir, and AMD is not listening.<p>AMD would be selling 48GB 7900s or AI-only W7900s if they really wanted a consumer card ramp.<p>They don't. Not because they can't (OEMs would double up VRAM in a heartbeat without AMD lifting a finger, but AMD literally prevents them from doing so), but because AMD doesn't want that.</p>
]]></description><pubDate>Wed, 20 Mar 2024 19:40:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=39771125</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39771125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39771125</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The AMD tinybox is on hold until we can build and run the firmware on our GPUs"]]></title><description><![CDATA[
<p>I never followed Hotz, so perhaps I missed something cool. But I never understood the hype myself.</p>
]]></description><pubDate>Wed, 20 Mar 2024 19:32:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=39771035</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39771035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39771035</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "Key Stable Diffusion Researchers Leave Stability AI as Company Flounders"]]></title><description><![CDATA[
<p>Well, personally, SDXL just blows 1.5 out of the water for me. I haven't had a reason to even touch 1.5 in months.<p>But note that SDXL is really awful in automatic1111 or vanilla HF diffusers for me. You have to use something with proper augmentations (like ComfyUI, or Fooocus (which runs on ComfyUI)).</p>
]]></description><pubDate>Wed, 20 Mar 2024 19:30:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=39771007</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39771007</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39771007</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The AMD tinybox is on hold until we can build and run the firmware on our GPUs"]]></title><description><![CDATA[
<p>Used 3090 prices are absolutely outrageous.<p>And the 4090 MSRP was outrageous to begin with.</p>
]]></description><pubDate>Wed, 20 Mar 2024 19:27:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=39770963</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39770963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39770963</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The AMD tinybox is on hold until we can build and run the firmware on our GPUs"]]></title><description><![CDATA[
<p>I was talking about renting!<p>There are some boutique hosts like Hot Aisle serving MI300s (who I really should reach out to), but for the immediate future our little startup is stuck with the big cloud providers. No MI300s for us mere mortals, not even to rent.</p>
]]></description><pubDate>Wed, 20 Mar 2024 18:53:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=39770558</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39770558</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39770558</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The AMD tinybox is on hold until we can build and run the firmware on our GPUs"]]></title><description><![CDATA[
<p>But is this going to blow over in a few days? Again?<p>I can certainly appreciate frustration with the AMD stack, but to be blunt, I was not impressed with Hotz's YouTube rant from before.[1] It didn't give the impression of a stable framework, and this doesn't either.<p>Also (at least from the end user LLM inference side of things) ROCm is not nearly as unusable as it used to be. We would <i>certainly</i> be renting MI300s over A100s (or even H100s) if we could get any, and we use a number of different inference backends.<p>1: <a href="https://news.ycombinator.com/item?id=36193625">https://news.ycombinator.com/item?id=36193625</a></p>
]]></description><pubDate>Wed, 20 Mar 2024 18:31:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=39770324</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39770324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39770324</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "Key Stable Diffusion Researchers Leave Stability AI as Company Flounders"]]></title><description><![CDATA[
<p>SDXL is amazing.<p>The community is entrenched in 1.5 because that's what everyone is now familiar with, IMO.</p>
]]></description><pubDate>Wed, 20 Mar 2024 18:16:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=39770170</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39770170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39770170</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "The rise and fall of a Halifax man's illegal TV streaming empire"]]></title><description><![CDATA[
<p>It's more like the store being a literal hedge maze, with an entrance fee, and once you get to the actual products, they are outrageously priced junk.<p>And the store is price-fixing with nearby stores.<p>I think there's a difference between opportunistic crime and reasonably upstanding people being utterly frustrated with a market status quo.<p>Of course there is a ton of piracy that <i>is</i> just straight-up theft from perfectly convenient platforms, but the unacceptable, anticompetitive commercial platforms are the core that keeps the community going, I think.</p>
]]></description><pubDate>Tue, 19 Mar 2024 14:46:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=39756241</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39756241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39756241</guid></item><item><title><![CDATA[New comment by brucethemoose2 in "Show HN: Not sure you're talking to a human? Create a human check"]]></title><description><![CDATA[
<p>My "oh no" moment was a vision model reading this <i>perfectly</i>:<p><a href="https://abadguide.files.wordpress.com/2012/01/jh66.jpg?w=640" rel="nofollow">https://abadguide.files.wordpress.com/2012/01/jh66.jpg?w=640</a><p>Not an OCR program or anything specialized, just some generic (vision) llm that can run on my desktop with a dumb prompt...</p>
]]></description><pubDate>Tue, 19 Mar 2024 14:22:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=39756021</link><dc:creator>brucethemoose2</dc:creator><comments>https://news.ycombinator.com/item?id=39756021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39756021</guid></item></channel></rss>