<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dstrbad</title><link>https://news.ycombinator.com/user?id=dstrbad</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 07:50:59 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dstrbad" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Ask HN: Lightweight GPU job queue for single-node setup?]]></title><description><![CDATA[
<p>I’m running experiments on a single machine with 1 GPU and looking for a simple way to queue jobs (basically a GPU-aware task spooler).<p>In the past I’ve used task-spooler, but it seems unmaintained now.<p>I don’t need anything distributed, just:
– queue jobs
– run one at a time (or manage GPU allocation)
– minimal setup / dependencies<p>I’ve looked at things like Slurm and Kubernetes-based setups, but they feel like overkill for this use case.<p>What are people here using in practice?<p>Custom scripts? Something like gflow/qup? Or is there a maintained equivalent to task-spooler?</p>
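For context, here's the kind of minimal thing I've hacked together myself — a sketch, not a maintained tool: a file-lock wrapper that serializes jobs so only one holds the GPU at a time (the lock path and GPU index here are arbitrary choices of mine).

```python
# Minimal single-GPU "queue" sketch: each caller blocks on an exclusive
# file lock, so jobs run one at a time in roughly FIFO order. Unix-only
# (fcntl); lock path and GPU index are arbitrary.
import fcntl
import os
import subprocess
import sys

def gpu_run(cmd, lock_path="/tmp/gpu0.lock", gpu="0"):
    """Block until the GPU lock is free, then run cmd pinned to that GPU."""
    with open(lock_path, "w") as lock:
        # Waiting callers queue up here until the current job releases the lock.
        fcntl.flock(lock, fcntl.LOCK_EX)
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu)
        return subprocess.run(cmd, env=env).returncode
    # Lock is released automatically when the file is closed.

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python gpu_run.py python train.py --lr 1e-3
    sys.exit(gpu_run(sys.argv[1:]))
```

It obviously has no job listing, persistence, or priorities — which is why I'm hoping something maintained exists.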
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47652325">https://news.ycombinator.com/item?id=47652325</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 05 Apr 2026 18:23:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47652325</link><dc:creator>dstrbad</dc:creator><comments>https://news.ycombinator.com/item?id=47652325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47652325</guid></item><item><title><![CDATA[New comment by dstrbad in ""Ironies of Automation (1983)""]]></title><description><![CDATA[
<p>I uploaded this to Fermat's Library because I think it's the most relevant five-page paper in CS right now, even though it has nothing to do with software.<p>Bainbridge's argument about process-control operators maps almost perfectly onto AI-assisted software engineering:<p>The operator is asked to monitor a system that was automated because it does the job better than them. That's every engineer reviewing AI-generated PRs: you're expected to verify decisions you couldn't have produced at that speed yourself.<p>Manual and cognitive skills decay without practice. Her plant operators couldn't smoothly control processes they'd stopped doing by hand, and we're building the same dynamic with junior engineers who never debug without an agent.<p>The designer automates away the "unreliable human" and leaves them the residual tasks: the hardest, most context-dependent edge cases, handed to someone the system has progressively deskilled, at the exact moment the system fails.<p>Her sharpest observation: the most reliable automated systems demand the greatest investment in human training, because their operators get the least practice.
We should be thinking hard about what this means for how we train engineers, structure code review, and allocate work between humans and AI tools.</p>
]]></description><pubDate>Fri, 27 Mar 2026 08:09:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47540112</link><dc:creator>dstrbad</dc:creator><comments>https://news.ycombinator.com/item?id=47540112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47540112</guid></item><item><title><![CDATA["Ironies of Automation (1983)"]]></title><description><![CDATA[
<p>Article URL: <a href="https://fermatslibrary.com/p/028c7a80">https://fermatslibrary.com/p/028c7a80</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47540102">https://news.ycombinator.com/item?id=47540102</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 27 Mar 2026 08:08:27 +0000</pubDate><link>https://fermatslibrary.com/p/028c7a80</link><dc:creator>dstrbad</dc:creator><comments>https://news.ycombinator.com/item?id=47540102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47540102</guid></item><item><title><![CDATA[Show HN: ML accelerator on a RISC-V FPGA SoC – zero-cycle matmul, boots Linux]]></title><description><![CDATA[
<p>Article URL: <a href="https://dstrbad.substack.com/p/building-an-ml-accelerator-from-scratch">https://dstrbad.substack.com/p/building-an-ml-accelerator-from-scratch</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47456957">https://news.ycombinator.com/item?id=47456957</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 20 Mar 2026 16:31:40 +0000</pubDate><link>https://dstrbad.substack.com/p/building-an-ml-accelerator-from-scratch</link><dc:creator>dstrbad</dc:creator><comments>https://news.ycombinator.com/item?id=47456957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47456957</guid></item></channel></rss>