<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bunnie</title><link>https://news.ycombinator.com/user?id=bunnie</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 05:42:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bunnie" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bunnie in "Google removes "Doki Doki Literature Club" from Google Play"]]></title><description><![CDATA[
<p>Anecdotally, from my first amendment activism days, my lawyers would always recommend that I stick to dead tree editions as much as possible. Literal book banning and/or burning has direct judicial precedent that is hard to contest.<p>The problem with moving out of the dead tree medium is that suddenly a whole host of alternative, untested legal theories can be thrown at you. Even if they are preposterous or 'obviously wrong' to a lay person, these alternative theories increase the cost of litigation and limit the quick remedies you can seek, because the judge now has to consider whether your situation differs from precedent.<p>If your adversary is well funded, they can just keep throwing 'but what about...' theories at the court for years and years, effectively achieving censorship without setting any meaningful legal precedent.<p>Then they can reuse this strategy again and again, and any time a litigant gets close to winning they settle out of court, avoiding clear legal precedent and thus preserving this 'legal purgatory' path (settlements do not create legal precedent).<p>Basically, they learned from their experience with books how to keep other media from getting the same level of effective legal protection.<p>It's a clever exploit of the legal system, but not great for actual justice.</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:49:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750684</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47750684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750684</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>The core ID definitely didn't need to be in a register, but the elapsed-clocks-since-reset counter is actually really handy. Having it in the hot path lets me build a captouch sensor with the BIO: the clock increment is 1.42ns, so even though the rise time of the pad is on the order of microseconds, you get plenty of resolution at that counting rate.<p>I think it will be interesting to see what people end up doing with it and what the pain points are. As you say, it's a v1 - with any luck there will be a v2, so we could treat the time starting now as a deliberation period for what goes into v2.<p>The good news is that it all compiles into an FPGA too, so proposed patches can be tested & vetted in hardware, albeit at a much slower clock rate.</p>
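<p>As a rough sketch of the resolution claim above - the 1.42ns tick is from the comment, but the 2 microsecond rise time is an assumed illustrative figure, not a measured one:

```python
# Back-of-envelope resolution estimate for a BIO-based captouch sensor.
# tick_ns is the elapsed-clocks register increment cited above (~700 MHz clock);
# rise_time_us is a hypothetical pad rise time, chosen only for illustration.
tick_ns = 1.42
rise_time_us = 2.0
counts = (rise_time_us * 1000.0) / tick_ns
print(f"~{counts:.0f} counts across the rise")  # ~1408 counts
```

Even a rise time of a couple of microseconds yields over a thousand counts, which is why a nanosecond-scale free-running counter in the hot path is enough for capacitive sensing.</p>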
]]></description><pubDate>Tue, 24 Mar 2026 15:37:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47504241</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47504241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47504241</guid></item><item><title><![CDATA[New comment by bunnie in "BIO – The Bao I/O Co-Processor"]]></title><description><![CDATA[
<p>Yah, it is - the text is first posted to the campaign, and then copied to my blog for long-term archival in a domain that I control, sans the sales pitch.</p>
]]></description><pubDate>Tue, 24 Mar 2026 14:03:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47502777</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47502777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47502777</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>It's hard to know for sure, because we don't have access to the PIO's implementation, but I suspect the PIO is "not small".<p>That being said - size isn't everything. At these small geometries you have gates to burn, and having access to multiple shifts in a single cycle really does help in a range of serialization tasks.</p>
]]></description><pubDate>Tue, 24 Mar 2026 13:58:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47502718</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47502718</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47502718</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>I suspect there are tricks to get higher rates, for sure. And hopefully once we see a library of applications forming, we can make informed decisions about what extensions and features would be necessary to enable the next level of I/O performance.</p>
]]></description><pubDate>Tue, 24 Mar 2026 13:56:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47502675</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47502675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47502675</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>USB 12Mbps is one of the envisioned core use cases - the Baochip doesn't have a host USB interface, so being able to emulate a full-speed USB host with a BIO core opens the possibility of things like a keyboard you can plug into the device. CAN is another big use case; once there is a CAN bus emulator, there's a bunch of things you can do. Another is 10/100Mbit ethernet - it's not fast, but it's good for extremely long runs (think repeaters for lighting protocols across building-scale deployments).<p>When considering the space of possibilities, I focused on applications where I could see actual products being sold that rely upon the feature. The problem with DVI is that while it's a super-clever demo, I don't see volume products going to market relying upon it. The moment you connect to an external monitor, you're going to want an external DRAM chip to run the sorts of applications that effectively utilize all those pixels. I could be wrong and have misjudged the utility of the demo, but if you do the analysis on the bandwidth and RAM available in the Baochip, I feel you could do a retro-gaming emulator with the chip, but you wouldn't, for example, be replacing a video kiosk with it. Running DOOM on a TV would be cool, but you're not going to sell a video game kit that runs DOOM and nothing else.<p>The good news is there's plenty of room to improve the performance of the BIO. If adoption of the core is robust, I can make the argument to the company that's paying for the tape-outs to give me actual back-end resources, and I can upgrade the cores to something more capable while improving the DMA bandwidth, allowing us to chase higher system frequencies. But realistically, I don't see us ever reaching a point where, for example, we're bit-banging USB high speed at 480Mbps - if only because the I/Os aren't full-swing 3.3V at that speed.</p>
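<p>A quick sketch of the cycle budget that makes full-speed USB plausible, assuming the 700MHz tested clock rate mentioned elsewhere in this thread:

```python
# Cycle budget for bit-banging USB full-speed (12 Mbps).
# The 700 MHz core clock is the manufacturer-tested figure cited in this
# thread; the budget scales linearly if the core runs slower.
core_hz = 700e6
usb_fs_bps = 12e6
cycles_per_bit = core_hz / usb_fs_bps
print(f"{cycles_per_bit:.1f} core cycles per full-speed bit")  # 58.3
```

Roughly 58 cycles per bit leaves room for NRZI decode and bit-stuffing in software; at 480Mbps high speed the same arithmetic gives under 2 cycles per bit, which is one way to see why that will never be bit-banged.</p>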
]]></description><pubDate>Tue, 24 Mar 2026 06:00:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47499057</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47499057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47499057</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>The FIFO is 8-deep. I failed to mention that explicitly in the article, I think - the depth is so automatic to me that I forget other people don't know it.<p>The deadlock possibilities with the FIFO are real. It is possible to check the "fullness" of a FIFO using the built-in event subsystem, which allows some amount of non-blocking backpressure, but it does incur more instruction overhead.</p>
]]></description><pubDate>Tue, 24 Mar 2026 05:46:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47498988</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47498988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498988</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>Correct - actually, most programs I've written for the BIO are in assembly.<p>The C compiler support is a relatively recent addition, mostly to showcase the possibilities of offloading high-level protocol handling into the BIO, and the tooling benefits of sticking with a "standard" instruction set.</p>
]]></description><pubDate>Tue, 24 Mar 2026 05:44:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47498980</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47498980</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498980</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>If there's a single rising edge on the bus that you can use as a quantum trigger, then the reads turn into a series of moves into a FIFO, and the response can be quite fast. The quantum-trigger-on-GPIO feature was provided to solve exactly the problem you described.</p>
]]></description><pubDate>Tue, 24 Mar 2026 05:43:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47498971</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47498971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498971</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>It depends a lot upon where the processing is happening. For example, you could do something where all the data is pre-processed and you're just blasting bits into a GPIO register with a pair of move instructions, in which case you could get north of 60MHz. But I think that's sort of cheating - you'll run out of pre-processed data pretty quickly, and then you have to take a delay to generate more.<p>The 25MHz number I cite as the performance expectation is "relaxed": I don't want to set unrealistic expectations for the core, because I want everyone to have fun and be happy coding for it - even relatively new programmers.<p>However, with a combination of overclocking and optimization, higher speeds are definitely on the horizon. Someone on the Baochip Discord thought up a clever trick I hadn't considered that could potentially get toggle rates into the hundreds of MHz. So there's likely a lot to be discovered about the core that I don't even know about, once it gets into the hands of more people.</p>
]]></description><pubDate>Mon, 23 Mar 2026 19:03:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47493733</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47493733</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47493733</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>As a side note about speed comparisons - please keep in mind that the faster speeds cited for the PIO are achieved through overclocking.<p>The BIO should also be able to overclock. It won't overclock as well as the PIO, for sure - the PIO stores its code in flip-flops, whose performance scales very well with elevated voltages. The BIO uses a RAM macro, which is essentially an analog part at its heart and responds differently to higher voltages.<p>That being said, I'm pretty confident the BIO can run at 800MHz in most cases. However, as the manufacturer I have to be careful about frequency claims: users can claim a warranty return on a BIO that fails to run at 700MHz, but not on one that fails to run at 800MHz. Thus whenever I cite the performance of the BIO, I stick to the number that's explicitly tested and guaranteed by the manufacturing process, that is, 700MHz.<p>Third-party overclockers can do whatever they want to the chip - of course, at that point, the warranty is voided!</p>
]]></description><pubDate>Mon, 23 Mar 2026 18:52:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47493601</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47493601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47493601</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>Agreed! The PIO is great at what it does. I drew a lot of inspiration from it.</p>
]]></description><pubDate>Mon, 23 Mar 2026 18:39:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47493439</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47493439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47493439</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>The idea of the wait-to-quantum register is that it gets you out of cycle-counting hell at the expense of sacrificing a few cycles as rounding error. But yes, for maximum performance you would be back to cycle counting.<p>That being said - one nice thing about the BIO being open source is that you can run the Verilog design in Verilator. The simulation shows exactly how many cycles are being used, and for what. So for very tight situations, the open-source RTL nature of the design opens up a new set of tools that were previously unavailable to coders. You can see an example of what it looks like here: <a href="https://baochip.github.io/baochip-1x/ch00-00-rtl-overview.html#simulation-with-verilator" rel="nofollow">https://baochip.github.io/baochip-1x/ch00-00-rtl-overview.ht...</a><p>Of course, there's a learning curve to any new tool, and Verilator's is pretty steep in particular. But I hope people give the Verilator simulations a try. It's kind of neat just to be able to poke around inside a CPU and see what it's thinking!</p>
]]></description><pubDate>Mon, 23 Mar 2026 18:38:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47493434</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47493434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47493434</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>Actually, the PIO does what it does very well! There is no "worse" or "better" - just different.<p>Because it does what it does so well, I use the PIO as the design-study comparison point. This requires taking a critical view of its architecture. Such a review doesn't mean its design is bad - we take it apart to see what we can learn from it. In the end, there are many things the PIO can do that the BIO can't, and vice-versa. For example, the BIO can't do the PIO's trick of bit-banging DVI video signals; but the PIO isn't going to be able to do protocol processing either.<p>In terms of area, the larger area numbers hold for both the ASIC flow and the FPGA flow. I ran the design through both sets of tools with the same settings, and the results are comparable. However, it's easier to share the FPGA results because the FPGA tools are NDA-free, so everyone can replicate them.<p>That being said, I also acknowledge in the article that there are likely clever optimizations in the actual PIO's design that I did not implement. Still, barrel shifters are a fairly expensive piece of hardware, whether in FPGA or ASIC, and the PIO requires several of them whereas the BIO has only one. The upshot is that the PIO can do multiple bit-shifts in a single clock cycle, whereas the BIO requires several cycles to do the same amount of bit-shifting. Again, neither good nor bad - just different trade-offs.</p>
]]></description><pubDate>Mon, 23 Mar 2026 18:24:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47493270</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47493270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47493270</guid></item><item><title><![CDATA[New comment by bunnie in "BIO: The Bao I/O Coprocessor"]]></title><description><![CDATA[
<p>Hello again HN, I'm bunnie! Unfortunately, time zones strike again...I'll check back when I can, and respond to your questions.</p>
]]></description><pubDate>Mon, 23 Mar 2026 17:58:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47492917</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47492917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47492917</guid></item><item><title><![CDATA[New comment by bunnie in "Baochip-1x: What it is, why I'm doing it now and how it came about"]]></title><description><![CDATA[
<p>To clarify - RISC-V is an architecture, and that architecture is an open specification. However, it only specifies things like what the instructions are and how they are encoded. It doesn't actually give you a CPU that does anything; it's just an abstraction for describing a CPU to a common standard.<p>Anyone is permitted to implement a RISC-V CPU, which involves coding something up in an RTL. The resulting RTL artifact may be open or closed source depending upon the developer's preference. In the case of the Vexriscv, that particular implementation is MIT licensed. There are other implementations that also carry MIT licenses, but because it is up to the core's implementer to pick a license, not all RISC-V cores are open source.<p>In fact, some of the most commercially successful RISC-V cores are closed source.</p>
]]></description><pubDate>Sun, 15 Mar 2026 12:12:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47386655</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47386655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47386655</guid></item><item><title><![CDATA[New comment by bunnie in "Baochip-1x: What it is, why I'm doing it now and how it came about"]]></title><description><![CDATA[
<p>Hmm...it's not <i>just</i> the speed. The I/O pads themselves are closed source because there's a lot of process magic in them - from the ring seals to the ESD protection. The foundries consider these part of what differentiates them from each other, so they protect those designs.<p>For example, many projects bitbang USB full-speed using plain old 3.3V I/Os, but by the spec the signals must have slew-rate limiting of a form that isn't found on standard I/Os. And if you're doing it right, you're not just reading USB's differential signals into two separate single-ended pads - you're actually subtracting the analog values to get the full benefit of differential signaling's common-mode rejection. Thus even a lower-speed USB PHY has some specialty circuits in it to handle these nuances.<p>As another example, RS232, by the spec, calls for a +/-3V to +/-15V driver, which is really specialized in the chip world and quite uncommon due to the negative voltages. PHYs that drive I/Os are one of the enduring pain points for open source PDKs - they are hard to develop, "boring" because they are "just wires", but absolutely essential to get right if you want to talk to anything interesting.</p>
]]></description><pubDate>Sun, 15 Mar 2026 06:27:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384836</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47384836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384836</guid></item><item><title><![CDATA[New comment by bunnie in "Baochip-1x: What it is, why I'm doing it now and how it came about"]]></title><description><![CDATA[
<p>thank you~~</p>
]]></description><pubDate>Sun, 15 Mar 2026 06:22:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384819</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47384819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384819</guid></item><item><title><![CDATA[New comment by bunnie in "Baochip-1x: What it is, why I'm doing it now and how it came about"]]></title><description><![CDATA[
<p>I imagine seL4 could be possible, but I haven't done any specific compatibility checking.<p>Current draw depends on the operating mode, etc. A dabao board with all its regulators and overhead draws around 30mA @ 5V. The CPU in "WFI sleep" (clock stopped, instant wake-up, all memory preserved) draws about 12mA @ 0.85V. There's a "deep sleep" mode (clock stopped, no memory preserved) that requires an effective reboot to come out of, where the draw is under 1mA @ 0.7V. These lower power modes require an external power management architecture that can vary the core voltage to reach the lower leakage states.<p>Comparatively speaking, I don't think the Baochip has strong low power numbers. I have always imagined it as more of a chip that gets stuck into a USB device - plugged into a host with a fairly ample power reserve, not running off a coin cell battery.</p>
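<p>To put the "not a coin cell" point in numbers - a quick sketch, assuming a typical ~225mAh CR2032 capacity (a common datasheet figure, not specific to any product here):

```python
# Why a coin cell is a poor fit: runtime in WFI sleep alone.
# cr2032_mah is an assumed typical capacity; wfi_ma is the ~12 mA
# WFI-sleep draw quoted above (active draw would be higher still).
cr2032_mah = 225
wfi_ma = 12
hours = cr2032_mah / wfi_ma
print(f"~{hours:.0f} hours of WFI sleep per cell")  # ~19 hours
```

Less than a day of standby before the cell is exhausted, ignoring the coin cell's limited peak-current capability, which would be a problem in its own right.</p>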
]]></description><pubDate>Sun, 15 Mar 2026 06:22:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384817</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47384817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384817</guid></item><item><title><![CDATA[New comment by bunnie in "Baochip-1x: What it is, why I'm doing it now and how it came about"]]></title><description><![CDATA[
<p>It all depends on the node. Masks in 130nm are maybe in the $10k's-$100k's range. Masks for the latest TSMC nodes might cost you $30-40 million per set. The masks are pretty much a modern marvel in their own right - I'd wager they are some of the most precisely manufactured human objects in existence.</p>
]]></description><pubDate>Sun, 15 Mar 2026 06:17:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384791</link><dc:creator>bunnie</dc:creator><comments>https://news.ycombinator.com/item?id=47384791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384791</guid></item></channel></rss>