<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ejiblabahaba</title><link>https://news.ycombinator.com/user?id=ejiblabahaba</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 09 May 2026 09:38:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ejiblabahaba" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ejiblabahaba in "Finding the differences in a series of power supplies"]]></title><description><![CDATA[
<p>There's basically no point. The desktop PSU is a solved problem: most designs are all about cost engineering, and the tiny sliver of higher-power, ultra-high-efficiency options isn't struggling with the current form factor.<p>In the data center, where power (and cooling) are the only significant OpEx, GaN point-of-load conversion is everywhere - commonly as a rack-distributed 48V-to-12V bus, or direct to processor Vdd (a 2% duty cycle is feasible with GaN thanks to fast on/off times). There was a while when GaN was used as part of the power factor correction for AC-to-DC in the server power supply, back when passing a 400VAC or 800VAC bus around made sense. I think these days it's mostly back to DC buses, and AC-to-DC happens farther back, in part because of widespread solar deployments and the desire to avoid DC->AC->DC double-conversion losses where possible. So maybe GaN gets used in the active secondary rectifiers for the bus-to-48V stage now too.</p>
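<p>To put a number on that 2% figure - a minimal sketch, assuming an ideal buck converter stepping the 48V rack bus straight down to a hypothetical ~1V processor core Vdd:<p><pre><code># Ideal buck converter in continuous conduction: D = Vout / Vin
v_in = 48.0    # rack-distributed bus voltage (V)
v_out = 1.0    # assumed processor core Vdd (V)

duty = v_out / v_in
print(f"duty cycle: {duty:.1%}")   # ~2.1%

# At an assumed 1 MHz switching frequency the on-time is only ~21 ns,
# which is why fast GaN on/off transitions matter at this ratio.
f_sw = 1e6
print(f"on-time: {duty / f_sw * 1e9:.0f} ns")
</code></pre>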
]]></description><pubDate>Thu, 07 May 2026 08:37:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=48046948</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=48046948</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48046948</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "How Motorola’s 2N2222 and 2N3904 transistors became the default NPNs"]]></title><description><![CDATA[
<p>Well, kinda. The value is in meeting the spec, yes, but the "certifiable" part is very often a subset of the actual spec. Much like in software, where someone will inevitably depend on every feature of a public API (including the defects), a lot of early military electronics depend on implementation details of their component processes, usually by accident. The cost for semiconductor companies isn't meeting the spec; it's keeping a production line from the 1960s running exactly the same way for a single customer who buys a handful of parts every year.<p>The problem with Chinese semiconductors isn't performance or meeting specs, at least not these days. It's counterfeits, the life expectancy of the source companies, and the obvious risk of basing your supply chain on a foreign political actor who can leverage that dependency against you.</p>
]]></description><pubDate>Mon, 20 Apr 2026 19:21:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47839254</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=47839254</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47839254</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "How Motorola’s 2N2222 and 2N3904 transistors became the default NPNs"]]></title><description><![CDATA[
<p>I wouldn't be so quick to make guarantees. There are cheaper spec-equivalent devices, sure, but a frequent and recurring feature of pre-2000s military hardware is an unexpected dependence on non-spec device characteristics. I've seen seemingly inconsequential cost-saving process tweaks identified as the root cause of novel testing failures on more than one ancient US military project.<p>Setting aside that approximately no one is shooting down quadcopters with missiles; that quadcopters and missiles are entirely different categories of weapon, with substantially different range, payload, and response-time targets; and that the availability logistics and acceptable failure rates of the paramilitary gangs and impoverished former Soviet belligerents from whom we are likely drawing our conclusions about drone-warfare economics sit maybe a step or two below the average US military procurement contract's requirements - and charitably interpreting your argument as a generalization that repurposed consumer-grade electronics offer a cost reduction over military-grade selections at equivalent performance: true in most modern cases. The US has plenty of cheap domestic options for seemingly pricey problem domains; just ask SpaceX. The state of the art in aerospace and defense routinely uses commercial products to great effect. To the extent that shelling out for a $60 military-grade 1960s transistor instead of a $0.06 commercial equivalent is a problem for the US military, it is downstream of enormous legacy Cold War capital investment in technology that was novel for its time but is comparatively brittle by modern standards. This is still a problem, to be clear, but a small one in the grand scheme of US military spending.</p>
]]></description><pubDate>Mon, 20 Apr 2026 18:54:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47838914</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=47838914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47838914</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Fraud investigation is believing your lying eyes"]]></title><description><![CDATA[
<p>Suppose an asteroid strikes the Earth and all human life becomes extinct. What power, specifically, has been transferred, and to where?</p>
]]></description><pubDate>Fri, 06 Feb 2026 20:54:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46918007</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=46918007</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46918007</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Bosch Unveils New Brake Technology"]]></title><description><![CDATA[
<p>I think the question was less about the efficacy of ABS, and more about the failure mode. Is it possible for the ABS system to "fail open" unintentionally, such that depressing the brake pedal has no effect whatsoever?</p>
]]></description><pubDate>Sat, 20 Sep 2025 18:36:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45316011</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=45316011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45316011</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Looking back at my transition from Windows to Linux"]]></title><description><![CDATA[
<p>My employer blocks access to Google Docs as part of our confidential information protection policy. They're certainly not the only ones. I'd hesitate to call on-premises file management "specialized needs" - rather, it's (still) the default, particularly if you take a peek outside of the software bubble.</p>
]]></description><pubDate>Sun, 24 Aug 2025 21:15:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45007854</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=45007854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45007854</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Here be dragons: Preventing static damage, latchup, and metastability in the 386"]]></title><description><![CDATA[
<p>Historically this was a huge concern, because not every manufacturer implemented their ESD protection properly; or, on occasion, the process technology meant that ESD protection would hinder the functionality of the device. This happened a lot in RF circuits, and to this day many RF instruments are extremely sensitive to ESD events. Board assembly was also far less automated in the early days of integrated circuits, so more human handlers - and more opportunities for ESD events - were anticipated.<p>Modern IC ESD protection is very effective against a few moderate-energy events distributed across different pins, and there are a few industry standards that help determine the required amount of caution for dealing with a particular IC (HBM, the human-body model, and CDM, the charged-device model, are common - targeted at human assembly procedures and things like triboelectric or inductive charge buildup). In the right climate, a single high-energy event is sometimes enough to degrade functionality or (rarely) completely destroy the device, so board assembly and semiconductor manufacturing facilities still require workers to use wrist straps, shoe grounders, mats, treated floors, climate control, etc. Some high-voltage GaN work I did years ago required ionizing blowers (basically a spark gap with a fan) because GaN gates are easy to destroy with gate overstress, and typical ESD protective solutions carry risks around unintended high-voltage contact. In another, embedded-focused lab, the only time I ever saw someone put on a wrist strap was for handling customer hardware returns. It really depends what you're working with, and in what environment.<p>More frequently (once or twice a year), I've had devices which exhibit symptoms of something being wrong at the inputs or outputs, but only on a specific pin or port. For outputs, symptoms include inadequate slew rate, an output that appears intermittently stuck, or higher-than-expected voltage noise (a non-exhaustive list). For inputs the symptoms are more complex: sometimes there's a manifestation at the outputs for amplifiers or other linear circuits, but feedback systems or digital systems might behave as though an input is stuck or toggling slowly, which is difficult to distinguish from other, more common errors. I've also directly been the cause of several ESD failures, but in those cases the test objective was to determine the failure thresholds of the system, so I'm not sure that counts.<p>I once had a customer hardware failure that was eventually traced back to electrical overstress damage on a single pin of an IC near the corner of a board - right where someone might put their thumb if they were holding the board in one hand. In the absence of a better explanation, I suggested this was an ESD failure due to a handling error. I never heard about it again, which is weak evidence in favor of a one-off ESD event.</p>
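<p>For a sense of scale on the HBM standard mentioned above - a minimal sketch, assuming the textbook first-order HBM network of a 100 pF capacitor discharged through 1.5 kΩ into the pin (real testers add parasitics on top of this):<p><pre><code># Human-body model (HBM) first-order discharge: i(t) = (V/R) * exp(-t/(R*C))
V = 2000.0    # 2 kV precharge, a common HBM test level
R = 1.5e3     # series resistance (ohms)
C = 100e-12   # body capacitance (farads)

tau = R * C         # decay time constant of the discharge
i_peak = V / R      # peak current delivered into the pin
print(f"tau = {tau*1e9:.0f} ns, peak current = {i_peak:.2f} A")
# -> tau = 150 ns, peak current = 1.33 A

# Energy the on-chip protection network has to absorb:
E = 0.5 * C * V**2
print(f"energy = {E*1e6:.0f} uJ")   # -> 200 uJ
</code></pre>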
]]></description><pubDate>Sun, 17 Aug 2025 21:21:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44935052</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=44935052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44935052</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "More than you wanted to know about how Game Boy cartridges work"]]></title><description><![CDATA[
<p>These parts are bonkers. The ringing on their own outputs with a few inches of trace or (heaven forbid) a connector is regularly sufficient to self-trigger the automatic direction reversal. They genuinely deserve the "experts only" label - they're close to unusable in exactly the situations where you'd be most inclined to reach for them.</p>
]]></description><pubDate>Wed, 23 Jul 2025 11:11:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44657848</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=44657848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44657848</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Fstrings.wtf"]]></title><description><![CDATA[
<p>Learned a few tricks that I'm sure are buried on fstring.help somewhere (^ for centering, # for 0x/0b/0o prefixes, !a for ascii). I missed the nested f-strings question because I've been stuck with 3.11 rules, where nested f-strings are allowed but require different quote characters (e.g. print(f"{f'{{}}'}") would work). I guess this got cleaned up (along with a bunch of other restrictions, like backslashes and newlines) in 3.12.<p>F-strings are great, but trying to remember the minute differences between string interpolation, old-style formatting with %, and new-style formatting with .format() is sort of a headache, and there are cases where switching between them with some regularity is unavoidable (custom __format__ methods, templating strings, logging, etc.). It's great that there are ergonomic new ways of doing things, which makes it all the more frustrating to regularly have to revert to older, less polished solutions.</p>
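<p>For reference, the tricks and the version quirk in question - the nested example assumes 3.11 quote rules (3.12+ also allows reusing the outer quote character inside):<p><pre><code># Format-spec tricks mentioned above
print(f"{'hi':^11}")       # '    hi     ' - ^ centers in the field width
print(f"{255:#x}")         # '0xff'        - # adds the 0x/0b/0o prefix
print(f"{'café'!a}")       # 'caf\xe9'     - !a applies ascii() conversion

# Nested f-strings under 3.11 rules: inner quotes must differ from outer
print(f"{f'{255:#b}'}")    # '0b11111111'

# The three formatting styles contrasted in the second paragraph:
name = "world"
print("hello %s" % name)           # old-style %
print("hello {}".format(name))     # new-style .format()
print(f"hello {name}")             # f-string interpolation
</code></pre>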
]]></description><pubDate>Sat, 19 Jul 2025 13:03:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44615163</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=44615163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44615163</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Air Traffic Control"]]></title><description><![CDATA[
<p>Because, unlike most professions, an air traffic controller is immediately and personally responsible for decisions where a slight mistake could instantly claim the lives of hundreds of people.</p>
]]></description><pubDate>Tue, 13 May 2025 05:48:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=43969917</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=43969917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43969917</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "An epic treatise on error models for systems programming languages"]]></title><description><![CDATA[
<p>Which language is this? I'm sure some people can piece it together from the hints, but I'm not one of them.</p>
]]></description><pubDate>Sat, 08 Mar 2025 11:26:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43299337</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=43299337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43299337</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "PCIe trouble with 4TB Crucial T500 NVMe SSD for >1 power cycle on MSI PRO X670-P"]]></title><description><![CDATA[
<p>For what it's worth, this post just helped me explain several years of failures to wake from sleep, across several different MSI-based machines, whenever I've connected them to an HDMI port on my TV. I think this debugging effort is interesting in its own right, and unlike 99% of the content on this website, it was directly and immediately useful to me. I doubt I'm the only one, either.</p>
]]></description><pubDate>Sat, 28 Dec 2024 15:23:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=42531675</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=42531675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42531675</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "We need visual programming. No, not like that"]]></title><description><![CDATA[
<p>As someone with a hardware background, I'll throw in my $0.02. The schematic capture elements for connecting up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like. Once you get past defining the block behaviors in HDL, instantiation can become tedious and error-prone in text, since the tools all kinda suck, with very little hinting or argument checking, and the modules can and regularly do have dozens of I/O arguments. Instead, it's often very easy to map the module inputs to schematic-level wires, particularly in situations where large buses can be combined into single fat lines, I/O types can be visually distinguished, etc. IDE keyboard shortcuts also make these signals easy to follow and trace as they pass through hierarchical organizations of blocks, all the way down to transistor-level implementations in many cases.<p>I've also always had an admiration for the Falstad circuit simulation tool[0], as the only SPICE-like simulator that visually depicts the magnitude of voltages and currents during simulation (and not just on graphs). I reach for it once in a while when I need to do something a bit bigger than I can trivially fit in my head, but not so complex that I feel compelled to fight a more powerful but significantly shittier-to-work-with IDE to extract an answer.<p>Schematics work really well for capturing information that's independent of time, like physical connections or common simple functions (summers, comparators, etc). Diagrams that include time sacrifice a dimension to show sequential progress, which is fine for things with very little changing state attached or where query/response is highly predictable. Sometimes animation helps restore the lost dimension for systems with time-evolution. But beyond trivial things that fit on an A4 sheet, I'd rather represent the time-evolution of system state with timing diagrams. I don't think there are many analogous situations in typical programming applications that call for timing diagrams, but they are absolutely foundational for digital logic applications and low-level hardware drivers.<p>[0]: <a href="https://www.falstad.com/circuit/" rel="nofollow">https://www.falstad.com/circuit/</a></p>
]]></description><pubDate>Mon, 15 Jul 2024 04:45:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=40965329</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=40965329</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40965329</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "We need visual programming. No, not like that"]]></title><description><![CDATA[
<p>A lot of these environments inherit a visual presentation style (ladder logic) that predates computers, and that works extremely well in electrical schematics for conveying asynchronous conditional behaviors to anyone, even people without much of a math background. There are also a lot of more advanced functions these days that you write in plain C inside a hierarchical block, mostly for things like motor control.</p>
]]></description><pubDate>Mon, 15 Jul 2024 04:17:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=40965241</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=40965241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40965241</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Uno: Create Beautiful Cross Platform .NET Apps Faster"]]></title><description><![CDATA[
<p>For Uno specifically: I do like the attempt to build out a WASM target, and it looks usable - that's one thing I can't get out of either WPF or Avalonia. Uno also looks to be in a much better state internally than even two years ago, which is promising.<p>Key things that need to improve:
- Documentation appears to have gotten better in the last few years, but there are still many sparse or underdocumented corners. This is my biggest reservation today.
- When I last used it (a few years ago), a lot of things weren't fully implemented - maybe this has changed? At the time, it added a lot of friction.</p>
]]></description><pubDate>Thu, 02 May 2024 16:33:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=40238160</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=40238160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40238160</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Uno: Create Beautiful Cross Platform .NET Apps Faster"]]></title><description><![CDATA[
<p>If Microsoft said "Windows-only, good luck with cross-platform," I'd accept it and move on. It's their prerogative to spend their resources enhancing their own platform.<p>Instead, I've been told a half-dozen times that there's a new cross-platform GUI option, and had I bought into any such endeavor, I would have been rugpulled by sudden deprecation or loss of focus. I think people who did buy into any cross-platform GUI framework from Microsoft in the last ten years justifiably feel like they've been hung out to dry, sometimes multiple times.<p>I don't complain about Elixir, Ruby, Go, or Rust GUI development, because I don't try to use those languages for GUI development, because their core leadership and marketing don't promote it as a primary use case. C#, and the .NET ecosystem at large, are THE premier Windows GUI development tools. When their marketing pivots to cross-platform GUI development, and they rugpull you a half-dozen times in a decade with half-finished and baffling experiments, it kinda smarts.<p>Most people developing desktop apps are probably okay with switching to web-based options, including potentially running their own standalone rendering executable on top of a web view or Chromium instance. But there's a significant minority who are building desktop apps because they want better performance and native visual integration/theming, who don't benefit from browser process isolation and the hairy JS event-loop situation, who don't want to serialize every user action in the GUI, and who saw what XAML set out to accomplish (GUI style templates that convert to framework code that could equally be hand-written, modified, or extended) and ran marathons with it. Maybe this will get better with WASM improvements in the next few years, but it doesn't change that the approach to GUI design is fundamentally different, and in a lot of ways needlessly harder, when you have to pretend your GUI isn't all running on the same machine.</p>
]]></description><pubDate>Thu, 02 May 2024 16:02:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=40237780</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=40237780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40237780</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Uno: Create Beautiful Cross Platform .NET Apps Faster"]]></title><description><![CDATA[
<p>Presumably because, much like the half-dozen other flexible cross-platform frameworks Microsoft and friends have gotten one-third of the way to a viable product before abandoning to chase the next shiny thing, it's riddled with bugs, about ten years behind the documentation of the more usable frameworks of the 2010s, and has basically zero third-party support from the likes of Infragistics and SyncFusion.<p>UWP left behind a shockingly large number of perfectly serviceable pieces of WPF, at a time when the emergent UI experience on Windows 8 was being written off by almost everyone, Windows Phone was DoA, and people were starting to realize they could just write web pages and run GUIs in the browser instead. It's been a long, bumpy, downhill ride ever since. The fact that Electron.NET plus Blazor is a serious UI suggestion from Microsoft these days should tell you everything you need to know.<p>I'm sure with enough effort it's usable, and maybe even nice in some ways. I did some proof-of-concept work with it two years ago and got maybe 50% of the way to where I wanted to be in 8 hours, but got stuck on styling issues for which there was limited documentation. In the end, I'm more confident these days in WPF + Avalonia if I really need cross-platform - even if there are comparable bugs and limited documentation, there's at least some momentum still behind the project. UWP, all three busted half-finished versions of WinUI, MAUI, Blazor + WebView2, Blazor + Electron.NET... even Avalonia, thanks to the weird decision to change styles to behave more like CSS... it all still struggles to be as usable as WPF.</p>
]]></description><pubDate>Thu, 02 May 2024 05:37:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=40233021</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=40233021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40233021</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "Google's First Tensor Processing Unit: Architecture"]]></title><description><![CDATA[
<p>This is frankly infeasible. Between the decades of trade secrets they would first need to discover, the tens or maybe hundreds of billions in capital needed to build their very first leading-edge fab, the decade or two it would take for any such business to mature into something functional, and the completely inconsequential volumes of devices they'd produce, they would probably be lighting half a trillion dollars on fire just to land, ten or more years from now, a few years behind where the leading edge sits today. The only reason leading-edge fabs are profitable today is decades of talent and engineering focused on producing general-purpose computing devices for a wide variety of applications and customers, often with those very same customers driving innovation independently in critical focus areas (e.g. Micron with chip-on-chip HDI yield improvements, Xilinx with inter-die communication fabric and multi-chip substrate design). TPUs will never generate the required volumes, or attract the necessary customers, to achieve remotely profitable economies of scale, particularly when Google also has to set an attractive price against its competitors.<p>If Google has a compelling enough business case, existing fabs will happily allocate space for their hardware. The TPU is not remotely compelling enough.</p>
]]></description><pubDate>Tue, 26 Mar 2024 05:56:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=39824627</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=39824627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39824627</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "The appendix is not, in fact, useless"]]></title><description><![CDATA[
<p>He didn't lose his brain mass; it was compressed into a thin shell against the inside of his skull.[0] Moreover, his IQ was 75 - I'm not sure ending up in the bottom 5% of human intelligence as a consequence of chronic hydrocephalus is a "normal" life.<p>[0] <a href="https://www.untrammeledmind.com/2018/02/so-his-brains-just-squished-rather-than-only-10-there-a-bonsai-brains/" rel="nofollow">https://www.untrammeledmind.com/2018/02/so-his-brains-just-s...</a></p>
]]></description><pubDate>Thu, 08 Feb 2024 07:25:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=39299294</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=39299294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39299294</guid></item><item><title><![CDATA[New comment by ejiblabahaba in "The Enchippening"]]></title><description><![CDATA[
<p>I don't agree with the premise.<p>Integrated circuit manufacturing scales well because the cost per transistor drops with every process improvement, so as process improvements accumulate, more value (or perhaps more productivity) per wafer is created. This strictly applies to digital circuitry - most analog circuitry requires a certain geometry or process technology that isn't available on a massively optimized modern digital process. Some analog technology improvements are made, but at a much slower pace. Moreover, a lot of analog redesigns come about as a means to abandon older, less cost-effective semiconductor technology - so the argument that cheap older tech is a boon to any industry outside of limited-volume, high-cost R&D is suspect.<p>Solar cost improvements are in large part a function of massive economies of scale coupled with cheap manufacturing costs and government subsidies from the primary country of origin (which country? Take a guess). I'm sure there have been improvements in a handful of solar material designs, but if you make a large enough volume of panels in a country with cheap labor and massive government subsidies, costs go down. It's not rocket science. To the extent that modern semiconductor technology has improved solar, maybe an argument can be made for improved digital inverter controllers, and for silicon carbide FETs maturing enough to make high-voltage panels efficient and ubiquitous in grid-scale deployments. But it's mostly volume, cheap labor, and government subsidies.<p>I don't know about biotech; the following is speculation. I think a lot of biotech was feasible 25 years ago in terms of the manufacturability of microfluidics or molecular-identification circuitry, but the cost of mass manufacture - and particularly the compute required to quickly recover signal from a highly noisy process - wasn't economical or simple to build until digital signal processing and general-purpose compute became cheap and fast enough to integrate alongside the biosensors, often as copackaged modules or generic coprocessors (ARM, RISC-V, etc.) that can be built on a ton of different processes and that export raw sensor data quickly enough to let much heavier-duty processors do the bulk of the work.<p>I think actual fabrication tech improves slowly, because at its core it's chemistry and materials science, which also improve slowly (see: batteries). Digital circuitry is a mind-blowing exception, thanks to the incredible stack of usable abstractions that can be built from the manufacture of two specific transistor recipes. But digital circuitry is so powerful that it becomes possible to abstract away massive chunks of otherwise difficult problems in software, firmware, or dedicated transistor-level algorithm implementations, which reduces the amount of physical-world improvement needed to make useful innovations.</p>
]]></description><pubDate>Thu, 25 Jan 2024 02:12:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=39125446</link><dc:creator>ejiblabahaba</dc:creator><comments>https://news.ycombinator.com/item?id=39125446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39125446</guid></item></channel></rss>