<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Tobba_</title><link>https://news.ycombinator.com/user?id=Tobba_</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 08:43:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Tobba_" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Tobba_ in "F-35 Program Cutting Corners to “Complete” Development"]]></title><description><![CDATA[
<p>"swarm" means large numbers by definition, otherwise you have... well, plain-old coordination. A large number of F-35-sized aircraft flies in the face of economics quite a bit (so I'm sure the US military loves the idea).</p>
]]></description><pubDate>Thu, 30 Aug 2018 20:02:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=17880223</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17880223</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17880223</guid></item><item><title><![CDATA[New comment by Tobba_ in "F-35 Program Cutting Corners to “Complete” Development"]]></title><description><![CDATA[
<p>The idea of drone swarms doesn't go well together with aerodynamics and basic physical intuition. If you shrink an aircraft down, the aerodynamic cross-section (i.e. the drag force) scales with the area (scale^2), but your engine thrust is going to drop roughly by the decrease in volume (scale^3).<p>So you end up losing maximum airspeed <i>and</i> fuel efficiency (in terms of the mass you're moving) the smaller you go. Unless the drones in your swarm were really big, it doesn't work out.<p>Although, I imagine we'll see some smaller, unmanned jet fighters in the future (assuming someone figures out how to control something like that remotely, or autonomously). A smaller aircraft has the advantage of a smaller radar cross-section and being more difficult to hit. Doing away with the pilot cuts out a lot of weight and frees up room for a larger engine and fuel tank, offsetting the downsides of the smaller size somewhat. There should be a sweet spot where that works out.</p>
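The square-cube argument above can be sketched numerically — a toy model only, with made-up relative units, just to show the trend:

```python
# Square-cube law sketch: drag scales with frontal area (scale^2),
# available thrust with engine volume (scale^3), so the thrust-to-drag
# ratio shrinks linearly with the aircraft. Illustrative numbers only.
def relative_thrust_to_drag(scale: float) -> float:
    drag = scale ** 2      # aerodynamic cross-section ~ area
    thrust = scale ** 3    # engine output ~ volume
    return thrust / drag   # simplifies to just `scale`

for s in (1.0, 0.5, 0.1):
    print(f"scale {s}: thrust/drag relative to full size = "
          f"{relative_thrust_to_drag(s):.2f}")
```

At one-tenth scale the drone keeps only a tenth of the full-size thrust-to-drag ratio, which is the "losing airspeed and fuel efficiency" point above.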
]]></description><pubDate>Thu, 30 Aug 2018 17:55:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=17878983</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17878983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17878983</guid></item><item><title><![CDATA[New comment by Tobba_ in "24-core CPU and I can’t type an email"]]></title><description><![CDATA[
<p>On Win10 you need to have enough swap space on an SSD; HDD I/O latency is totally kneecapped, seemingly due to blind usage of NCQ (the AHCI driver was replaced after Win7), so it chokes out entirely when combined with the even more broken swap manager.<p>"Enough" meaning "infinite", because you're going to have <i>something</i> leaking all that memory. Task manager doesn't even show memory usage by default (add the "commit size" field), only the working set (and the swapping is insanely aggressive, so memory leaks just don't show up). Nobody seems to actually check that; even built-in stuff like Windows Update leaks pretty badly.<p>Oh, and the memory accounting doesn't even work.</p>
]]></description><pubDate>Fri, 17 Aug 2018 10:54:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=17781793</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17781793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17781793</guid></item><item><title><![CDATA[New comment by Tobba_ in "A Dutch first: Ingenious BMW theft attempt"]]></title><description><![CDATA[
<p>Why the hell does RSA ever get used anymore, esp. in smartcards etc? It's been obsolete for like 10 years thanks to ECC, and ECC is <i>way</i> easier to implement (esp. 127-bit and 521-bit).</p>
]]></description><pubDate>Fri, 10 Aug 2018 12:33:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=17732677</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17732677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17732677</guid></item><item><title><![CDATA[New comment by Tobba_ in "JEP 335: Deprecate the Nashorn JavaScript Engine"]]></title><description><![CDATA[
<p>Wouldn't code instrumentation be trivially bypassable using an eval construct? esp. something like<p><pre><code>  []["constructor"]["constructor"]("while (true) { }")()
</code></pre>
Or does Nashorn have a mechanism for forcing the code to be static?</p>
]]></description><pubDate>Thu, 07 Jun 2018 06:35:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=17253760</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17253760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17253760</guid></item><item><title><![CDATA[New comment by Tobba_ in "Ditch the Batteries: Off-Grid Compressed Air Energy Storage"]]></title><description><![CDATA[
<p>Not <i>necessarily</i>, 1.3MJ (360 Wh; but screw that unit) of diesel isn't much of an explosive under normal circumstances. It really depends on the maximum power output under normal operation, and whether it can occur uncontrollably.</p>
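The unit conversion behind that figure is easy to check (1 Wh = 3600 J):

```python
# Sanity check: 360 Wh expressed in joules.
J_PER_WH = 3600               # 1 Wh = 1 W * 3600 s
energy_j = 360 * J_PER_WH
print(energy_j)               # 1296000 J
print(energy_j / 1e6)         # ~1.3 MJ, matching the figure above
```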
]]></description><pubDate>Thu, 24 May 2018 15:54:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=17145210</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17145210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17145210</guid></item><item><title><![CDATA[New comment by Tobba_ in "NSA encryption plan for ‘internet of things’ rejected by ISO"]]></title><description><![CDATA[
<p>Good.</p>
]]></description><pubDate>Tue, 15 May 2018 15:29:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=17074671</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17074671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17074671</guid></item><item><title><![CDATA[New comment by Tobba_ in "Linux ate my RAM (2009)"]]></title><description><![CDATA[
<p>Windows 10 is borderline unusable due to it (it evicts RAM <i>very</i> aggressively to use as disk cache). Doesn't help that their IO scheduler is completely screwed up too (and they removed the ability to disable NCQ, so disk performance on HDDs is down the drain).</p>
]]></description><pubDate>Tue, 15 May 2018 10:26:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=17072917</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17072917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17072917</guid></item><item><title><![CDATA[New comment by Tobba_ in "Multiple OS Vendors Release Security Patches After Misinterpreting Intel Docs"]]></title><description><![CDATA[
<p>On that subject, I'm curious whether there is any CPU out there that sets the overflow flag incorrectly when computing (-1) - n when n is the most negative number (which negates to itself, so implementing subtraction by simply negating the RHS and adding will screw up the flags).</p>
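A small simulation of why that corner case bites — 32-bit two's complement modeled in Python (function names and structure are mine, not any real CPU's): the correct flag for (-1) - INT_MIN is OF=0 (the result, INT_MAX, is representable), but negate-and-add sets OF=1 because negating INT_MIN wraps back to INT_MIN and the subsequent add of two negatives produces a positive.

```python
MASK = 0xFFFFFFFF
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def to_signed(x):
    # Wrap an integer into signed 32-bit two's complement.
    x &= MASK
    return x - 2**32 if x >= 2**31 else x

def add_flags(a, b):
    # 32-bit add; OF is set when both operands share a sign
    # that the result doesn't (the usual addition rule).
    r = to_signed(a + b)
    of = (a >= 0) == (b >= 0) and (r >= 0) != (a >= 0)
    return r, of

def sub_correct(a, b):
    # True subtraction: OF iff the exact result is unrepresentable.
    exact = a - b
    return to_signed(exact), not (INT_MIN <= exact <= INT_MAX)

def sub_negate_and_add(a, b):
    # The naive implementation: negate the RHS (which wraps for
    # INT_MIN, since -INT_MIN == INT_MIN), then add.
    return add_flags(a, to_signed(-b))

print(sub_correct(-1, INT_MIN))         # (2147483647, False)
print(sub_negate_and_add(-1, INT_MIN))  # (2147483647, True)  <- bogus OF
```

The result bits agree either way; only the overflow flag diverges, which is exactly the kind of discrepancy that would only show up on this one input.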
]]></description><pubDate>Thu, 10 May 2018 15:09:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=17039521</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17039521</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17039521</guid></item><item><title><![CDATA[New comment by Tobba_ in "Multiple OS Vendors Release Security Patches After Misinterpreting Intel Docs"]]></title><description><![CDATA[
<p>As far as I understand, what's happening is:<p>* There's an old feature which causes POP SS/MOV SS instructions to delay all interrupts until the next instruction has executed, to safely allow changing both SS and SP without an interrupt firing in between on a bad stack.<p>* If such an instruction itself causes an interrupt (by triggering a memory breakpoint through the debug registers), it is delayed (as intended).<p>* The delayed interrupt will fire after the second instruction <i>even if the second instruction disabled interrupts</i>.<p>* By means of the above, a MOV SS instruction triggering a #DB followed by an INT n instruction will cause the #DB exception to fire before the first instruction of the interrupt handler, even though this should be impossible (as entering the handler sets IF=0, disabling interrupts).<p>* The OS #DB handler assumes GS has been fixed up by the previous interrupt handler, which is now under user control.</p>
]]></description><pubDate>Thu, 10 May 2018 13:23:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=17038578</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17038578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17038578</guid></item><item><title><![CDATA[New comment by Tobba_ in "Multiple OS Vendors Release Security Patches After Misinterpreting Intel Docs"]]></title><description><![CDATA[
<p>To be fair, Intel docs are so consistently gibberish that they might as well be classified as a separate language (similar to English, but with only a quarter of the information density).<p>In this case it seems they just didn't properly specify a piece of insane behaviour, though. Hell, I'd consider it an outright CPU bug if I'm reading this right. Seemingly there's a "feature" where loading SS causes interrupts to be delayed until after the next instruction, even if the next instruction disables interrupts - so you can cause an interrupt to fire on the first instruction of the handler (where it should be impossible).</p>
]]></description><pubDate>Thu, 10 May 2018 13:03:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=17038463</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=17038463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17038463</guid></item><item><title><![CDATA[New comment by Tobba_ in "C Is Not a Low-Level Language"]]></title><description><![CDATA[
<p>It might still be possible. The JVM and .NET both have their speed annihilated by their awful choice of memory model.</p>
]]></description><pubDate>Tue, 01 May 2018 19:42:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=16970711</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16970711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16970711</guid></item><item><title><![CDATA[New comment by Tobba_ in "Mathematics I Use (2012)"]]></title><description><![CDATA[
<p>The secret to linear algebra is homogeneous coordinates together with the exterior algebra, plus matrices sprinkled in. If you use those for everything, all your problems simply disappear.</p>
]]></description><pubDate>Fri, 20 Apr 2018 16:12:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=16885978</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16885978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16885978</guid></item><item><title><![CDATA[New comment by Tobba_ in "Fifty or Sixty Years of Processor Development for This?"]]></title><description><![CDATA[
<p>I'm not talking about the fundamentally misguided distributed-memory computing stuff, I mean "improve flexibility enough that you can bolt some additional units on as offload" (address translation in this case would take some work, though). The magic of presenting software with a more or less monolithic core is that you don't have that problem, since you can simply do it the usual way.<p>Also, I don't think the trouble with added complexity outside the hot path is any added latency; it's that it needlessly burns up the thermal budget. Not that raising the voltage is the best way of increasing frequency, but it's sure to do so.</p>
]]></description><pubDate>Wed, 04 Apr 2018 13:59:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=16754716</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16754716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16754716</guid></item><item><title><![CDATA[New comment by Tobba_ in "Fifty or Sixty Years of Processor Development for This?"]]></title><description><![CDATA[
<p>I've heard a few times that the Cell wasn't all that bad in terms of performance, just very difficult to program. Not sure how true that is, but ostensibly the usability problem is just a tooling issue. Probably not a tooling issue that can be solved short-term, though.</p>
]]></description><pubDate>Wed, 04 Apr 2018 09:46:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=16753310</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16753310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16753310</guid></item><item><title><![CDATA[New comment by Tobba_ in "Fifty or Sixty Years of Processor Development for This?"]]></title><description><![CDATA[
<p>It'll change if someone can manage to take the "central" out of the CPU internals. You don't necessarily need software to see anything other than a monolithic core, but having to plumb everything through one central execution unit is hugely inefficient, if anything due to the latency involved. For example, if you're performing an indirect load and hit DRAM while loading the pointer, that result has to be brought into the core, then all the way back to the memory controller the same way it came. So far that's just been worked around by throwing in bigger and bigger caches, but the size of first-level caches is at a dead end for now (due to needing physical proximity).<p>Heck, current x86 chips could be juiced quite a bit if you could take out the requirement for backwards compatibility. Instruction encoding is the obvious thing (not because it isn't hip and RISC, but because it's an absolute mess that a huge proportion of the chip's power has to be wasted on, and it's pretty space-inefficient due to how horribly allocated things are). Less obviously, you could remove things like the data stack instructions (which, at least on Intel, have a dedicated "stack engine" to optimize them) and the ability to read/write instruction memory directly (which creates a mess of self-modifying code detection to maintain correct behaviour, and complicates L1 cache coherency a bit). Trimming transistors reduces the power consumption, which in turn means you can raise the voltage without the chip melting, and clears up space in your critical data path.</p>
]]></description><pubDate>Wed, 04 Apr 2018 09:20:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=16753218</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16753218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16753218</guid></item><item><title><![CDATA[New comment by Tobba_ in "Fifty or Sixty Years of Processor Development for This?"]]></title><description><![CDATA[
<p>Personally I'd predict the opposite (and current) pattern: creating generic chips which can replace many ASICs / less generic chips. If you can produce a chip which can replace 10 other low/medium-volume designs, economies of scale will win out as long as you're not adding too much overhead. This has been driving FPGAs forward for quite some time, although they're inherently pretty inefficient in terms of die size. Plus, it also provides some logistical advantages in terms of supply chain fragility.<p>See also: how ridiculously cheap microcontrollers have gotten, and the current messy DRAM pricing (high-capacity chips used in phones sometimes ending up cheaper than low-performance/low-capacity chips).</p>
]]></description><pubDate>Wed, 04 Apr 2018 09:12:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=16753195</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16753195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16753195</guid></item><item><title><![CDATA[New comment by Tobba_ in "Fifty or Sixty Years of Processor Development for This?"]]></title><description><![CDATA[
<p>GaAs logic is probably happening at some point, just not yet. All improvements like that which would be incredibly expensive to develop will be held off on until all cheaper options have been exhausted. It does seem to be slowly moving though.</p>
]]></description><pubDate>Wed, 04 Apr 2018 09:01:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=16753148</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16753148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16753148</guid></item><item><title><![CDATA[New comment by Tobba_ in "Fifty or Sixty Years of Processor Development for This?"]]></title><description><![CDATA[
<p>Seems like most people here are bringing up speed-of-light problems, which <i>are</i> a concern, but that's not what stops you. The problem is that your yield goes down exponentially with die size, and binning them becomes a clusterfuck. The opposite direction of making <i>smaller</i> dies is fairly attractive, though. For example, AMD split Threadripper into multiple dies on an MCM and seem to be saving a fortune on it, at the expense of some die area for interconnects. That way they can test and bin dies individually and assemble an MCM of known-good dies from the same bin.<p>I remember reading that GPUs are getting to fairly monstrous die sizes though - and they're paying for it.</p>
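The exponential yield loss can be illustrated with a simple Poisson defect model (the defect density below is a made-up illustrative number, not a real process figure):

```python
import math

# Poisson yield sketch: the chance a die has zero fatal defects is
# exp(-area * defect_density), so yield drops exponentially with area.
def die_yield(area_cm2: float, defect_density: float = 0.5) -> float:
    # defect_density in defects per cm^2 (assumed, for illustration)
    return math.exp(-area_cm2 * defect_density)

print(f"one 8 cm^2 die:  {die_yield(8.0):.1%}")   # ~1.8%
print(f"one 2 cm^2 die:  {die_yield(2.0):.1%}")   # ~36.8%
```

Note the expected good silicon per wafer is the same either way (exp(-4) equals exp(-1)^4), but with four small dies you get to throw away only the bad ones and package known-good dies together, instead of betting on one flawless large die — which is the MCM trade-off described above.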
]]></description><pubDate>Wed, 04 Apr 2018 07:51:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=16752863</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16752863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16752863</guid></item><item><title><![CDATA[New comment by Tobba_ in "Fifty or Sixty Years of Processor Development for This?"]]></title><description><![CDATA[
<p>I think we're far from the ceiling on CPU performance so far, but we seem to have hit a (micro)architectural dead end. Currently a <i>lot</i> of time and transistors are spent simply shuffling data around the chip, or between the CPU and memory, while the actual computational units simply sit idle. Or similarly, units that sit idle because they can't be used for the current task, even if they <i>should</i> be - the FPUs on modern x86 cores are a pretty good example of this. FP operations are just fused integer/fixed-point operations, but it's been designed into a corner where it <i>has</i> to be a special unit to deal with all the crap quickly.<p>We've probably optimized silicon transistors to death though; that's why it's coming to a stop now. GaAs or SiGe are some of the alternatives there, although there are still quite a lot of advancements that simply aren't economical yet. For example, SOI processes at low feature sizes seem to be suitable for mass-produced chips now, but it hasn't made it out of the low-power segment yet. MRAM seems to be viable and might be able to provide us with bigger caches (in the same die area), but right now it's mainly used to replace small flash memories (plus some more novel things like non-volatile write buffers, but it's horrifically expensive). So we've probably got a few big boosts left there, but it's not gonna last forever.<p>The next obvious architectural advancement right now is asynchronous logic. In theory, it's superior in every way - power and timing noise immunity, speed isn't limited by worst-case timings, no/reduced unnecessary switching (i.e. lower power, meaning higher voltages without the chip melting itself). On paper, you run into some big problems on the data path - quasi-delay-insensitive circuits need a <i>lot</i> more transistors and wires, and the current alternative is to use a separate delay path to time the operations, which is a bit iffy. You do at least get rid of the Lovecraftian clock distribution tree that's getting problematic for current synchronous logic. In practice, the tools to work with it and the engineers/designers who know how to use them don't exist, and the architecture is entirely up in the air. So it's many years of development behind right now, and a huge investment that nobody really bothered with while they could just juice the microarchitecture and physical implementation.</p>
]]></description><pubDate>Wed, 04 Apr 2018 07:44:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=16752827</link><dc:creator>Tobba_</dc:creator><comments>https://news.ycombinator.com/item?id=16752827</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16752827</guid></item></channel></rss>