<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mgsouth</title><link>https://news.ycombinator.com/user?id=mgsouth</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 07:18:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mgsouth" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mgsouth in "Synology Lost the Plot with Hard Drive Locking Move"]]></title><description><![CDATA[
<p>I've no experience with Synology and have no opinion regarding their motivations, execution, or handling of customers.<p>However...<p>Long long ago I worked for a major NAS vendor. We had customers with huge NAS farms [1] and extremely valuable data. We were, I imagine, very exposed from a reputation or even legal standpoint. Drive testing and certification was A Very Big Deal. Our test suites frequently found fatal firmware bugs, and we had to very closely track the fw versions in customer installations. From a purely technical viewpoint there's no way we wanted customers to bring their own drives.<p>[1] Some monster servers had triple-digit GBs of storage, or even a TB! (#getoffmylawn)</p>
]]></description><pubDate>Sat, 19 Apr 2025 17:46:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=43738040</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=43738040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43738040</guid></item><item><title><![CDATA[New comment by mgsouth in "Vibe Coding in Common Lisp"]]></title><description><![CDATA[
<p>One man's ongoing journey coaxing an LLM to write Common Lisp code. Bonus "AI generated" poem by Stanislaw Lem, and a five-paragraph story actually generated by an LLM: "The Unspeakable Syntax: A Tale of Lispian Horror." [3] Surprisingly entertaining.<p>Posts so far:<p>[1] <a href="https://funcall.blogspot.com/2025/03/vibe-coding-in-common-lisp.html" rel="nofollow">https://funcall.blogspot.com/2025/03/vibe-coding-in-common-l...</a><p>[2] <a href="https://funcall.blogspot.com/2025/03/vibe-coding-in-common-lisp-continued.html" rel="nofollow">https://funcall.blogspot.com/2025/03/vibe-coding-in-common-l...</a><p>[3] <a href="https://funcall.blogspot.com/2025/03/ai-silliness.html" rel="nofollow">https://funcall.blogspot.com/2025/03/ai-silliness.html</a><p>[4] <a href="https://funcall.blogspot.com/2025/03/vibed-into-non-functioning.html" rel="nofollow">https://funcall.blogspot.com/2025/03/vibed-into-non-function...</a></p>
]]></description><pubDate>Fri, 28 Mar 2025 17:03:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43507718</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=43507718</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43507718</guid></item><item><title><![CDATA[Vibe Coding in Common Lisp]]></title><description><![CDATA[
<p>Article URL: <a href="http://funcall.blogspot.com/2025/03/vibe-coding-in-common-lisp.html">http://funcall.blogspot.com/2025/03/vibe-coding-in-common-lisp.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43507717">https://news.ycombinator.com/item?id=43507717</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 28 Mar 2025 17:03:51 +0000</pubDate><link>http://funcall.blogspot.com/2025/03/vibe-coding-in-common-lisp.html</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=43507717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43507717</guid></item><item><title><![CDATA[New comment by mgsouth in "Why some DVLA digital services don't work at night"]]></title><description><![CDATA[
<p><p><pre><code>    > I think a new approach might be to ignore the specifics of the old system, implement a new system
</code></pre>
It doesn't work like that. When you're revamping large, important, fingers-in-everything-and-everybody's-fingers-in-it systems you can't ignore <i>anything</i>. A (presumably) hypothetical example is sorting names. Simple, right? You just plop an ORDER-BY in the SQL, or call a library function. Except for a few niggling details:<p>1. This is an old IBM COBOL system. That means EBCDIC, not UTF or even ASCII.<p>1.A Fine, we'll mass-convert all the old data from EBCDIC to UTF. Done.<p>1.A.1 Which EBCDIC character set? There are multiple variants, often based on nationality. Which ones are in use? Can you depend on all records in a dataset using the same one? (Hint: no.) Can you depend on all fields in a particular record using the same one? (Hint: no.) Can you depend on all records using the same one for a particular field? (Hint...) Can you depend on any sane method for figuring out what a particular field in a particular record in a particular dataset is using? Nope, nope, nope.<p>1.A.2 Looking at program A, you find it reads data from source B and merges it with source C. Source B, once upon a time, was from a region with lots of French names, and used code page 297 ('94 French). Except for those using 274 (old Belgium). And one really ancient set of data with what appears to be a custom code set only used by two parishes. Program A muddles through well enough to match up names with C, at least well enough for programs D, E, and F.<p>1.A.3 But it's not good enough for program G (when handling the Wednesday set of batches). G has to cross-reference the broken output from A with H to figure out what's what.<p>1.B You have now changed the output. It works for D and F, but now E is broken, and all the ad hoc, painstakingly hand-crafted workarounds in G are completely clueless.<p>1.C Oh, and there's consumer J that wasn't properly documented, you don't know exists, and handles renewals for 60- to 70-year-old pensioners who will be <i>very vocal</i> when their licenses are bungled.<p>2. Speaking of birth years, here's a mishmash of 2-, 4-, and even 3-digit years....</p>
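The code-page ambiguity above is easy to demonstrate with Python's built-in EBCDIC codecs. (A hedged illustration: cp037 and cp500 stand in for the code pages in the story, which Python doesn't ship; the point is only that identical bytes decode to different text.)

```python
# The same EBCDIC byte decodes differently depending on which code page a
# record actually used. Python ships several EBCDIC codecs; cp037 (US/Canada)
# and cp500 (International) disagree on a handful of punctuation slots.
raw = b"\x5a"                       # one byte from some legacy record
print(raw.decode("cp037"))          # -> "!"  under the US/Canada page
print(raw.decode("cp500"))          # -> "]"  under the International page

# Mass-converting with the wrong page silently corrupts names:
name = "O'BRIEN!".encode("cp037")
print(name.decode("cp500"))         # -> "O'BRIEN]"  same bytes, different text
```

Letters happen to occupy invariant slots, so the corruption only shows up in punctuation and national-use characters, which is exactly why it survives casual inspection for decades.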
]]></description><pubDate>Fri, 17 Jan 2025 06:10:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=42734571</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42734571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42734571</guid></item><item><title><![CDATA[How did they make cars fall apart in old movies (2017)]]></title><description><![CDATA[
<p>Article URL: <a href="https://movies.stackexchange.com/questions/79161/how-did-they-make-cars-fall-apart-in-old-movies">https://movies.stackexchange.com/questions/79161/how-did-they-make-cars-fall-apart-in-old-movies</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42679127">https://news.ycombinator.com/item?id=42679127</a></p>
<p>Points: 281</p>
<p># Comments: 102</p>
]]></description><pubDate>Mon, 13 Jan 2025 01:41:52 +0000</pubDate><link>https://movies.stackexchange.com/questions/79161/how-did-they-make-cars-fall-apart-in-old-movies</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42679127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42679127</guid></item><item><title><![CDATA[New comment by mgsouth in "Supernovae evidence for foundational change to cosmological models"]]></title><description><![CDATA[
<p>As another layman, no, I don't think so.<p>The "twin paradox" [1] is a prime example. The two twins depart from a common point in time and space, go about their separate travels, and meet again at a common point in space-time. Despite both twins always having the same constant speed of light, one of the twins takes a <i>shorter path through time</i> to get to the meeting point--one twin aged less than the other. In the paradox case, the shorter/longer paths are due to differences in acceleration. But the same thing happens due to differences in gravitation along two paths. (In fact, IIUC, acceleration and gravitational differences are the same thing.)<p>Just thinking about the math makes my head hurt, but it's apparent that two different photons can have taken very different journeys to reach us. For example, the universe was much denser in the dim past. Old, highly red-shifted photons have spent a lot of time slogging through higher gravitational fields. As a layman, that would suggest to me that, on average, time would have... moved slower for them?... they would be even older than naive appearances suggest. I don't think the actual experts are naive, so that's been accounted for, or there are confounding factors. But I could also imagine that more chaotic differences, such as supernovas in denser galactic centers vs. the suburbs, or from galaxies embedded in huge filaments, could be hard to calculate.<p>[1] <a href="https://en.wikipedia.org/wiki/Twin_paradox" rel="nofollow">https://en.wikipedia.org/wiki/Twin_paradox</a></p>
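The "shorter path through time" can be put in numbers for the special-relativity version of the paradox. A toy calculation (constant cruise speed, acceleration phases ignored; the gravitational case needs general relativity):

```python
import math

# Toy twin-paradox arithmetic: proper time elapsed for a twin cruising at
# a fixed fraction of c, per the time-dilation factor sqrt(1 - v^2/c^2).
def proper_time(coordinate_time_yr, v_fraction_of_c):
    return coordinate_time_yr * math.sqrt(1 - v_fraction_of_c ** 2)

# A 10-year round trip by the Earth clock, cruising at 0.8c:
# the traveling twin ages only about 6 years.
print(round(proper_time(10, 0.8), 2))  # -> 6.0
```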
]]></description><pubDate>Mon, 06 Jan 2025 01:24:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=42606622</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42606622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42606622</guid></item><item><title><![CDATA[New comment by mgsouth in "macOS menu bar app that shows how full the ISS urine tank is in real time"]]></title><description><![CDATA[
<p>You don't really want an agile toilet interface. This is more a waterfall project.</p>
]]></description><pubDate>Wed, 25 Dec 2024 00:49:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=42506071</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42506071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42506071</guid></item><item><title><![CDATA[New comment by mgsouth in "Using GPS satellites to detect tsunamis via ionospheric ionization waves"]]></title><description><![CDATA[
<p>OK, the "typically 10^12 TEC" vs. a +/- 1 TECU (10^16 TEC) disturbance was really bugging me. I think the slide has an error, or there's an apples/oranges issue. The +/- 1 TECU looks to be consistent, but the typical background level is "a few TECU to several hundred" [1]. A Wikipedia page shows the levels over the US being between 10 - 50 TECU on 2023-11-24, and says that "very small disturbances of 0.1 - 0.5 TEC units" are "primarily generated by gravity waves propagating upward from lower atmosphere." [2].<p>[1] <a href="https://www.swpc.noaa.gov/phenomena/total-electron-content" rel="nofollow">https://www.swpc.noaa.gov/phenomena/total-electron-content</a><p>[2] <a href="https://en.wikipedia.org/wiki/Total_electron_content" rel="nofollow">https://en.wikipedia.org/wiki/Total_electron_content</a></p>
]]></description><pubDate>Fri, 20 Dec 2024 13:44:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42470992</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42470992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42470992</guid></item><item><title><![CDATA[New comment by mgsouth in "Using GPS satellites to detect tsunamis via ionospheric ionization waves"]]></title><description><![CDATA[
<p>Pretty astounding, isn't it? I don't see a paper, but there was a webinar [1]. There's a technical synopsis at 8:00. The phenomenon they're measuring is actually significant. It's the total number of (free?) electrons between the satellite and the receiver. Typically it's about 10^12 electrons/m^3 (@8:00 in video). The disturbance from the 2011 earthquake and tsunami was, if I'm reading the movie/chart correctly, about +/- 1 TECU, which is 10^16 electrons/m^2 (@10:40). The water elevation may only be a few feet in open ocean, but it's over a vast area. That's a lot of power.<p>They're measuring it by looking for phase differences in the received L-band (~1.5 GHz) signals, rather than amplitude. That eliminates lots of noise. And they're looking for a particular pattern, which lets you get way below the noise floor. For example, the signal strength of the GNSS (GPS) signal itself might be -125 dBm, while the noise level is -110 dBm [2]. That means the signal is about 3x10^-13 _milliwatts_, and the noise is about 30 times larger. But by looking for a pattern the receiver gets a 43 dB processing boost, putting the effective signal well above the noise.<p>[1] <a href="https://www.youtube.com/watch?v=BEpZmRPPWFo" rel="nofollow">https://www.youtube.com/watch?v=BEpZmRPPWFo</a><p>[2] <a href="https://www.nxp.com/docs/en/brochure/75016740.pdf" rel="nofollow">https://www.nxp.com/docs/en/brochure/75016740.pdf</a></p>
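The dBm arithmetic behind the "signal below the noise floor" claim is a two-liner (figures taken from the comment above; the receiver model is obviously simplified):

```python
# dBm is decibels referenced to 1 milliwatt: power_mW = 10^(dBm/10).
def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

signal_dbm, noise_dbm, processing_gain_db = -125, -110, 43

print(f"{dbm_to_mw(signal_dbm):.1e} mW")            # -> 3.2e-13 mW
print(noise_dbm - signal_dbm)                        # -> 15 (dB below the noise)
print(signal_dbm + processing_gain_db - noise_dbm)   # -> 28 (dB above it after despreading)
```

A 15 dB deficit is a factor of about 32 in power, which is the "about 30 times larger" figure; the 43 dB of processing gain more than recovers it.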
]]></description><pubDate>Fri, 20 Dec 2024 06:06:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42468698</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42468698</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42468698</guid></item><item><title><![CDATA[New comment by mgsouth in "Crashing rockets and recovering data from damaged flash chips"]]></title><description><![CDATA[
<p>The red line is axial acceleration. The rocket rapidly slows to terminal velocity, reaching it at about 25 sec., then continues to slowly decelerate as t.v. decreases as the air gets thicker. [edit: *] The black line is estimated velocity, as integration of the acceleration. It gives up trying to calculate that at about 45 sec. Based on the barometer readings, it looks like it was going about 650 fps at impact.<p>What I find interesting is the 4-second delay before igniting the second stage. This is very inefficient compared to immediately igniting it when the first stage burns out. Max-Q (maximum dynamic pressure) issues? 30,000 ft permit ceiling?<p>Edit: * At 25 sec. it's still going up, so the velocity is decreasing due mainly to gravity, but the rocket is ballistic so the accelerometer is slightly negative due to air friction adding to the gravity deceleration. At about 40 sec. it has reached max altitude and velocity is zero. The accelerometer is still close to zero. Velocity picks up, as shown by the barometric altitude curve. Eyeballing it, at about 65 sec. it's reached terminal velocity, as shown by the barometer curve being pretty flat. The decrease after that is due to decreasing t.v.</p>
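The "estimated velocity" trace is just a running integral of the accelerometer samples. A minimal sketch of that idea (numbers are made up; real flight code must also handle sensor bias, orientation, and gravity compensation):

```python
# Integrate axial accelerometer samples into a velocity estimate using the
# trapezoidal rule, the same basic operation as the black line in the chart.
def integrate_velocity(v0, times, accels):
    v = [v0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        v.append(v[-1] + 0.5 * (accels[i] + accels[i - 1]) * dt)
    return v

# Toy ballistic coast: constant -32.2 ft/s^2 for 10 s, starting at 500 fps up.
ts = [i * 0.1 for i in range(101)]
vs = integrate_velocity(500.0, ts, [-32.2] * 101)
print(round(vs[-1], 1))   # -> 178.0  (500 - 32.2 * 10)
```

This is also why the estimate "gives up": once the accelerometer saturates or the attitude is unknown, the integral accumulates error with no way to correct it.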
]]></description><pubDate>Wed, 18 Dec 2024 21:36:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42455601</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42455601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42455601</guid></item><item><title><![CDATA[New comment by mgsouth in "EXT built-in panic cmd with user-gen FS"]]></title><description><![CDATA[
<p>[This is a link to a Mastodon infosec topic. I've completely editorialized the page title, so am posting as Tell HN instead.] [Edit: Well, I submitted it that way. HN stripped the "Tell HN:". The original page's title is pretty useless, so I don't know what the proper thing to do is.]<p>EXT (all versions) has a <i>filesystem</i> flag telling the kernel to panic on FS error. In the link, Will Dormann demonstrates inserting a USB key with a malicious image and instantly rebooting the PC.<p>In this case, the laptop had USB auto-mounting enabled. However, I believe this should apply to <i>any</i> mounts against user-modifiable or -specifiable sources: NFS, FUSE, user namespaces, even local files with the "-o loop" option. And the MOUNT(8) man page has this interesting tidbit:<p><pre><code>    Since util-linux 2.35, mount does not exit when user permissions are
    inadequate according to libmount’s internal security rules. Instead, it
    drops suid permissions and continues as regular non-root user. This
    behavior supports use-cases where root permissions are not necessary
    (e.g., fuse filesystems, user namespaces, etc).</code></pre></p>
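For reference, the flag being abused is the ordinary ext errors-behavior setting, settable with stock e2fsprogs. A sketch of reproducing it on a throwaway image (paths and sizes are arbitrary; no root needed when working on a plain file):

```shell
# Build a scratch ext4 image and set its on-disk errors behavior to "panic".
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=64 status=none
mkfs.ext4 -q -F /tmp/scratch.img
tune2fs -e panic /tmp/scratch.img
# The superblock now instructs the kernel to panic on any FS error:
dumpe2fs -h /tmp/scratch.img 2>/dev/null | grep -i 'errors behavior'
```

The other values are "continue" and "remount-ro"; the point of the demo is that the attacker, not the admin, gets to choose which one a hand-crafted image carries.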
]]></description><pubDate>Wed, 11 Dec 2024 20:40:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=42392697</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42392697</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42392697</guid></item><item><title><![CDATA[EXT built-in panic cmd with user-gen FS]]></title><description><![CDATA[
<p>Article URL: <a href="https://infosec.exchange/@wdormann/113625346544970814">https://infosec.exchange/@wdormann/113625346544970814</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42392696">https://news.ycombinator.com/item?id=42392696</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 11 Dec 2024 20:40:09 +0000</pubDate><link>https://infosec.exchange/@wdormann/113625346544970814</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42392696</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42392696</guid></item><item><title><![CDATA[New comment by mgsouth in "Tsunami Warning for Northern California"]]></title><description><![CDATA[
<p>For folks jumping on saying "that's not a carrier thing". <i>All</i> comms are a carrier thing. Whether it's ETWS, SMS, or IP, it's going through the carrier, they process it, and they do extensive traffic management. Carriers absolutely can and will inspect, proxy, aggregate, and do anything else that will tease out another few % of "free" capacity.<p>[Edit:] All too real scenario: Carrier knows about particular IP addresses and ports used by alert service. Carrier makes provision for separate path for it. Carrier also tries to shave said provisioning to the bone, calculates a worst-case, and adds 5% capacity. Which doesn't get updated when that particular app gets a 6% boost in subscriptions. Back in the old days the traffic management folks would be on top of it, but that's all been outsourced...</p>
]]></description><pubDate>Thu, 05 Dec 2024 20:00:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=42332055</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42332055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42332055</guid></item><item><title><![CDATA[New comment by mgsouth in "Tsunami Warning for Northern California"]]></title><description><![CDATA[
<p>It varies, a lot, and depends upon a lot of things. I'm not current on all the details, but many moons ago I was involved in push notification development.<p>* Notification path. iOS at the time was pretty protective of the user's battery, and had specific services you had to use. I imagine there's special treatment now for emergency communications.<p>* Phone state. How deeply asleep is it? Are there other background apps frequently contacting the mothership? Multiple apps can get their requests batched together, so as to minimize phone wake-ups. You can also benefit from greedy apps--VoIP apps, for example, might be allowed frequent check-ins (or have figured out a hack to get them), and the other apps might see a latency benefit.<p>* Garbage carriers. Hopefully emergency alerts have a separate path, but I've noticed my provider (who shall remain nameless but is a three-letter acronym with an ampersand in the middle) sometimes delays SMS messages by tens of minutes. (TBF, in my case there might also be a phone problem [Android], but since the nameless provider forced it on me when they went 4G-only they're still getting the blame.)<p>In your case, my money would be on the carrier. Pushing a notification to all phones in an area can be taxing, and cheaping out on infrastructure is very much a thing.<p>For docs, your best bet would be to go to the developer sites and pull up the "thou shalt..." rules, particularly regarding network activity, push notifications, and permitted background activities. And yeah, Apple was much more dictatorial, for good reasons.</p>
]]></description><pubDate>Thu, 05 Dec 2024 19:52:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=42331958</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42331958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42331958</guid></item><item><title><![CDATA[New comment by mgsouth in "The capacitor that Apple soldered incorrectly at the factory"]]></title><description><![CDATA[
<p>You can test ESR in-circuit, with caveats. Here's a good thread from EEVblog [1].<p>[1] <a href="https://www.eevblog.com/forum/beginners/is-there-any-way-to-test-capacitors-while-on-the-circuit-board/" rel="nofollow">https://www.eevblog.com/forum/beginners/is-there-any-way-to-...</a></p>
]]></description><pubDate>Thu, 28 Nov 2024 02:07:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=42261726</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42261726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42261726</guid></item><item><title><![CDATA[New comment by mgsouth in "The capacitor that Apple soldered incorrectly at the factory"]]></title><description><![CDATA[
<p>Totally believable if the debugging device was doing something with a serial port. I once hacked something together to interface a PC serial port to a Raspberry Pi. The PC serial is real-ish RS-232, with negative voltages. The Pi side was just 0/3.3V positive. I had a nice 18-volt power brick lying around, and just split its output down the middle--what was 0-volt ground was used as -9 volts, the middle voltage was now 0-volt ground, and the 18-V line was now +9 V.<p>At first everything seemed OK, but when I plugged a monitor into the Pi I Was Made To Realize a) the nice 18-volt PS really was high quality, and although it was transformer-isolated its output ground was tied to the wall-socket earth, b) monitors also tie HDMI cable ground to earth, and so c) my lash-up now had dueling grounds that were 9 V apart.</p>
]]></description><pubDate>Wed, 27 Nov 2024 18:02:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42258149</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42258149</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42258149</guid></item><item><title><![CDATA[New comment by mgsouth in "Using gRPC for (local) inter-process communication (2021)"]]></title><description><![CDATA[
<p>My most vivid gRPC experience is from 10 years or so ago, so things have probably changed. We were heavily Go and micro-services. Switched from, IIRC, protobuf over HTTP, to gRPC "as it was meant to be used." Ran into a weird, flaky bug--after a while we'd start getting transaction timeouts. Most stuff would get through, but errors would build and eventually choke everything.<p>I finally figured out it was a problem with specific pairs of servers. Server A could talk to C, and D, but would timeout talking to B. The gRPC call just... wouldn't.<p>One good thing is you <i>do</i> have the source to everything. After much digging through amazingly opaque code, it became clear there was a problem with a feature we didn't even need. If there are multiple sub-channels between servers A and B, gRPC will bundle them into one connection. It also provides protocol-level in-flight flow limits, both for individual sub-channels and the combined A-B bundle. It does it by using "credits". Every time a message is sent from A to B it decrements the available credit limit for the sub-channel, and decrements another limit for the bundle as a whole. When the message is <i>processed by the recipient process</i> then the credit is added back to the sub-channel and bundle limits. Out of credits? Then you'll have to wait.<p>The problem was that failed transactions were not credited back. Failures included processing time-outs. With time-outs the sub-channel would be terminated, so that wasn't a problem. The issue was with the bundle. The protocol spec was (is?) silent as to who owned the credits for the bundle, and who was responsible for crediting them back in failure cases. The gRPC code for Go, at the time, didn't seem to have been written or maintained by Google's most-experienced team (an intern, maybe?), and this was simply dropped. The result was the bundle got clogged, and A and B couldn't talk. 
Comm-level backpressure wasn't doing us any good (we needed full app-level), so for several years we'd just patch new Go libraries and disable it.</p>
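The failure mode can be shown with a toy credit pool. This is not the real gRPC/HTTP/2 implementation, just a hedged sketch of the leak described above: debits happen on send, refunds only on success, so every failure permanently shrinks the connection's budget.

```python
# Toy model of credit-based flow control. A real implementation tracks
# byte-sized windows per stream and per connection; one credit = one message
# here to keep the leak visible.
class CreditPool:
    def __init__(self, credits):
        self.credits = credits

    def send(self, ok, refund_on_failure):
        if self.credits == 0:
            return "blocked"            # out of credits: the caller stalls
        self.credits -= 1               # debit one credit per in-flight message
        if ok or refund_on_failure:
            self.credits += 1           # refund when processing completes
        return "sent"

buggy = CreditPool(credits=3)
for _ in range(3):
    buggy.send(ok=False, refund_on_failure=False)    # failures leak credits
print(buggy.send(ok=True, refund_on_failure=True))   # -> "blocked": bundle clogged
```

With `refund_on_failure=True` the pool never drains, which is the behavior the spec arguably should have mandated for the bundle-level window.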
]]></description><pubDate>Wed, 20 Nov 2024 22:43:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=42198907</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42198907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42198907</guid></item><item><title><![CDATA[New comment by mgsouth in "Using gRPC for (local) inter-process communication (2021)"]]></title><description><![CDATA[
<p>One does not simply walk into RPC country. Communication modes are architectural decisions, and they flavor everything. There's as much difference between IPC and RPC as there is between popping open a chat window to ask a question, and writing a letter on paper and mailing it. In both cases you can pretend they're equivalent, and it will work after a fashion, but your local communication will be vastly more inefficient and bogged down in minutia, and your remote comms will be plagued with odd and hard-to-diagnose bottlenecks and failures.<p>Some <i>generalities</i>:<p>Function call: The developer just calls it. Blocks until completion, errors are due to bad parameters or a resource availability problem. They are handled with exceptions or return-code checks. Tests are also simple function calls. Operationally everything is, to borrow a phrase from aviation regarding non-retractable landing gear, "down and welded".<p>IPC: Architecturally, and as a developer, you start worrying about your function as a resource. Is the IPC recipient running? It's possible it's not; that's probably treated as fatal and your code just returns an error to its caller. You're more likely to have an m:n pairing between caller and callee instances, so requests will go into a queue. Your code may still block, but with a timeout, which will be a fatal error. Or you might treat it as a co-routine, with the extra headaches of deferred errors. You probably won't do retries. Testing has some more headaches, with IPC resource initialization and tear-down. You'll have to test queue failures. Operations is also a bit more involved, with an additional resource that needs to be baby-sat, and co-ordinated with multiple consumers.<p>RPC: IPC headaches, but now you need to worry about lost messages, and messages processed but the acknowledgements were lost. Temporary failures need to be faced and re-tried. 
You will need to think in terms of "best effort", and continually make decisions about how that is managed. You'll be dealing with issues such as at-least-once delivery vs. at-most-once. Consistency issues will need to be addressed much more than with IPC, and they will be thornier problems. Resource availability awareness will seep into everything; application-level back-pressure measures _should_ be built-in. Treating RPC as simple blocking calls will be a continual temptation; if you or less-enlightened team members succumb, then you'll have all kinds of flaky issues. Emergent, system-wide behavior will rear its ugly head, and it will involve counter-intuitive interactions (such as bigger buffers reducing throughput). Testing now involves three non-trivial parts--your code, the called code, and the communications mechanisms. Operations gets to play with all kinds of fun toys to deploy, monitor, and balance usage.</p>
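The "temporary failures need to be faced and re-tried" part has a standard shape. A minimal sketch (names like `call_with_retries` and the flaky stand-in are hypothetical; assumes the call is idempotent, i.e. safe under at-least-once delivery):

```python
import random
import time

def call_with_retries(rpc, deadline_s=2.0, base_delay_s=0.05):
    """Retry a flaky call with jittered exponential backoff until an
    overall deadline, then surface the failure to the caller."""
    start = time.monotonic()
    attempt = 0
    while True:
        try:
            return rpc()
        except TimeoutError:
            attempt += 1
            delay = base_delay_s * (2 ** attempt) * random.uniform(0.5, 1.5)
            if time.monotonic() - start + delay > deadline_s:
                raise                   # deadline exhausted: give up honestly
            time.sleep(delay)

# Flaky stand-in for a remote call: times out twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(call_with_retries(flaky))  # -> ok
```

The jitter and the overall deadline are the non-obvious parts: without jitter, synchronized retries from many callers produce exactly the emergent thundering-herd behavior mentioned above.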
]]></description><pubDate>Wed, 20 Nov 2024 22:15:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42198692</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42198692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42198692</guid></item><item><title><![CDATA[New comment by mgsouth in "Emit-C: A time travelling programming language"]]></title><description><![CDATA[
<p>Ooh, clever. I vaguely envision entropy problems: some mash-up of Gödel's incompleteness theorem, Maxwell's demon, Bell's inequality, and Newton's laws conspires against it. Maybe sending changes back adds entropy, or moves it around? Would make a good old-school SF story, with backwater multi-verse dumps for waste entropy, a free-lance troubleshooter uncovering a secret corporate scandal regarding deleterious effects, etc.</p>
]]></description><pubDate>Wed, 20 Nov 2024 05:52:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=42191141</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42191141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42191141</guid></item><item><title><![CDATA[New comment by mgsouth in "SpaceX Super Heavy splashes down in the gulf, canceling chopsticks landing"]]></title><description><![CDATA[
<p>No. This and all previous flights have intentionally been barely sub-orbital, with less than one orbit. Launch in Texas, re-entry over the Indian Ocean. A full orbit at that altitude takes about 90 minutes; this one was done in a little over an hour.<p>The reason was safety. If it was orbital, then controlling the re-entry would <i>require</i> a successful relight of the engines. If that failed, then the re-entry point would depend upon the vagaries of orbital decay from residual atmospheric drag. That's no doubt why today's relight was so brief; they didn't want to significantly alter the reentry point.</p>
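The "about 90 minutes" figure is a one-line Kepler's-third-law check. A back-of-envelope sketch (the ~200 km circular-orbit altitude is an illustrative assumption, not SpaceX's actual trajectory):

```python
import math

# Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
MU_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0        # equatorial radius, m

a = R_EARTH + 200_000.0      # semi-major axis for a 200 km circular orbit
T = 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH)

print(round(T / 60, 1))      # -> 88.5 (minutes)
```

A trajectory that comes down in just over an hour is therefore necessarily short of one full orbit, which is the point being made.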
]]></description><pubDate>Wed, 20 Nov 2024 02:35:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=42190343</link><dc:creator>mgsouth</dc:creator><comments>https://news.ycombinator.com/item?id=42190343</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42190343</guid></item></channel></rss>