<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: keithwinstein</title><link>https://news.ycombinator.com/user?id=keithwinstein</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 19:12:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=keithwinstein" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by keithwinstein in "Ask HN: Unemployed almost a year after graduating MIT – a rant"]]></title><description><![CDATA[
<p>I'm sorry you're going through this! But also a little suspicious because a nearly word-for-word message was posted four days ago on Reddit with some of the details different, including the major and presence of a master's degree, but most of the same phrasing (<a href="https://www.reddit.com/r/mit/comments/1q9gdff/unemployed_almost_a_year_after_graduating_mit_a/" rel="nofollow">https://www.reddit.com/r/mit/comments/1q9gdff/unemployed_alm...</a>). If these rants are somehow from the same person (maybe you did both majors and only discussed one in each post?), fair enough and I really am sorry, but I do wonder if we're being experimented upon. :-(</p>
]]></description><pubDate>Wed, 14 Jan 2026 20:44:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46623020</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=46623020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46623020</guid></item><item><title><![CDATA[New comment by keithwinstein in "Avoid 2:00 and 3:00 am cron jobs (2013)"]]></title><description><![CDATA[
<p>> Even more pedantically, “standard time” is not necessarily consistent across each zone (particularly, during the period for which in parts of the zone it is advanced by an hour) since “standard time” only advances for those states, or parts of states, for which an exemption is not in place.<p>I can't find a source (including 15 U.S.C. § 260a) that supports this reading, although I agree it's a little ambiguous. The law suggests that a region that doesn't observe DST is observing "the standard time otherwise applicable during that period" and is exempt from the provisions regarding advancement, not that "Pacific standard time" depends on where you are (see 15 U.S.C. § 263).<p>> So, the Unix-y convention [] is the simplest way<p>No argument there!</p>
]]></description><pubDate>Tue, 28 Oct 2025 00:50:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45728153</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=45728153</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45728153</guid></item><item><title><![CDATA[New comment by keithwinstein in "Avoid 2:00 and 3:00 am cron jobs (2013)"]]></title><description><![CDATA[
<p>If we're being really pedantic, in the U.S. context the zones observe "standard time" all year. "Standard time" refers to the standardization across the zone, and the practice of advancing the clock during daylight-saving time changes each zone's standard time. The Unix-style usage of "EST" vs. "EDT" isn't pedantically correct (e.g., New York observes "eastern standard time" even in summer).<p>See 15 U.S.C. §§ 260a & 263 (<a href="https://www.law.cornell.edu/uscode/text/15/chapter-6/subchapter-IX" rel="nofollow">https://www.law.cornell.edu/uscode/text/15/chapter-6/subchap...</a>).</p>
]]></description><pubDate>Mon, 27 Oct 2025 23:15:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45727459</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=45727459</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45727459</guid></item><item><title><![CDATA[New comment by keithwinstein in "Oklahoma's "TV nudes" scandal was Jackie Chan movie on Samsung streaming service"]]></title><description><![CDATA[
<p>Shades of <a href="https://www.theregister.com/2006/03/24/tuttle_centos/" rel="nofollow">https://www.theregister.com/2006/03/24/tuttle_centos/</a> (from simpler times, almost 20 years ago!)</p>
]]></description><pubDate>Sat, 20 Sep 2025 10:14:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45312000</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=45312000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45312000</guid></item><item><title><![CDATA[New comment by keithwinstein in "Why MIT switched from Scheme to Python (2009)"]]></title><description><![CDATA[
<p>This story has been reposted many times, and I think GJS's remarks (as recorded by Andy Wingo) are super-interesting as always, but this is really not a great account of "why MIT switched from Scheme to Python."<p>Source: I worked with GJS (I also know Alexey and have met Andy Wingo), and I took 6.001, my current research still has us referring to SICP on a regular basis, and in 2006 Kaijen Hsiao and I were the TAs for what was <i>basically</i> the first offering of the class that quasi-replaced it (6.01) taught by Leslie Kaelbling, Hal Abelson, and Jacob White.<p>I would defer to lots of people who know the story better than me, but here's my understanding of the history. When the MIT EECS intro curriculum was redesigned in the 1980s, there was a theory that an EECS education should start with four "deep dives" into the four "languages of engineering." There were four 15-unit courses, each about one of these "languages":<p>- 6.001: Structure and Interpretation of Computer Programs (the "procedural" language, led by Abelson and Sussman)<p>- 6.002: Circuits and Electronics ("structural" language)<p>- 6.003: Signals and Systems ("functional" language)<p>- 6.004: Computation Structures ("architectural" language)<p>These were intellectually deep classes, although there was pain in them, and they weren't universally beloved. 6.001 wasn't really <i>about</i> Scheme; I think a lot of the point of using Scheme (as I understood it) is that the language is so minimalist and so beautiful that even this first intro course can be <i>about</i> fundamental concepts of computer science without getting distracted by the language. This intro sequence lasted until the mid-2000s, when enrollment in EECS ("Course 6") declined after the dot-com crash, and (as would be expected, and I think particularly worrisome) the enrollment drop was greater among demographic groups that EECS was eager to retain. 
My understanding circa 2005 is that there was a view that EECS had broadened in its applications, and that beginning the curriculum with four "deep dives" was off-putting to students who might not be as sure that they wanted to pursue EECS and might not be aware of all the cool places they could go with that education (e.g. to robotics, graphics, biomedical applications, genomics, computer vision, NLP, systems, databases, visualization, networking, HCI, ...).<p>I wasn't in the room where these decisions were made, and I bet there were multiple motivations for these changes, but I understood that was part of the thinking. As a result, the EECS curriculum was redesigned circa 2005-7 to de-emphasize the four 15-unit "deep dives" and replace them with two 12-unit survey courses, each one a survey of a bunch of cool places that EECS could go. The "6.01" course (led by Kaelbling, Abelson, and White) was about robots, control, sensing, statistics, probabilistic inference, etc., and students did projects where the robot drove around a maze (starting from an unknown position) and sensed the walls with little sonar sensors and did Bayesian inference to figure out its structure and where it was. The "6.02" course was about communication, information, compression, networking, etc., and eventually the students were supposed to each get a software radio and build a Wi-Fi-like system (the software radios proved difficult and, much later, I helped make this an acoustic modem project).<p>The goal of these classes (as I understood) was to expose students to a broad range of all the cool stuff that EECS could do and to let them get there sooner (e.g. 
two classes instead of four) -- keep in mind this was in the wake of the dot-com crash when a lot of people were telling students that if they majored in computer science, they were going to end up programming for an insurance company at a cubicle farm before their job was inevitably outsourced to a low-cost-of-living country.<p>6.01 used Python, but in a very different way than 6.001 "used" Scheme -- my recollection is that the programming work in 6.01 (at least circa 2006) was minimal and was only to, e.g., implement short programs that drove the robot and averaged readings from its sonar sensors and made steering decisions or inferred the robot location. It was nothing like the big programming projects in 6.001 (the OOP virtual world, the metacircular evaluator, etc.).<p>So I don't think it really captures it to say that MIT "switched from Scheme to Python" -- I think the MIT EECS intro sequence switched from four deep-dive classes to two survey ones, and while the first "deep dive" course (6.001) had included a lot of programming, the first of the new survey courses only had students write pretty small programs (e.g. "drive the robot and maintain equal distance between the two walls") where the simplest thing was to use a scripting language where the small amount of necessary information can be taught by example. But it's not like the students <i>learned</i> Python in that class.<p>My (less present) understanding is that >a decade after this 2006-era curricular change, the department has largely deprecated the idea of an EECS core curriculum, and MIT CS undergrads now go through something closer to a conventional CS0/CS1 sequence, similar to other CS departments around the country (<a href="https://www.eecs.mit.edu/changes-to-6-100a-b-l/" rel="nofollow">https://www.eecs.mit.edu/changes-to-6-100a-b-l/</a>). But all of that is long after the change that Sussman and Wingo are talking about here.</p>
]]></description><pubDate>Fri, 25 Jul 2025 19:14:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44687148</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=44687148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44687148</guid></item><item><title><![CDATA[New comment by keithwinstein in "BPS is a GPS alternative that nobody's heard of"]]></title><description><![CDATA[
<p>You don't need ATSC 3.0 to do this kind of thing! The short-term stability of the oscillators they use for commercial DTV transmission is apparently good enough that just having one local reference to compare GPS vs. each TV station's phase (and distribute that data) can produce a pretty good positioning system. Rosum was doing this back in 2005: <a href="https://www.tvtechnology.com/news/tv-signals-used-for-geopositioning" rel="nofollow">https://www.tvtechnology.com/news/tv-signals-used-for-geopos...</a></p>
]]></description><pubDate>Sun, 13 Apr 2025 03:25:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43669847</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=43669847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43669847</guid></item><item><title><![CDATA[New comment by keithwinstein in "Is NixOS truly reproducible?"]]></title><description><![CDATA[
<p>I think you're both right in a sense. Bazel doesn't (in general) prevent filesystem access, e.g. to library headers in /usr/include. If those headers change (maybe because a Debian package got upgraded or whatever), Bazel won't know it has to invalidate the build cache. I think the FAQ is still technically correct because upgrading the Debian package for a random library dependency counts as "chang[ing] the toolchain" in this context. But I don't think you'd call it hermetic by default.<p>Check out the previous discussion at <a href="https://news.ycombinator.com/item?id=23184843">https://news.ycombinator.com/item?id=23184843</a> and below:<p>> Under the hood there's a default auto-configured toolchain that finds whatever is installed locally in the system. Since it has no way of knowing what files an arbitrary "cc" might depend on, you lose hermeticity by using it.</p>
]]></description><pubDate>Wed, 12 Feb 2025 23:40:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=43031003</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=43031003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43031003</guid></item><item><title><![CDATA[New comment by keithwinstein in "Hofstadter on Lisp (1983)"]]></title><description><![CDATA[
<p>The first edition of SICP came out in the fall of 1984 (a year after these Hofstadter columns). This fall is the 40th anniversary!</p>
]]></description><pubDate>Thu, 17 Oct 2024 01:15:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=41865509</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=41865509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41865509</guid></item><item><title><![CDATA[New comment by keithwinstein in "Parsing protobuf at 2+GB/s: how I learned to love tail calls in C (2021)"]]></title><description><![CDATA[
<p>No -- as discussed upthread, clang's musttail attribute requires the target function to have the same number of arguments as the caller and for each argument to be similar to the corresponding caller argument. That's stricter than the underlying LLVM musttail marker (when targeting the tailcc/swifttailcc calling conventions) and is too restrictive to implement Wasm's tail-call feature (and probably Scheme's, etc.), at least if arguments are getting passed to functions natively.<p>It would be nice if the more relaxed rules of the LLVM musttail marker with tailcc could be exposed in clang (and gcc). I think that's basically what "return goto" would do.</p>
]]></description><pubDate>Mon, 19 Aug 2024 21:05:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=41294590</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=41294590</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41294590</guid></item><item><title><![CDATA[New comment by keithwinstein in "Parsing protobuf at 2+GB/s: how I learned to love tail calls in C (2021)"]]></title><description><![CDATA[
<p>Yes, wasm2c implements the Wasm tail-call feature with trampolines, exactly this way. (<a href="https://github.com/WebAssembly/wabt/blob/main/test/wasm2c/tail-calls.txt">https://github.com/WebAssembly/wabt/blob/main/test/wasm2c/ta...</a> has an example.)<p>Doing it with a trampoline is probably slower than if C really had tail calls. On the other hand, adding "real" tail calls to C would probably require changing the ABI (e.g. to "tailcc" or "fastcc -tailcallopt"), and I think there's some reason to think this would probably impose a penalty everywhere (<a href="https://llvm.org/docs/CodeGenerator.html#tail-call-optimization" rel="nofollow">https://llvm.org/docs/CodeGenerator.html#tail-call-optimizat...</a>).</p>
]]></description><pubDate>Mon, 19 Aug 2024 18:26:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=41293439</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=41293439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41293439</guid></item><item><title><![CDATA[New comment by keithwinstein in "A Historical Tour of Silicon Valley (2010)"]]></title><description><![CDATA[
<p>I don't think that's quite right -- I think you're thinking of 1601 S. California Ave. Facebook was there and at 1050 Page Mill, but not 1501 Page Mill afaik -- that was HP/HPE until recently and is still an office building, now leased by Tesla.<p>After Facebook moved out of 1601 California Ave. in 2011, it was leased to Theranos. Then it (and some of the buildings on adjacent parcels) got knocked down to turn it into a Stanford faculty housing development that opened in 2017-2019 (University Terrace). I think that's the only housing in the Stanford Research Park.</p>
]]></description><pubDate>Mon, 29 Jan 2024 06:08:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=39173224</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=39173224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39173224</guid></item><item><title><![CDATA[New comment by keithwinstein in "Smoother sailing: Studying audio imperfections in Steamboat Willie"]]></title><description><![CDATA[
<p>My understanding is that jitter refers to essentially stochastic variation of an ideally isochronous/periodic process or signal. You might say "the event is supposed to occur exactly every 1 second; in practice we observed jitter of +/- 50 milliseconds," referring to the RMS (1-standard-deviation estimate) of a sample of inter-event durations. Wikipedia has a nice article: <a href="https://en.wikipedia.org/wiki/Jitter" rel="nofollow">https://en.wikipedia.org/wiki/Jitter</a><p>In the context of video, I think the most common use of "judder" is about the deterministic and periodic variation in timing that occurs when content of one frame rate is shown on a different-rate display. The most common situation is a "3:2 pulldown," where 24 frame-per-second film content is adapted to a 60 field-per-second or frame-per-second video signal. This is done by repeating one film frame for 3 video fields/frames (so 3/60 seconds), and then the next film frame for 2 video fields/frames, then the next one for 3, then 2, then 3, then 2, etc. (2/5 = 24/60 so it works out on average.) That repeating variation in frame duration or number of repetitions is seen as "judder." With a cinema projector or recent TV, you don't have this; the frames are shown 1/24 s apart and with an equal number of flashes each. But on an older 60 Hz TV, you'll have judder.<p>(I've also seen "judder" occasionally used to refer to the stuttery motion that comes from showing 24 fps content on a sample-and-hold display, like an OLED [without black-frame insertion] or a "good" LCD that just shows each frame with constant brightness for the entire 1/24 s and then almost-instantly switches to the next frame. But I don't think this is the correct usage.)</p>
]]></description><pubDate>Fri, 26 Jan 2024 08:16:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=39140038</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=39140038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39140038</guid></item><item><title><![CDATA[New comment by keithwinstein in "A Lighthouse Keeper Hangs Up Her Bonnet"]]></title><description><![CDATA[
<p>You're definitely right that Boston Light is officially in Hull, but The Graves (still a USCG-managed lighthouse, and bigger than Boston Light) is in Boston. See <a href="https://arc-gis-hub-home-arcgishub.hub.arcgis.com/datasets/massgis::massachusetts-cities-and-towns-from-survey-points" rel="nofollow">https://arc-gis-hub-home-arcgishub.hub.arcgis.com/datasets/m...</a></p>
]]></description><pubDate>Wed, 27 Dec 2023 19:22:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=38785505</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=38785505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38785505</guid></item><item><title><![CDATA[New comment by keithwinstein in "A Lighthouse Keeper Hangs Up Her Bonnet"]]></title><description><![CDATA[
<p>I met Sally Snowman a few times -- we used to visit Boston Light 1-2 times every summer on MIT's sailboat with a group of students. She was super-welcoming and shared a ton of knowledge and stories about the lighthouse and station. We really enjoyed seeing the lens, etc., and thinking we were like the MIT students in the 1890s who lived out there to try to understand how sound traveled across the sea and create a better foghorn, I think not with a ton of success.<p>One time we arrived at Little Brewster and I was wearing a "captain's hat" that some friends had gotten me from West Marine, and the Coast Guard auxiliarists who helped us dock gave me a hard time, Boston-style. I think they don't love it when recreational sailors wear pretend rank insignia. They were smiling about it but I didn't wear the hat on later visits!<p>I hope she has a great retirement and they find somebody even half as dedicated to be her successor. The only reason Boston Light still has a keeper at all (the only one left in the country) is because Ted Kennedy and John Kerry sponsored an amendment to the 1989 Coast Guard Authorization Act to the effect that "The Boston Light shall be operated on a permanently manned basis." I just went back to look at it and in Ted Kennedy's remarks he says, "Thousands of visitors a year come out in Boston Harbor to visit to Boston Light. They learn about the history of the Light from the museum exhibits at its base. They climb the historic tower, and enjoy the view of the Boston skyline and the charm of Little Brewster Island. Most important, they gain a new understanding and appreciation of the important work of the Coast Guard and the role of Boston Light in our Nation's history." The part about thousands of visitors coming out I think is sadly no longer true. :-(</p>
]]></description><pubDate>Wed, 27 Dec 2023 19:09:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=38785370</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=38785370</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38785370</guid></item><item><title><![CDATA[New comment by keithwinstein in "The best WebAssembly runtime may be no runtime"]]></title><description><![CDATA[
<p>The goal usually isn't for <i>one</i> party to take a C program and transform it into "safe" machine code. The goal is for a possible adversary to take a C program and produce an IR, and then for somebody else (maybe you) to validate that IR and produce safe machine code. Wasm is a vastly better interchange format between distrustful parties than C would be!<p>(There are probably even better interchange formats coming on the horizon; Zachary Yedidia has some cutting-edge work on "lightweight fault isolation" that will be presented at the upcoming ASPLOS. Earlier talk here: <a href="https://youtu.be/AM5fdd6ULF0" rel="nofollow noreferrer">https://youtu.be/AM5fdd6ULF0</a> . But outside of the research world, it's hard to beat Wasm for this.)<p>Less important: I don't think going through Wasm has to be viewed as an "extra step" -- every compiler uses an IR, and if you want that IR to easily admit a "safe" lowering (especially one that enforces safety across independently compiled translation units), it will probably look at least a little like Wasm, which is quite minimal in its design. Remember that Wasm evolved from things like PNaCl which is basically LLVM IR, and RLBox/Firefox considered a bunch of other SFI techniques before wasm2c.</p>
]]></description><pubDate>Mon, 11 Dec 2023 23:32:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=38606967</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=38606967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38606967</guid></item><item><title><![CDATA[New comment by keithwinstein in "The best WebAssembly runtime may be no runtime"]]></title><description><![CDATA[
<p>Well, a bunch of ways.<p>It's much faster to execute than adding a software bounds-check on every load. (Because the module declares its memories explicitly, it's very easy for a runtime to use a zero-cost strategy to enforce that memory loads/stores are all in-bounds.)<p>But Wasm's safety is more than bounds-checking memory loads/stores. E.g., Wasm indirect function calls are safe, including cross-module function calls for modules compiled separately, because there's a runtime type check (which wasm2c does very efficiently, but not zero-cost).<p>And, Wasm modules are provably isolated (their only access outside the module is via explicit imports). Whereas if you wanted that from "normal C code," it's a lot harder -- at some point you'll have to scan something (the source? the object file?) to enforce isolation and make sure it's not, e.g., jumping to an arbitrary address or making a random syscall. There's obviously a huge amount of good work on SFI but it's not easy to do either on "normal C code" or on arbitrary x86-64 machine code.</p>
]]></description><pubDate>Mon, 11 Dec 2023 21:44:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=38605943</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=38605943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38605943</guid></item><item><title><![CDATA[New comment by keithwinstein in "The best WebAssembly runtime may be no runtime"]]></title><description><![CDATA[
<p>It takes tens or hundreds of microseconds to launch a new thread on Linux, and tens or hundreds of milliseconds (or more) to launch a new VM.<p>It takes tens of <i>cycles</i> to instantiate a Wasm module and call one of its exported functions.<p>There are some serious benefits to OS-mediated hardware isolation, but there are also some real benefits to the "ahead-of-time" isolation you can get from something like Wasm (e.g. via wasm2c->a C compiler->machine code, but also with more mainstream tools like wasmtime).</p>
]]></description><pubDate>Mon, 11 Dec 2023 20:03:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=38604770</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=38604770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38604770</guid></item><item><title><![CDATA[New comment by keithwinstein in "The best WebAssembly runtime may be no runtime"]]></title><description><![CDATA[
<p>wasm2c (part of WABT) does this transpilation in a spec-conforming way; it passes all* the WebAssembly tests and enforces the memory-safety and determinism requirements and the rest of the spec. The memory bounds-checking itself doesn't have a runtime performance impact because it's all done with mprotect() and a segfault handler. (There are some other differences between w2c2 and wasm2c that also have to do with spec-conformance and safety; e.g., enforcing type-safety of indirect function calls. This costs <4 cycles but it's not zero.)<p>Re: bounds checks, the thing that consumes cycles isn't the bounds check itself, it's Wasm's requirement that OOB accesses produce a <i>deterministic</i> trap, even if the result of an OOB load is never observed and could be optimized out. wasm2c has to prevent the compiler from optimizing out an unobserved OOB load, and that forced liveness defeats some compiler optimizations (probably more than it needs to). But even with all that, we're talking like a <30% slowdown compared with native compilation across the SPECcpu benchmarks.<p>If you want to transpile arbitrary Wasm to native code in a spec-conforming way, you're probably better-off using wasm2c (which, disclosure, I work on). If you trust the Wasm module, or you're good with the isolation you get from your operating system and don't need Wasm's determinism, w2c2 seems great. Both of these are far less battle-hardened than V8 or wasmtime, especially when you include the fact that now you need an optimizing C compiler in the TCB.<p>---<p>* The Wasm testsuite repo has recently merged in the "v4" version of the exception-handling proposal, and WABT is still on "v3". But it does pass all the core tests (including tail calls) at least until GC is merged.</p>
]]></description><pubDate>Mon, 11 Dec 2023 19:53:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=38604628</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=38604628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38604628</guid></item><item><title><![CDATA[New comment by keithwinstein in "I want to convince you to have an on-premise offering"]]></title><description><![CDATA[
<p>ObNote that in standard English it's "premises," even for a single building. Some explanation here: <a href="https://ahdictionary.com/word/search.html?q=premises" rel="nofollow noreferrer">https://ahdictionary.com/word/search.html?q=premises</a></p>
]]></description><pubDate>Tue, 05 Dec 2023 18:15:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=38534782</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=38534782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38534782</guid></item><item><title><![CDATA[New comment by keithwinstein in "Write your own terminal"]]></title><description><![CDATA[
<p>:-) The UTF-8/Unix FAQ and existing terminal emulators don't agree with you here. As you say, there's no spec for this, but here's what Kuhn's FAQ says (<a href="https://www.cl.cam.ac.uk/~mgk25/unicode.html#term" rel="nofollow noreferrer">https://www.cl.cam.ac.uk/~mgk25/unicode.html#term</a>):<p>"UTF-8 still allows you to use C1 control characters such as CSI, even though UTF-8 also uses bytes in the range 0x80-0x9F. It is important to understand that a terminal emulator in UTF-8 mode must apply the UTF-8 decoder to the incoming byte stream before interpreting any control characters. C1 characters are UTF-8 decoded just like any other character above U+007F."<p>The existing ANSI terminal emulators that support UTF-8 input and C1 controls seem to agree on this (VTE, GNU screen, Mosh). xterm, urxvt, tmux, PuTTY, and st don't seem to support C1 controls in UTF-8 mode. So I don't think poking holes in the UTF-8 decoder is necessary, especially since allowing C1 in UTF-8 mode is rare anyway.</p>
]]></description><pubDate>Sun, 12 Nov 2023 04:00:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=38237171</link><dc:creator>keithwinstein</dc:creator><comments>https://news.ycombinator.com/item?id=38237171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38237171</guid></item></channel></rss>