<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lgg</title><link>https://news.ycombinator.com/user?id=lgg</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 11:00:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lgg" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lgg in "ZFS: Apple's new filesystem that wasn't (2016)"]]></title><description><![CDATA[
<p>That is ABSOLUTELY incorrect. SSDs have enormous amounts of error detection and correction built in, explicitly because errors on the raw medium are so common that without it you would never be able to read correct data from the device.<p>It has been years since I was familiar enough with the insides of SSDs to tell you exactly what they are doing now, but even ~10-15 years ago it was normal for each raw 2k block to actually be ~2176+ bytes and use at least 128 bytes for LDPC codes. Since then the block sizes have gone up (which reduces the number of bytes you need to achieve equivalent protection) and the lithography has shrunk (which increases the raw error rate).<p>Where exactly the error correction is implemented (individual dies, SSD controller, etc.) and how it is reported can vary depending on the application, but I can say with assurance that there is no chance your OS sees uncorrected bits from your flash dies.</p>
]]></description><pubDate>Sun, 27 Apr 2025 15:47:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43812749</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=43812749</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43812749</guid></item><item><title><![CDATA[New comment by lgg in "OpenAI Is a Bad Business"]]></title><description><![CDATA[
<p>It really is not the same. Amazon was not profitable because it was building out logistics, and then AWS data centers. There were defensible moats around their business that their growth facilitated. Google built out data centers and fiber, again tangible assets they had that competitors did not.<p>OpenAI's spending is mostly buying compute from other people. In other words, OpenAI's growth is paying for Microsoft's data centers. The only real asset OpenAI is building is their models. While they may be the best models available today, it is unclear if that provides any durable advantage since everyone in the industry is advancing very quickly, and it is unclear if they can really monetize them effectively without the infrastructure to run them.</p>
]]></description><pubDate>Tue, 15 Oct 2024 17:55:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=41851159</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=41851159</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41851159</guid></item><item><title><![CDATA[New comment by lgg in "The Remarkable Life of Ibelin"]]></title><description><![CDATA[
<p>Target did not figure out a teen was pregnant before she did. She knew she was pregnant, which led to changes in her purchasing habits. Target detected that and sent her promotions, which disclosed her pregnancy to her father, who had not been informed.<p>Here is a gift link to a NYTimes article with more details:
<a href="https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?unlocked_article_code=1.QE4.BIGd.4FW_-KgfGNrf&smid=nytcore-ios-share&referringSource=articleShare" rel="nofollow">https://www.nytimes.com/2012/02/19/magazine/shopping-habits....</a></p>
]]></description><pubDate>Sun, 06 Oct 2024 01:07:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=41753998</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=41753998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41753998</guid></item><item><title><![CDATA[New comment by lgg in "I'm funding Ladybird because I can't fund Firefox"]]></title><description><![CDATA[
<p>From <a href="https://ladybird.org" rel="nofollow">https://ladybird.org</a> (not some subpage, but literally on the main page):<p>Why build a new browser in C++ when safer and more modern languages are available?<p>Ladybird started as a component of the SerenityOS hobby project, which only allows C++. The choice of language was not so much a technical decision, but more one of personal convenience. Andreas was most comfortable with C++ when creating SerenityOS, and now we have almost half a million lines of modern C++ to maintain.<p>However, now that Ladybird has forked and become its own independent project, all constraints previously imposed by SerenityOS are no longer in effect. We are actively evaluating a number of alternatives and will be adding a mature successor language to the project in the near future. This process is already quite far along, and prototypes exist in multiple languages.</p>
]]></description><pubDate>Sun, 07 Jul 2024 22:15:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=40900916</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=40900916</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40900916</guid></item><item><title><![CDATA[New comment by lgg in "Private Cloud Compute: A new frontier for AI privacy in the cloud"]]></title><description><![CDATA[
<p>From: <a href="https://support.apple.com/guide/security/secure-enclave-sec59b0b31ff/1/web/1" rel="nofollow">https://support.apple.com/guide/security/secure-enclave-sec5...</a><p>“A randomly generated UID is fused into the SoC at manufacturing time. Starting with A9 SoCs, the UID is generated by the Secure Enclave TRNG during manufacturing and written to the fuses using a software process that runs entirely in the Secure Enclave. This process protects the UID from being visible outside the device during manufacturing and therefore isn’t available for access or storage by Apple or any of its suppliers.“</p>
]]></description><pubDate>Tue, 11 Jun 2024 14:44:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=40646758</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=40646758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40646758</guid></item><item><title><![CDATA[New comment by lgg in "Speeding up ELF relocations for store-based systems"]]></title><description><![CDATA[
<p>Absolutely, I just think "when adhering to this convention" is a high-risk assumption. Admittedly I mostly work on macOS so I don't have nearly as deep an experience with ELF, but in my experience even when a system looks to be well maintained, you often find surprising numbers of projects being "clever" and breaking conventions as soon as you try to do something that depends on everyone actually globally following the convention.</p>
]]></description><pubDate>Tue, 07 May 2024 02:10:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=40281581</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=40281581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40281581</guid></item><item><title><![CDATA[New comment by lgg in "Speeding up ELF relocations for store-based systems"]]></title><description><![CDATA[
<p>Not really... symbol versioning is a form of namespacing, but it is somewhat orthogonal to this.<p>Symbol versioning allows you to have multiple symbols with the same name namespaced by version, but you still have no control over what library in the search path they will be found in. So it does not improve the speed of the runtime searching (since they could be in any library on the search path and you still need to search for them in order), and it does not provide the same binary compatibility support and dylib hijacking protection (since again, any dylibs earlier in the search path could declare a symbol with the same name).<p>One could use symbol versioning to construct a system where you had the same binary protection guarantees, but it would involve every library declaring a unique version string, and guaranteeing there are no collisions. The obvious way to do that would be to use the file path as the symbol version, at which point you have reinvented mach-o install names, except:<p>1. You still do not get the runtime speed ups unless you change the dynamic linker behavior to use the version string as the search path, which would require ecosystem wide changes.<p>2. You can't actually use symbol versioning to do versioned symbols any more, since you overloaded the use of version strings (mach-o binaries end up accomplishing symbol versioning through header tricks with `asmname`, so it is not completely intractable to do even without explicit support).</p>
]]></description><pubDate>Mon, 06 May 2024 22:29:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=40280239</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=40280239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40280239</guid></item><item><title><![CDATA[New comment by lgg in "Speeding up ELF relocations for store-based systems"]]></title><description><![CDATA[
<p>Windows and macOS both use a form of two level name-spacing, which does the same sort of direct binding to a target library for each symbol. Retrofitting that into a binary format is pretty simple, but retrofitting it into an ecosystem that depends on the existing flat namespace look up semantics is not. I think it is pretty clever that the author noticed the static nature of the nix store allows them to statically evaluate the symbol resolutions and get the launch time benefits of two level namespaces.<p>I do wonder if it might make more sense to rewrite the binaries to use Direct Binding[1]. That is an existing encoding of library targets for symbols in ELF that has been used by Solaris for a number of years.<p>1: <a href="https://en.wikipedia.org/wiki/Direct_binding" rel="nofollow">https://en.wikipedia.org/wiki/Direct_binding</a></p>
]]></description><pubDate>Sun, 05 May 2024 21:23:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=40268546</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=40268546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40268546</guid></item><item><title><![CDATA[New comment by lgg in "Linkers and Loaders (1999) [pdf]"]]></title><description><![CDATA[
<p>Conceptually not much has changed since the book was written, but in practice there has been a lot of advancement. For example, ASLR and the increase in the number of libraries have greatly increased the pressure to make relocations efficient, modern architectures including PC relative load/store and branch instructions have greatly reduced the cost of PIC code, and code signing has made mutating program text to apply relocations problematic.<p>On Darwin we redesigned our fixup format so it can be efficiently applied during page in. That did include adding a few new load commands to describe the new data, as well as a change in how we store pointers in the data segments, but those are not really properties of mach-o so much as the runtime.<p>I generally find that a lot of things attributed to the file format are actually more about how it is used rather than what it supports. Back when Mac OS X first shipped people argued about PEF vs mach-o, but what all the arguments boiled down to was the calling convention (TOC vs GOT), either of which could have been supported by either format.<p>Another example is symbol lookup. Darwin uses two level namespaces (where binds include both the symbol name and the library it is expected to be resolved from), and Linux uses flat namespaces (where binds only include the symbol name, which is then searched for in all the available libraries). People often act as though that is a file format difference, but mach-o supports both (though the higher level parts of the Darwin OS stack depend on two level namespaces, the low level parts can work with a flat namespace, which is important since a lot of CLI tools that are primarily developed on Linux depend on that). Likewise, ELF also supports both; Solaris uses two level namespaces (they call it ELF Direct Binding).</p>
]]></description><pubDate>Thu, 07 Mar 2024 00:30:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=39623639</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=39623639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39623639</guid></item><item><title><![CDATA[New comment by lgg in "Memory leak proof every C program"]]></title><description><![CDATA[
<p>There is a bug here... Clearly the author intended to cache the value of nextmalloc to avoid calling dlsym() on every malloc. The correct code should be:<p><pre><code>  static void *(*nextmalloc)(size_t) = NULL;
  if (!nextmalloc) {
    nextmalloc = dlsym(RTLD_NEXT, "malloc");
  }
</code></pre>
Somehow the fact that the optimization was implemented incorrectly here feels appropriate ;-)</p>
]]></description><pubDate>Sun, 21 Jan 2024 00:46:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=39074238</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=39074238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39074238</guid></item><item><title><![CDATA[New comment by lgg in "Going declarative on macOS with Nix and Nix-Darwin"]]></title><description><![CDATA[
<p>It really isn't unavoidable on macOS. @rpath is designed specifically to handle this sort of thing, and is how all the command line tools distributed with Xcode link to libraries embedded in Xcode and continue working even when you drag Xcode to a new location.<p>Admittedly supporting that would require updating how all the tools are built and not just defaulting to Whatever Linux Does™, which is probably too much effort to justify in this case, but it is hardly an unsolvable (or even an unsolved) problem.</p>
]]></description><pubDate>Tue, 16 Jan 2024 21:12:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=39019202</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=39019202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39019202</guid></item><item><title><![CDATA[New comment by lgg in "Nordic is getting involved in RISC-V"]]></title><description><![CDATA[
<p>I have been out of this area for almost a decade now, but I have very fond memories of the nRF5 SDK. When I was evaluating the (then new) Nordic BLE SoC's for future products it was so much nicer than the TI CC2540 we had used in our first BLE device.</p>
]]></description><pubDate>Fri, 10 Nov 2023 21:02:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=38224490</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=38224490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38224490</guid></item><item><title><![CDATA[New comment by lgg in "Analogue 3D"]]></title><description><![CDATA[
<p>That is only true as the speed of the system performing the emulation approaches infinity ;-)<p>Yes, you can do all the same things in software, in fact it is trivial: just take the same output from your EDA tools and run it in a simulator. Of course that is so slow it cannot interface with (most) real external HW like CRTs and accessories, but in some technical sense it is software taking the exact same set of inputs as an FPGA, and generating the exact same outputs (just much, much slower).<p>If we accept that as the premise then we can consider emulators an optimization where instead of using the simulated verilog we try to manually write code that performs equivalent operations, but can run fast enough to hit the original timing constraints of the HW we are replacing. The thing is that the code is constrained by the limits of the modern HW it is running on, and sometimes the modern HW just cannot do what legacy HW did.<p>An NES does not have a frame buffer (it does not even have enough ram to hold ~5% of a rendered frame of its output!). To cope with that the games generate their output line by line as the video signal is being generated. What that means is that when you click a button on the controller it can change the output of the scanline that is currently being written to the screen (and releasing it can update the output before the frame is finished, changing subsequent lines). IOW, the input latency is less than a single frame. That is not true with modern computers, where we render into a memory mapped frame buffer which is then transmitted to the screen by a complex series of chips including the GPU and DC, and ultimately synchronized on the blanking intervals.<p>On an FPGA you can design a display pipeline that matches that of legacy consoles, and get the same latency.
Of course you could also do the same thing in software emulation on a computer if you clock it so high that it renders and outputs one frame of video for each scanline of output on the original, but given the NES had a framerate of ~60 (59.94) fps and a vertical resolution of 240p, that comes out to a framerate of ~14,400 fps to hit the latency target for accurate emulation.<p>Now in practice most of the time it is a non-issue and emulation is more than sufficient, but some old games do very funky things to exploit whatever they could on the limited HW they ran on.<p>It is also worth noting that FPGAs are a lot more interesting for older systems. Once you get to more modern systems that look more like modern computers the strict timing becomes less important. In particular, once you get to consoles that have frame buffers the timing becomes much less sensitive, because the frame buffer acts as a huge synchronization point where you can absorb and hide a lot of timing mismatches.</p>
]]></description><pubDate>Mon, 16 Oct 2023 16:48:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=37902596</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=37902596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37902596</guid></item><item><title><![CDATA[New comment by lgg in "Why did the Motorola 68000 processor family fall out of use in PCs?"]]></title><description><![CDATA[
<p>It is a pretty easy mistake to make if you are used to how fast new processors come out now, but you are comparing an i386 from 1991 to a 68020 from the mid 80s.<p>In 1985 when the 386 came out I believe the fastest speed you could get was 16MHz. They added higher speed variants for years afterwards. Intel made a 40MHz 386 in 1991 that was strictly aimed at embedded users who wanted more perf but were not ready to move to 486 based designs (386CX40); I doubt almost anyone used one in a PC. AMD made an Am386 at 40MHz which was a reverse engineered clone of the 386, but again that came out in the 90s (the big selling point was that you could reuse your existing 386 motherboards instead of replacing them like you needed to for a 486).</p>
]]></description><pubDate>Sat, 07 Oct 2023 17:41:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=37803785</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=37803785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37803785</guid></item><item><title><![CDATA[New comment by lgg in "Humane, a startup full of ex-iPhone talent, trying to make phones obsolete"]]></title><description><![CDATA[
<p>I find the whole idea unappealing, but they did keep their promise and show off a prototype back in the spring: <a href="https://www.youtube.com/watch?v=gMsQO5u7-NQ" rel="nofollow">https://www.youtube.com/watch?v=gMsQO5u7-NQ</a></p>
]]></description><pubDate>Sun, 01 Oct 2023 20:50:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=37730596</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=37730596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37730596</guid></item><item><title><![CDATA[New comment by lgg in "macOS Containers v0.0.1"]]></title><description><![CDATA[
<p>Generally speaking macOS does not guarantee syscall stability, and does not guarantee compatibility for any binaries not linked to `libSystem.dylib` (that is the supported ABI boundary)[1]. This has a number of implications, including (but not limited to):<p>* The most obvious is the commonly mentioned fact that syscalls may change. Here is an example where a golang program broke because it was directly using the `gettimeofday()` syscall[2].<p>* The interface between the kernel and the dynamic linker (which is required since ABI stability for statically linked executables is not guaranteed) is private and may change between versions. That means if your chroot contains a `dyld` from an OS version that is not the same as the host kernel it may not work.<p>* The format of the dyld shared cache changes most releases, which means you can't just use the old dyld that matches the host kernel in your chroot because it may not work with the dyld shared cache for the OS you are trying to run in the chroot.<p>* The system maintains a number of security policies around platform binaries, and those binaries are enumerated as part of the static trust cache[3]. Depending on what you are doing and what permissions it needs you may not be able to even run the system binaries from another release of macOS.<p>In practice you can often get away with a slight skew (~1 year), but you can rarely get away with skews of more than 2-3 years.<p>[1]: <a href="https://developer.apple.com/library/archive/qa/qa1118/_index.html" rel="nofollow noreferrer">https://developer.apple.com/library/archive/qa/qa1118/_index...</a><p>[2]: <a href="https://github.com/golang/go/issues/16606">https://github.com/golang/go/issues/16606</a><p>[3]: <a href="https://support.apple.com/guide/security/trust-caches-sec7d38fbf97/web" rel="nofollow noreferrer">https://support.apple.com/guide/security/trust-caches-sec7d3...</a></p>
]]></description><pubDate>Tue, 26 Sep 2023 15:31:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=37660835</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=37660835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37660835</guid></item><item><title><![CDATA[New comment by lgg in "Nushell and Uutils"]]></title><description><![CDATA[
<p>I presume the reason they do it is that the premise of Nushell is that it uses pipes of structured output instead of simple text streams. That means that they need all the tools to output data in that form. They could include wrappers for all OS provided binaries and handle the conversion in those wrappers, but that makes them incredibly fragile to minor output or flag changes, and in many cases those wrappers would end up being more complex than the commands themselves.</p>
]]></description><pubDate>Mon, 18 Sep 2023 19:17:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=37560688</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=37560688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37560688</guid></item><item><title><![CDATA[New comment by lgg in "Mac ROM-Inator II Restock and Partnerships"]]></title><description><![CDATA[
<p>I think you missed my point. It wasn’t just ROM limited; the ram controller they used in the LC did not have enough address lines to address more than 4MB per SIMM slot, end of story. No amount of firmware hacking can ever make it support 16MB SIMMs without what amounts to a total board redesign. Given the 2MB soldered to the board (which took the address lines for two of the slots), that meant the machine was physically limited to 10MB unless you wanted to break out a soldering iron. Yes, the ROM has a software limit, but it reflected the actual limits of the HW (and more likely was due to how the ROM went about detecting the ram than any explicit intent to limit things… it is not shocking that the software only works with supported physical configurations and not board reworks).<p>The LCII on the other hand is a bit less excusable, since it could physically hold 12MB but only 10 was usable. As I said, I suspect the reason is that it was a fairly quick revision they squeezed in before the redesigned LCIII and they just didn’t rev those parts of the ROM, but it still seemed pretty bad.<p>If you want more info there is a pretty deep dive on this here: <a href="https://68kmla.org/bb/index.php?threads/technical-explanation-of-why-the-lc-lc-ii-and-classic-ii-have-a-10mb-ram-limit.43622/" rel="nofollow noreferrer">https://68kmla.org/bb/index.php?threads/technical-explanatio...</a></p>
]]></description><pubDate>Wed, 13 Sep 2023 01:54:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=37491253</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=37491253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37491253</guid></item><item><title><![CDATA[New comment by lgg in "Mac ROM-Inator II Restock and Partnerships"]]></title><description><![CDATA[
<p>Not to say there has never been software based market segmentation, but this example is just not right.<p>First off, the LC was in no way a threat to the IIci. The IIci had a 32 bit data bus with a 25MHz 68030 and supported a CPU cache. The LC had a 16MHz 68020 with a 16 bit bus. The IIci was conservatively twice as fast.<p>Second, the LC HW did not support nearly as much ram as the IIci. It shipped with 2MB soldered down (which logically you can think of as 2 1MB SIMMs) and had 2 slots that each supported 4MB SIMMs, which were the highest density commonly available at the time. The (cheaper) memory controller used in the LC only supported 24 bits of physical addresses (and only in so many configurations), resulting in a maximum of ~16MB. Once you account for the soldered down two megabytes and how the slots had to be configured, that left you with the ability to install 4MB into each slot, so you get 10MB.<p>Technically speaking it was probably possible to get it to support 12MB or 16MB with a ROM patch if you desoldered the builtin memory and soldered from the address lines on the controller to some custom memory board. But as shipped, with the builtin RAM and the controller chip they included, 10MB was the most it could reasonably use.<p>The LCII did up the builtin memory to 4MB and had a software limit of 10MB like the LC (which meant if you installed 4MB SIMMs you would be missing 2MB), but I suspect that was more a result of how quickly it came to market (it was essentially just an LC with a 68030 and 4MB of ram, both of which greatly improved the experience of using the machine with System 7, which shipped after the original Mac LC).<p>Within a year or so after the LCII, the LCIII shipped with a completely redesigned board, and it supported 36MB of ram.<p>Source: I owned a Mac LC, paid for and installed a 2MB memory upgrade to get it to 4MB, then eventually did a motherboard swap to upgrade it to an LCIII. 
I can even still tell you how much each of those upgrades cost ;-)</p>
]]></description><pubDate>Mon, 11 Sep 2023 23:13:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=37474956</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=37474956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37474956</guid></item><item><title><![CDATA[New comment by lgg in "Using LD_PRELOAD to cheat, inject features and investigate programs"]]></title><description><![CDATA[
<p>This is awesome. macOS actually enables the same env var protections by default if your process is opted into the hardened runtime. You can do that by passing --options=runtime to your codesign invocation.</p>
]]></description><pubDate>Fri, 08 Sep 2023 21:52:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=37439846</link><dc:creator>lgg</dc:creator><comments>https://news.ycombinator.com/item?id=37439846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37439846</guid></item></channel></rss>