<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: rnijveld</title><link>https://news.ycombinator.com/user?id=rnijveld</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 18:06:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=rnijveld" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by rnijveld in "Async Rust never left the MVP state"]]></title><description><![CDATA[
<p>You realize this article talks about Rust on embedded hardware specifically, where you don’t have threads or big runtimes? There is no hate going on here either, just attempts to make things better. Might I suggest you click through to the homepage? I think you’ll figure out the rest.</p>
]]></description><pubDate>Tue, 05 May 2026 08:28:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=48019565</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=48019565</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48019565</guid></item><item><title><![CDATA[Three Years of Rusty Sudo]]></title><description><![CDATA[
<p>Article URL: <a href="https://trifectatech.org/blog/three-years-of-rusty-sudo/">https://trifectatech.org/blog/three-years-of-rusty-sudo/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47947850">https://news.ycombinator.com/item?id=47947850</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 29 Apr 2026 13:06:52 +0000</pubDate><link>https://trifectatech.org/blog/three-years-of-rusty-sudo/</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=47947850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47947850</guid></item><item><title><![CDATA[New comment by rnijveld in "Please donate to keep Network Time Protocol up – Goal 1k"]]></title><description><![CDATA[
<p>As someone working on an NTP implementation (specifically ntpd-rs) I have to add some context to this: I do believe that donating to the Network Time Foundation is fine, but it is not required to keep the Network Time Protocol up in any way.<p>Firstly, the most important reason the ntp.org domain name is so well known is the NTP pool, which is an entirely separate project (the Network Time Foundation calls it an associated project). The pool was allowed to use the `pool.ntp.org` domain name, but as far as I understand does not directly receive significant funding from the Network Time Foundation (I do not know the details of the domain name arrangement). The pool project was developed independently of the Network Time Foundation and is run by a different group of volunteers, mostly developed and maintained by Ask Bjørn Hansen, with its servers hosted entirely by (sometimes professional) volunteer operators. This is what many NTP implementations, in particular many Linux distributions, use as their standard source of time, and it does not appear to depend much on the Network Time Foundation for its continued existence.<p>Secondly, despite all the claims made on the Network Time Foundation site, the IETF took over development and maintenance of the NTP protocol under its NTP working group roughly two decades ago. This was done with the Network Time Foundation fully agreeing it was the way forward. Yet for some reason they still consider themselves exempt from the processes the IETF uses and see themselves as the true developers of the protocol. They constantly frustrate the IETF's processes, claiming that they should receive special treatment as the 'reference implementation'. 
Meanwhile, the IETF NTP WG has no concept of a reference implementation at all, and instead considers all NTP implementations equal.<p>Aside from this frustrating stance, the Network Time Foundation also hasn't done much to move the standard forward, instead relying on the status quo from the late 90s and early 2000s. Meanwhile the IETF NTP WG worked on standardizing a way to secure NTP traffic (regular NTP traffic is relatively easy to man-in-the-middle, and older implementations were even so predictable that faking responses didn't require reading the requests at all). That much more secure standard, NTS, was fully standardized in September of 2020, but the Network Time Foundation still has not implemented it. All of this has resulted in almost every Linux distribution that I know of replacing their ntpd implementation with NTPsec (with ntpd not even being available anymore as an alternative for installation).<p>Meanwhile people also started working on NTPv5, in order to remove some of the unsafe and badly defined parts of the standard and in general bring the spec back up to date. As part of this process, it was decided some time ago that, in contrast to the previous NTP standards, the algorithms specifying what a client should do to synchronize its clock would be removed from the standard (the algorithms in the previous standards were not being used by any implementation, not even the Network Time Foundation's own ntpd). NTPv5 instead focuses on the wire format of NTP packets and the simple interactions between parties. 
Yet despite there having been a consensus call on this, and despite no current implementation following the exact algorithm specified in NTPv4, the Network Time Foundation continues to frustrate the process by claiming that these algorithms are an essential part of the standard.<p>All of this frustration was also a large part of why the PTP protocol was eventually developed at the IEEE. That is to say: even though the operating mode of PTP is often quite different from that of NTP these days, the information that needs to be transferred is essentially the same, and the packets could trivially have been defined identically, had NTP built in a bit of additional flexibility a bit earlier. That would have helped NTP in the end too (hardware timestamping, for example, is currently only implemented for PTP, even though it would be just as useful for NTP), and PTP is now also aiming to introduce a simpler client-server model via CSPTP that looks a whole lot like what NTP has been offering all along with its most used operating mode.<p>It is my belief that the Network Time Foundation continues to push itself into a corner of ever greater irrelevance, even though that did not need to happen. The historical significance of David Mills' ntpd implementation is definitely there, and we should applaud the initial efforts and their focus on keeping the protocol open and widely available. And I do believe that the current people at the Network Time Foundation could still provide more than enough valuable input in the standardization process, but they can no longer claim to be the sole developers of the NTP protocol. Times have changed; there are now multiple implementations with an equally valid claim. Especially with GNSS (specifically GPS) coming under attack more and more these days, we need alternative ways of synchronizing computer clocks to a standard time in a secure way. 
NTP and NTS are perfectly positioned to take on that task, and we need to make sure that we keep the standard up to date for our evolving world.<p>Edit: if you want something else to donate to, I would consider donating to the IETF, NTPsec, or maybe donating some time to the NTP pool. I would also link to donations for Chrony (one of the other major NTP server implementations) but they do not appear to accept any. Linking to my own project's donation page does not seem fair considering the contents of this post.</p>
]]></description><pubDate>Wed, 12 Nov 2025 13:52:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45900184</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=45900184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45900184</guid></item><item><title><![CDATA[New comment by rnijveld in "Please donate to keep Network Time Protocol up – Goal 1k"]]></title><description><![CDATA[
<p>The NTP pool is actually independently run and funded, and has nothing to do with either the ntpd implementation or the Network Time Foundation, other than the foundation allowing the pool to use that DNS name.</p>
]]></description><pubDate>Wed, 12 Nov 2025 12:29:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45899334</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=45899334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45899334</guid></item><item><title><![CDATA[New comment by rnijveld in "Memory-safe sudo to become the default in Ubuntu"]]></title><description><![CDATA[
<p>The features we specifically don’t support are those related to direct LDAP support within sudo, such as loading a sudoers file directly from LDAP. Sudo-rs will use any user retrieved via NSS, such as when SSSD is configured to load LDAP users. On the authentication side you can use whatever PAM supports, so things like Kerberos, which again can be coupled with the same LDAP database.</p>
]]></description><pubDate>Tue, 06 May 2025 18:24:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43908125</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=43908125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43908125</guid></item><item><title><![CDATA[New comment by rnijveld in "Zlib-rs is faster than C"]]></title><description><![CDATA[
<p>I would argue compile time changes don't matter much: the amount of data going through zlib all across the world is so large that any performance gain should more than compensate for any additional compilation time (and zlib-rs compiles in a couple of seconds anyway on my laptop).<p>As for dependencies: zlib, zlib-ng and zlib-rs all obviously need some access to OS APIs for filesystem access if compiled with that functionality. At least for zlib-rs: if you provide an allocator and don't need any of the file IO, you can compile it without any dependencies (not even the standard library or libc, just a couple of core types are needed). zlib-rs does have some testing dependencies, but I think that is fair. All in all, they use almost exactly the same external dependencies (i.e. nothing aside from libc-like functionality).<p>zlib-rs is a bit bigger by default (around 400KB), due to some of the Rust machinery. But if you change some settings (e.g. panic=abort), use a nightly compiler (unfortunately still needed for the right flags) and add the right flags, both libraries are virtually the same size: zlib at about 119KB and zlib-rs at about 118KB.</p>
]]></description><pubDate>Mon, 17 Mar 2025 09:51:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43386695</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=43386695</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43386695</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>Ah right! I always forget about that, since we don’t implement the management protocol in ntpd-rs. I think it’s insane that that traffic goes over the same socket as the normal time messages. It’s something I don’t ever see us implementing.</p>
]]></description><pubDate>Tue, 25 Jun 2024 15:54:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=40790101</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40790101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40790101</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>Probably, but we still need to parse that string on the client side as well. If you’re willing to do the work I’m sure we would accept a pull request for it! There are just so many things to do and so little time, unfortunately. I think reducing our dependencies is a good thing, but the dependencies we use for JSON parsing/writing are used very commonly in Rust, and the way we use them hopefully prevents any major security issues, so I don’t think this should be a high priority for us right now compared to the many other things we could be doing.</p>
]]></description><pubDate>Tue, 25 Jun 2024 08:33:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785961</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785961</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785961</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>I also agree that there is too much churn in the Rust ecosystem and that we should try to slow things down in the coming years. ntpd-rs does this too: our MSRV is 1.70 right now (released over a year ago) and we test our code on CI against that version as well as the current stable release. And we go a little further: using the `direct-minimal-versions` flag (nightly only right now, unfortunately) we downgrade our dependencies to the minimal versions we've specified in our `Cargo.toml` and test against those, as well as against the latest versions in our `Cargo.lock`, which we update regularly. This allows us to at least partially verify that we still work with old versions of our dependencies, allowing upstream packagers to more easily match their packages against ours. Of course we should all update to newer versions whenever possible, but sometimes that is hard to do (especially for package maintainers in distributions such as Fedora and Debian, who have to juggle so many packages at the same time) and we shouldn't create unnecessary work when it's not needed. Hopefully this is our way of helping the ecosystem slow down a little and focus more on security and functionality, and less on redoing the same thing all over again every year because of some shiny new feature.</p>
]]></description><pubDate>Tue, 25 Jun 2024 08:07:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785796</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785796</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>I'm afraid this is a pretty common sentiment. NTS has been out for several years already and is supported by several implementations (including our ntpd-rs, as well as chrony and ntpsec). Yet its usage is low, and meanwhile fully unsecured, easily spoofable NTP remains the default, in effect allowing anyone to manipulate your clock almost trivially (see our blog post about this: <a href="https://tweedegolf.nl/en/blog/121/hacking-time" rel="nofollow">https://tweedegolf.nl/en/blog/121/hacking-time</a>). Hopefully we can get NTS to the masses more quickly in the coming years and slowly start to decrease our dependency on unsigned NTP traffic, just as we did with unencrypted HTTP traffic.</p>
]]></description><pubDate>Tue, 25 Jun 2024 07:49:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785655</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785655</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>Our project also includes a PTP implementation, statime (<a href="https://github.com/pendulum-project/statime/">https://github.com/pendulum-project/statime/</a>), that includes a Linux daemon. Our implementation should work as well as, or even better than, linuxptp, but it's still early days. One thing to note though is that NTP can be made just as precise (if not more precise), given the right access to hardware (unfortunately most hardware that does timestamping only does so for PTP packets). The reason for this precision is simple: NTP can use multiple sources of time, whereas PTP by design only uses a single source. This gives NTP more information and thus allows it to estimate the current time more precisely. The thing with relying purely on GNSS is that those signals can be (and in practice are) disrupted relatively easily. This is why time synchronization over the internet makes sense, even for large data centers. And doing secure time synchronization over the internet is only practically possible using NTP/NTS at this time. But there is no one-size-fits-all solution for time synchronization in general.</p>
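To make the multiple-sources argument concrete, here is a tiny illustration. This is not the ntpd-rs algorithm (which is documented in the repository); an inverse-variance weighted average is just the textbook way to combine independent estimates, and the numbers are made up:

```rust
/// Inverse-variance weighted combination of clock-offset estimates.
/// Each entry is (offset_seconds, variance). Combining independent
/// estimates shrinks the variance of the result, which is the core of
/// why multiple sources (NTP) can beat a single source (PTP).
fn combine(estimates: &[(f64, f64)]) -> (f64, f64) {
    let inv_var_sum: f64 = estimates.iter().map(|&(_, v)| 1.0 / v).sum();
    // Combined offset is the variance-weighted mean; combined variance
    // is the reciprocal of the summed precisions.
    let offset = estimates.iter().map(|&(o, v)| o / v).sum::<f64>() / inv_var_sum;
    (offset, 1.0 / inv_var_sum)
}

fn main() {
    // Three sources, each individually with 1e-6 s^2 of variance...
    let sources = [(0.0021, 1e-6), (0.0018, 1e-6), (0.0024, 1e-6)];
    let (offset, var) = combine(&sources);
    println!("offset = {offset:.4} s, variance = {var:.2e}");
    // ...combine into an estimate with a third of the variance.
    assert!(var < 1e-6 / 2.9);
}
```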
]]></description><pubDate>Tue, 25 Jun 2024 07:42:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785612</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785612</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785612</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>I do think that memory safety is important for any network service. The probability of something going horribly wrong when a network packet is parsed the wrong way is just too high. And NTP typically has more access to the host OS than other daemons, since it needs to adjust the system clock.<p>Of course, there are many other services that could be made memory safe, and maybe there is some right or smart order in which we should make our core network infrastructure memory safe. But everyone has their own priorities here, and I feel like this could end up being an endless debate of whataboutism. There is no right place to start, other than to just start.<p>Aside from memory safety, I feel like our implementation has a strong focus on security in general. We try to make choices that make our implementation more robust than what was out there previously. Beyond that, I think the NTP space has had an undersupply of implementations, with only a few major open source ones (like ntpd, ntpsec and chrony). Meanwhile, NTP is one of those pieces of technology at the core of many of the things we do on the modern internet: without knowledge of the current time, your TLS connection could never be trusted. I think NTP definitely deserves this attention and could use a bunch more.</p>
]]></description><pubDate>Tue, 25 Jun 2024 07:31:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785549</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785549</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>I agree that amplification and reflection definitely are worries, which is why we are working towards NTS becoming a default on the internet. NTS prevents a server from responding to a spoofed packet, and at the same time lets NTP clients finally start trusting their time instead of hoping that there are no malicious actors anywhere near them. You can read about it on our blog as well: <a href="https://tweedegolf.nl/en/blog/122/a-safe-internet-requires-secure-time" rel="nofollow">https://tweedegolf.nl/en/blog/122/a-safe-internet-requires-s...</a><p>One thing to note about amplification: it has always been something NTP developers are especially sensitive to. I would say that protocols like QUIC and DNS carry far greater amplification risks. Meanwhile, our server implementation ensures that responses can never be bigger than the requests that initiated them, meaning that no amplification is possible at all. Even if we had allowed bigger responses, I cannot imagine NTP responses being much more than two or three times the size of their related request, whereas I've seen numbers for DNS all the way up to 180 times the request payload.<p>As for your worries: I think being a little cautious keeps you alert and can prevent mistakes, but I also feel that we've gone out of our way to not do anything crazy, and hopefully we will be a net positive in the end. I hope you do give us a try and let us know if you find anything suspicious. If you have any feedback we'd love to hear it!</p>
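The responses-never-bigger-than-requests rule can be sketched in a few lines. This is an illustration of the invariant, not the actual ntpd-rs server code:

```rust
/// Anti-amplification invariant: a reply must never exceed the size of
/// the request that triggered it, so the amplification factor is <= 1.
/// A sketch of the rule, not the real ntpd-rs implementation.
fn clamp_response(request: &[u8], response: Vec<u8>) -> Option<Vec<u8>> {
    if response.len() <= request.len() {
        Some(response) // safe to send back to the (possibly spoofed) source
    } else {
        None // dropping the reply beats becoming a DDoS reflector
    }
}

fn main() {
    let request = vec![0u8; 48]; // a plain NTPv4 request is 48 bytes
    assert!(clamp_response(&request, vec![0u8; 48]).is_some());
    assert!(clamp_response(&request, vec![0u8; 480]).is_none());
}
```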
]]></description><pubDate>Tue, 25 Jun 2024 07:15:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785438</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785438</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785438</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>I don’t think our dependency tree is perfect, but I think our dependencies are reasonable overall. We use JSON for transferring metrics data from our NTP daemon to our prometheus metrics daemon. We made this split for security reasons: why have all the attack surface of an HTTP server in your NTP daemon? That didn’t make sense to us. So we added a read-only unix socket to our NTP daemon that, on connect, dumps a JSON blob and then closes the connection (i.e. doing as little as possible), which is then usable by our client tool and by our prometheus metrics daemon. That data transfer happens to use JSON, but could have used any format. We’d be happy to accept pull requests to replace it with something else, but given budget and time constraints, I think what we came up with is pretty reasonable.</p>
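The dump-and-hang-up design can be sketched with the standard library alone. The socket path and the metric field names below are invented for the sketch; they are not ntpd-rs's actual observability format:

```rust
use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::thread;

/// Hypothetical metrics snapshot serialized by hand; the real daemon's
/// payload (and its use of a JSON library) will differ.
fn metrics_json(uptime_secs: u64, peers: usize) -> String {
    format!("{{\"uptime_secs\":{uptime_secs},\"peers\":{peers}}}")
}

fn main() -> std::io::Result<()> {
    let path = "/tmp/metrics-demo.sock";
    let _ = std::fs::remove_file(path);
    let listener = UnixListener::bind(path)?;

    // Serve exactly one connection in the background for this demo; the
    // real daemon would loop over listener.incoming() forever.
    let server = thread::spawn(move || -> std::io::Result<()> {
        let (mut stream, _) = listener.accept()?;
        stream.write_all(metrics_json(3600, 4).as_bytes())?;
        Ok(()) // dropping the stream closes the connection: do as little as possible
    });

    // A client (the metrics daemon or CLI tool) connects, reads the blob, done.
    let mut client = UnixStream::connect(path)?;
    let mut blob = String::new();
    client.read_to_string(&mut blob)?; // returns at EOF when the server hangs up
    server.join().unwrap()?;
    assert_eq!(blob, "{\"uptime_secs\":3600,\"peers\":4}");
    println!("{blob}");
    Ok(())
}
```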
]]></description><pubDate>Tue, 25 Jun 2024 06:35:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785191</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785191</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>I would encourage you to take a look at some of our testing data and an explanation of our algorithm in our repository (<a href="https://github.com/pendulum-project/ntpd-rs/tree/main/docs/algorithm">https://github.com/pendulum-project/ntpd-rs/tree/main/docs/a...</a>). I think we are very much within spitting distance of Chrony in terms of synchronization performance, sometimes even beating it. But we’d love for more people to try our algorithm in their infrastructure and report back. The more data the better.</p>
]]></description><pubDate>Tue, 25 Jun 2024 06:20:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785088</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785088</guid></item><item><title><![CDATA[New comment by rnijveld in "More Memory Safety for Let's Encrypt: Deploying ntpd-rs"]]></title><description><![CDATA[
<p>In our internal testing we are very close to Chrony in synchronization performance; some of our testing data and an explanation of our algorithm are published in our repository: <a href="https://github.com/pendulum-project/ntpd-rs/tree/main/docs/algorithm">https://github.com/pendulum-project/ntpd-rs/tree/main/docs/a...</a><p>Given the amount of testing we (and other parties) have done, and given the strong theoretical foundation of our algorithm, I’m pretty confident we’d do well in many production environments. If you do find any performance issues, we’d love to hear about them!</p>
]]></description><pubDate>Tue, 25 Jun 2024 06:16:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=40785066</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=40785066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40785066</guid></item><item><title><![CDATA[New comment by rnijveld in "Sudo-rs dependencies: when less is better"]]></title><description><![CDATA[
<p>This only works on Linux, of course, since there the kernel and libc are not tightly coupled; on any other OS, dynamically linking against libc is a necessity. Also, I've never seen anyone statically link against glibc; is that even something people do? I'd consider your system-provided libc being broken a situation similar to an unbootable kernel: you just need a rescue stick/partition to fix it, or reinstall your OS.</p>
]]></description><pubDate>Thu, 28 Mar 2024 15:53:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=39853049</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=39853049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39853049</guid></item><item><title><![CDATA[New comment by rnijveld in "Sudo-rs dependencies: when less is better"]]></title><description><![CDATA[
<p>I'd like to respond to a few things. I think using dependency count as a metric is a bad idea; that metric could easily be lowered by just copying all the code over to your project. As you rightfully say, the logic has to exist one way or another. That definitely wasn't our approach though: it was and is never our goal to have no dependencies, but we do think that dependencies should be part of the safety story, i.e. is a dependency better or worse than what you would write yourself for your specific use case. For something like sudo this needle tips much more quickly towards writing it yourself, but the considerations you make should stay the same, even if the decision ends up being different. Considerations such as: is the communication with the dependency's team worth it for the amount of code we save, are their goals aligned with ours, is the number of transitive dependencies this dependency pulls into my codebase small enough, how much code am I actually saving, would I even be able to do this myself, could I help the wider community by contributing back to that dependency, etc. I do feel that right now, more often than not, dependencies are just bolted onto a project as needed, and no consideration is given to the burden such a dependency might carry. Aside from that, I think a much better metric would be something like the number of 'teams', 'groups' or 'projects' needed to keep your project working. Still not perfect, nor is any other metric, but sometimes it helps to quantify things.<p>Some responses to your notes:<p>- The trouble is that we had to re-implement an existing CLI, and as you might expect with something that evolved over a period of some 30 years, there are quite a few weird behaviors in sudo. We initially had a mostly working implementation based on clap, but could not get some parts of the CLI to parse nicely, i.e. 
the code just looked hacky and had to do all kinds of post-processing to complete the parsing of the CLI, resulting in lots of additional code. Maybe we should have looked at something like lexopt, but we just went ahead and did it ourselves initially, just to see how it would go; we kind of liked the result and never looked at any alternative implementations. I do believe we looked at clap alternatives for a little while to see if something would make our parsing a little easier, but lexopt didn't surface at that time for whatever reason. We're not perfect either. I do think our parsing is pretty decent though.<p>- We did think about contributing back, but in the end we wanted a little more control over where the password (or more precisely, 'hidden input') was stored in memory, and needed some specific parts for handling TTYs (given our setuid context), resulting in us quickly deconstructing rpassword until almost nothing of it was left. I think it's a little hard to contribute those things back, but as a side project I'd love to contribute some of the changes we made back to rpassword; there just wasn't the time at that point, as it would be quite a bit of work.<p>- Glob is a hard one, as the Rust crate is not entirely compatible with how the original sudo works. But the logic has to be there one way or another, and if we have to decide between libc (i.e. probably C code) and Rust, we'd prefer to go with Rust of course. That has already resulted in an issue being opened for incompatibilities, but it's a hard one: I'd prefer to keep the Rust code, so I hope that whoever maintains glob at least agrees that it should be as compatible as possible. But I can't expect, and don't expect, their team to have the same priorities, and thus we are back at one of the reasons why a dependency might not always be worth it. There are always choices to be made. 
For now though, we'll keep the Rust crate dependency, as it works well enough!<p>- Thiserror is great for prototyping, but loses its value quickly once you know what kind of errors you have; it just takes a few lines of extra code. But, thinking about teams etc.: given that it is not that big, and is created and maintained by dtolnay, whose code you probably already use in multiple ways in nearly any other project, it's not the worst either. For sudo-rs though, I still think removing it was the better choice.<p>- All the sudo-* packages were mostly removed because we didn't want to expose a public API for all that internal stuff. Our initial goal is to get sudo the CLI application working, not to provide all the building blocks while the API is still in flux. We initially put it all in separate crates because of compilation time worries, but in the end those worries were unfounded. It's one of those things where Rust is still somewhat limited: we can't specify these sorts of semi-private dependencies in the crates ecosystem right now; if we had been able to specify 'nobody but us can use these as a dependency' they would probably have stayed separate crates.<p>BTW: I'd like to thank you for continuing to work on clap! There might have been a time I would have been a little worried about all the breaking changes and churn happening, but since that has stabilized I couldn't be happier! I don't think there's anyone on the sudo-rs team that has anything against clap, and I did not want to single out clap in our post specifically, so I hope you don't consider it an attack against clap. Personally, I use clap in basically every other project with a CLI.
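For the curious: a hand-rolled argument loop in the spirit described above can be quite small. This is a sketch with a tiny invented subset of options, not sudo-rs's real parser or its real CLI surface:

```rust
/// Options for a hypothetical sudo-like tool; the flags below are
/// illustrative, not the actual sudo-rs option set.
#[derive(Debug, Default, PartialEq)]
struct Options {
    user: Option<String>,
    login_shell: bool,
    command: Vec<String>,
}

fn parse<I: IntoIterator<Item = String>>(args: I) -> Result<Options, String> {
    let mut opts = Options::default();
    let mut args = args.into_iter();
    while let Some(arg) = args.next() {
        match arg.as_str() {
            "-u" | "--user" => {
                opts.user = Some(args.next().ok_or("-u requires an argument")?);
            }
            "-i" => opts.login_shell = true,
            // The first non-flag argument starts the command; everything
            // after it is passed through untouched, quirks and all.
            _ => {
                opts.command.push(arg);
                opts.command.extend(args);
                break;
            }
        }
    }
    Ok(opts)
}

fn main() {
    let opts = parse(["-u", "root", "ls", "-l"].map(String::from)).unwrap();
    assert_eq!(opts.user.as_deref(), Some("root"));
    assert_eq!(opts.command, ["ls", "-l"]);
}
```

The appeal of this style is that every one of sudo's 30 years of quirks gets an explicit, visible home in the match, instead of being fought through a framework's abstractions.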
]]></description><pubDate>Thu, 28 Mar 2024 15:47:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=39852939</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=39852939</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39852939</guid></item><item><title><![CDATA[New comment by rnijveld in "The first stable release of a memory safe sudo implementation"]]></title><description><![CDATA[
<p>I can only thank you for the work you've done in creating sudo; I think it's an invaluable tool in the day-to-day work of so many people. As someone working on sudo-rs, our goal in creating it was never to invalidate any of the work previously done, and we are very much aware that our implementation will not be bug free, especially not at the start.<p>For me personally, creating this Rust version allowed me to work on something I would normally not be able to work on, given how I would not rate my confidence in writing relatively safe C code very high. If nothing else, we already found a few bugs in the original sudo because of this work. Despite the 43 years of bugfixing, such a piece of software is unlikely to ever be free of bugs, if only because of its changing surroundings.<p>Other than that, having some alternatives can never hurt, as long as we keep cooperating and trying to learn from each other's work (and from each other's mistakes).</p>
]]></description><pubDate>Mon, 06 Nov 2023 18:43:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=38166848</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=38166848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38166848</guid></item><item><title><![CDATA[New comment by rnijveld in "Passkeys are now enabled by default for Google users"]]></title><description><![CDATA[
<p>They don’t have to be, though; a YubiKey can be used as a passkey as well.</p>
]]></description><pubDate>Tue, 10 Oct 2023 15:19:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=37833196</link><dc:creator>rnijveld</dc:creator><comments>https://news.ycombinator.com/item?id=37833196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37833196</guid></item></channel></rss>