<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: davidstrauss</title><link>https://news.ycombinator.com/user?id=davidstrauss</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 05:29:51 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=davidstrauss" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by davidstrauss in "Lennart Poettering, Christian Brauner founded a new company"]]></title><description><![CDATA[
<p>Hi, I'm David, founding product lead.<p>Our entire team will be at FOSDEM, and we'd be thrilled to meet more of the Mullvad team. Protecting systems like yours is core to our mission. We want to understand how to put the right roots of trust and observability into your hands.<p>Edit: I've reached out privately by email for next steps, as you requested.</p>
]]></description><pubDate>Tue, 27 Jan 2026 20:13:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46785809</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=46785809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46785809</guid></item><item><title><![CDATA[New comment by davidstrauss in "Busybox removes support for systemd"]]></title><description><![CDATA[
<p>> because upstart could grok the scripts without change<p>I don't believe that's the case. The "service" command (which is part of the sysvinit-utils package, not Upstart) invokes either Upstart or SysV init as necessary, but Upstart itself has no awareness of the SysV init world. You couldn't have an Upstart service depend on a SysV init one, list SysV service status with Upstart, or enable a SysV service through Upstart.<p>In case you're curious, the wrapping on the systemd side is more comprehensive. SysV scripts appear as units, and systemd parses the semi-standard metadata at the top of most SysV init files to determine when the service should start if enabled (translating from the traditional run levels). As units, SysV init scripts can be enabled/disabled, started/stopped, and listed using the standard systemd commands. They can also participate in dependency graphs alongside systemd units.</p>
]]></description><pubDate>Tue, 03 Nov 2015 01:34:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=10496771</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=10496771</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10496771</guid></item><item><title><![CDATA[New comment by davidstrauss in "Busybox removes support for systemd"]]></title><description><![CDATA[
<p>The closest I could find in the docs to what digi_owl said is the following:<p>> Internally, these functions send a single datagram with the state string as payload to the AF_UNIX socket referenced in the $NOTIFY_SOCKET environment variable. If the first character of $NOTIFY_SOCKET is "@", the string is understood as Linux abstract namespace socket. The datagram is accompanied by the process credentials of the sending service, using SCM_CREDENTIALS.<p>I can see how someone would be reluctant to rely on that, even given the interface promise and the nudging of the systemd developers. To be more consistent with its status as a stable, public interface, and with the admonition to avoid internals, I would probably drop the word "internally."<p>Indeed, I've created a pull request:
<a href="https://github.com/systemd/systemd/pull/1759" rel="nofollow">https://github.com/systemd/systemd/pull/1759</a></p>
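For illustration, the quoted mechanism is simple enough that a sketch fits in a few lines of Python without libsystemd at all. This is my own code, not systemd's sd_notify() implementation, but it sends the datagram exactly as the documentation describes:

```python
import os
import socket

def sd_notify(state: str) -> bool:
    """Send a state string (e.g. "READY=1") as a single datagram to the
    socket named by $NOTIFY_SOCKET. Returns False when not running under
    a notify-aware service manager."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    # A leading "@" denotes a Linux abstract-namespace socket; its address
    # is the same string with the first byte replaced by NUL.
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        # SCM_CREDENTIALS is attached by the kernel when the receiver
        # enables SO_PASSCRED; the sender needs no special handling.
        sock.sendto(state.encode(), addr)
    return True
```

A daemon calls sd_notify("READY=1") once initialization completes; a Type=notify unit waits for exactly that datagram.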
]]></description><pubDate>Tue, 03 Nov 2015 01:21:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=10496726</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=10496726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10496726</guid></item><item><title><![CDATA[New comment by davidstrauss in "Busybox removes support for systemd"]]></title><description><![CDATA[
<p>> it's not exactly a good choice for embedded systems.<p>I don't develop embedded systems, but systemd is actually popular in that space because of its watchdog capabilities and its inclusion in projects like GenIVI and Tizen.<p>There's more detail in this post from an embedded systems developer:
<a href="http://www.phoronix.com/forums/forum/phoronix/latest-phoronix-articles/832368-busybox-drops-systemd-support?p=832615#post832615" rel="nofollow">http://www.phoronix.com/forums/forum/phoronix/latest-phoroni...</a></p>
]]></description><pubDate>Mon, 02 Nov 2015 21:47:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=10495524</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=10495524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10495524</guid></item><item><title><![CDATA[New comment by davidstrauss in "Busybox removes support for systemd"]]></title><description><![CDATA[
<p>Directly talking to the notify socket is not considered using systemd internals. It is documented as a stable, public interface:<p><a href="https://wiki.freedesktop.org/www/Software/systemd/InterfaceStabilityPromise/" rel="nofollow">https://wiki.freedesktop.org/www/Software/systemd/InterfaceS...</a><p>Socket activation doesn't have any systemd-based interface. You just get a file descriptor passed in the normal Unix way. The systemd library functions related to socket activation are utility functions for examining the inherited socket, but they are just wrappers for any other way you might do so.<p>You can configure daemons like nginx or PHP-FPM to use sockets inherited from systemd instead of their own, and it works fine. They don't have any specific support for systemd socket activation, nor do they need to. They can't even tell the difference between the systemd sockets and ones they'd get on a configuration reload.</p>
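To make "the normal Unix way" concrete, here is roughly what the receiving side does -- a hypothetical sketch equivalent in spirit to sd_listen_fds(3), using nothing but environment variables and inherited file descriptors (the variable names and fd convention are the real protocol; the function is mine):

```python
import os

SD_LISTEN_FDS_START = 3  # first inherited fd, right after stdin/stdout/stderr

def listen_fds() -> list:
    """Return the file descriptors passed in by the service manager.
    Sketch of the socket-activation convention: plain environment
    variables, no systemd library involved."""
    # LISTEN_PID guards against the variables leaking into child
    # processes that were not the intended recipient.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", "0"))
    # Passed sockets always start at fd 3 and are numbered contiguously.
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + count))
```

Each returned integer is an already-bound (and, for stream sockets, already-listening) socket the daemon can accept() on directly, which is why nginx and PHP-FPM need no systemd-specific code.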
]]></description><pubDate>Mon, 02 Nov 2015 21:43:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=10495497</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=10495497</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10495497</guid></item><item><title><![CDATA[New comment by davidstrauss in "The Systemd Chronicles"]]></title><description><![CDATA[
<p>The entire BSD kernel and init system are all in one repository. Are those developers "bringing on" confusion by doing that?</p>
]]></description><pubDate>Mon, 03 Aug 2015 17:43:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=9998147</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=9998147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9998147</guid></item><item><title><![CDATA[New comment by davidstrauss in "The Systemd Chronicles"]]></title><description><![CDATA[
<p>> The fact that Lennart was giving out presentations about nspawn specifically makes me believe it's very much intended to be used in production, as a "chroot on steroids".<p>This is a recent development -- and why I put the caveat "for now." systemd-nspawn is probably going to be marked production-ready quite soon, especially because it's the foundation for the CoreOS Rocket container tool.</p>
]]></description><pubDate>Fri, 31 Jul 2015 20:12:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=9984131</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=9984131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9984131</guid></item><item><title><![CDATA[New comment by davidstrauss in "The Systemd Chronicles"]]></title><description><![CDATA[
<p>I don't think Red Hat was the driving force behind systemd's container and VM integration.<p>systemd-nspawn was created to provide rapid testing of systemd, and it was (and, for now, still is) marked as an experimental utility that isn't for production purposes.<p>If you look at many of the early blog posts mentioning systemd and containers/VMs, you'll actually see my work and my company (Pantheon) mentioned more than Red Hat. In more recent posts, you'll see lots of CoreOS influence. Project Atomic isn't working with systemd in any direct, significant way; you won't see much coordination other than in some shared philosophy about a stateless base OS, and even that is more attributable to CoreOS's influence than Red Hat's.<p>I think the influence of Red Hat on systemd is overstated. Of the two perspectives: (1) Red Hat driving everything and (2) Lennart answering mostly to himself, the latter is way more accurate, for better or worse (depending on your opinion of him).</p>
]]></description><pubDate>Fri, 31 Jul 2015 19:09:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=9983690</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=9983690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9983690</guid></item><item><title><![CDATA[New comment by davidstrauss in "The Systemd Chronicles"]]></title><description><![CDATA[
<p>> Might wish to check your history a bit.<p>No matter how many complaints you may find about SysV boot time -- or even discussion about how to improve it -- that does not make solving that problem systemd's primary purpose.<p>How many discussions does OpenStack's development team have about security? Is the primary purpose of OpenStack to provide security?<p>> Well, it is no longer optional to compile in systemd; it can still be deactivated at run time, but the code (and checks to see which one should be used) will always be there.<p>But that is not what the article said. It said, "Systemd made kdbus non-optional in its release." That implies that use of it is non-optional, which isn't the case at all. It is like saying Fedora has made Intel graphics non-optional because they ship support for it.</p>
]]></description><pubDate>Fri, 31 Jul 2015 18:04:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=9983272</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=9983272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9983272</guid></item><item><title><![CDATA[New comment by davidstrauss in "The Systemd Chronicles"]]></title><description><![CDATA[
<p>> Why assume the user is too stupid or lazy to manually invoke vim and then systemctl daemon-reload?<p>This is what it does:<p>(1) Locates the current unit file, regardless of whether it shipped with a package or is already a custom one in /etc.
(2) If there isn't one in /etc, it copies the current one into the correct place.
(3) It opens $EDITOR on that file.
(4) It runs systemctl daemon-reload.<p>It's really the first two steps that can be annoying because you'd otherwise have to run "systemctl status" to find where it is currently and then copy it over. I guess you could script that, but is it really so terrible to support that in systemctl -- which is just a normal user CLI utility, not anything with advanced privileges or critical impacts on system stability?<p>Edit: Punctuation</p>
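Those four steps are easy enough to approximate by hand, which is all systemctl edit really saves you. A rough, hypothetical sketch (the systemctl invocations and the /etc/systemd/system path are real conventions; the helper names, and the absence of error handling, are mine):

```python
import os
import shutil
import subprocess

ETC_UNIT_DIR = "/etc/systemd/system"  # the admin-owned unit tree

def override_path(unit: str) -> str:
    """Where the editable copy of a unit file belongs."""
    return os.path.join(ETC_UNIT_DIR, unit)

def edit_unit(unit: str) -> None:
    """By-hand approximation of the four steps described above."""
    # (1) Locate the current unit file, wherever it shipped.
    out = subprocess.check_output(
        ["systemctl", "show", "-p", "FragmentPath", unit], text=True
    )
    src = out.strip().split("=", 1)[1]  # "FragmentPath=/usr/lib/..." -> path
    # (2) Copy it into /etc if the active copy isn't already there.
    dest = override_path(unit)
    if src != dest:
        shutil.copy(src, dest)
    # (3) Open $EDITOR on the copy.
    subprocess.call([os.environ.get("EDITOR", "vi"), dest])
    # (4) Reload unit definitions.
    subprocess.call(["systemctl", "daemon-reload"])
```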
]]></description><pubDate>Fri, 31 Jul 2015 17:58:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=9983220</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=9983220</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9983220</guid></item><item><title><![CDATA[New comment by davidstrauss in "The Systemd Chronicles"]]></title><description><![CDATA[
<p>If that were the case, wouldn't the author support the kdbus work? It seems like they're not a fan of that, either, given the (misleading) complaints about systemd's support for kdbus.<p>Edit: phrasing</p>
]]></description><pubDate>Fri, 31 Jul 2015 17:53:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=9983181</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=9983181</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9983181</guid></item><item><title><![CDATA[New comment by davidstrauss in "The Systemd Chronicles"]]></title><description><![CDATA[
<p>> Honest question: Why does an init system need to know anything about screen brightness in the first place? Shouldn't X11 handle screen brightness?<p>I think that's a reasonable question. I am only a regular desktop user of systemd for anything with a display, so I don't have a strong opinion there. All of my advanced systemd work is on server systems; I have more opinions there.<p>> This is, I think, about the fact that systemctl edit is even a thing that exists. What's the problem with ed, vim, nano, pico, emacs, etc. that necessitates some kind of built-in systemd editor?<p>There isn't a built-in systemd editor; that's how disingenuous this piece is. Running "systemctl edit <unit-name>" invokes $EDITOR, whatever that is configured to be. Totally normal Unix behavior here.</p>
]]></description><pubDate>Fri, 31 Jul 2015 17:26:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=9982983</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=9982983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9982983</guid></item><item><title><![CDATA[New comment by davidstrauss in "The Systemd Chronicles"]]></title><description><![CDATA[
<p>As a systemd committer, I certainly can. I don't have time for all of them, so I'll pull the first couple and a few other egregious ones.<p>> Systemd was introduced to decrease the boot up time. Now that they do not understand all implications and dependencies, let us add some artifical time we found out might work for the developers laptops. More on this small world hypothesis of the systemd devleopers below.<p>systemd was not introduced to decrease boot time; it was introduced to properly manage dependencies and parallelism. Such an approach happens to massively improve boot times in many cases, but that's a side-effect. The delay introduced is specifically to account for the slow/unreliable initialization of certain docking station hardware that has no other known reliable method for detection. (This is what happens in Linux with certain reverse-engineered hardware.) Importantly, this delay doesn't impact boot time; it only delays when the system is allowed to sleep, so even the (made up) point about systemd being about boot times isn't affected here.<p>> Screen brightness is something that should crash your boot up when it is not working.<p>The TODO item is about avoiding restoration of screen brightness at boot to such a low level that some laptops consider it to be a "backlight off" state. Someone may have shut a laptop down (even automatically) with the backlight off, but we think it should probably turn back on at the next boot. Absolutely nothing to do with "crashing" bootup.<p>> Systemd made kdbus non-optional in its release.<p>Totally made up. systemd's DBus library provides equivalent support for usermode DBus and kdbus.<p>> This one is a setback. Why is there no default editor in systemd in case of factory reset?<p>I'm not sure what this is even claiming. Is it some sort of advance critique, trolling about complexity the author thinks systemd will eventually add?<p>In general, the piece portrays actual issues disingenuously, to the level of clickbait, and fails to understand that not everything in systemd's git repository runs as part of PID 1 (the network management tools, for example, are a totally separate, optional daemon).</p>
]]></description><pubDate>Fri, 31 Jul 2015 17:08:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=9982903</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=9982903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9982903</guid></item><item><title><![CDATA[New comment by davidstrauss in "Containers, Not Virtual Machines, Are the Future Cloud"]]></title><description><![CDATA[
<p>Especially once you start using hardware RAID controllers, the physical overhead of running I/O commands is pretty abstracted away from the kernel.</p>
]]></description><pubDate>Wed, 19 Jun 2013 08:25:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=5904376</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=5904376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=5904376</guid></item><item><title><![CDATA[New comment by davidstrauss in "Containers, Not Virtual Machines, Are the Future Cloud"]]></title><description><![CDATA[
<p>> However, using xen or kvm solves that problem, by giving each guest their own ram that nobody else can fuck with.<p>It gets hard to say whether a shared page cache is a <i>good thing</i> or not, even though it may be unfair. I say this because I/O bandwidth getting exhausted is a huge issue.<p>For example, using cgroups to limit memory allocations for groups of processes seemed like a great way to fairly distribute memory. But, doing so forced such cgroups into swapping when they tried to exceed their limits, even when there was available memory on the host system. The swapping was so bad in terms of saturating disk I/O that we had two choices to maintain quality of service: (1) set the hard limit and OOM kill (or equivalent) within the cgroup when it gets exceeded or (2) not treat it as a hard limit and monitor usage separately. We chose the latter.<p>So, I honestly wonder, is it better to enforce separate page caches in the name of fairness, even if it results in less efficient disk I/O? Or is it better to have a unified page cache and dedicate system-wide resources to increasing effective disk I/O bandwidth? (Do we focus on slicing the pie more fairly or increasing the size of the pie while cutting sloppily?)</p>
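For concreteness, the hard limit in option (1) corresponds to writing a byte count into the cgroup v1 memory controller. A hypothetical sketch, assuming the standard v1 mount layout (the file paths are the real kernel interface; the helper names are mine):

```python
import os

CGROUP_MEMORY_ROOT = "/sys/fs/cgroup/memory"  # standard cgroup v1 mount

def limit_file(cgroup: str) -> str:
    """Path of the hard memory-limit knob for a v1 memory cgroup."""
    return os.path.join(CGROUP_MEMORY_ROOT, cgroup, "memory.limit_in_bytes")

def set_memory_limit(cgroup: str, limit_bytes: int) -> None:
    # Once the group's pages hit this limit, further allocations push it
    # into reclaim and swap -- the I/O-saturating behavior described
    # above -- rather than failing outright.
    with open(limit_file(cgroup), "w") as f:
        f.write(str(limit_bytes))
```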
]]></description><pubDate>Wed, 19 Jun 2013 08:21:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=5904361</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=5904361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=5904361</guid></item><item><title><![CDATA[New comment by davidstrauss in "Containers, Not Virtual Machines, Are the Future Cloud"]]></title><description><![CDATA[
<p>Just a note as the author of the Linux Journal article, I absolutely would have mentioned Docker if I had written the article now. Unfortunately, the article only recently made it to publication (and now post-paywall) despite my having written it in January.</p>
]]></description><pubDate>Wed, 19 Jun 2013 03:13:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=5903553</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=5903553</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=5903553</guid></item><item><title><![CDATA[New comment by davidstrauss in "Containers, Not Virtual Machines, Are the Future Cloud"]]></title><description><![CDATA[
<p>I just timed a spin-up of a 16GB Fedora 18 instance in the current-generation Rackspace Cloud's DFW data center. It took 7 minutes and 10 seconds to complete. Launching a somewhat larger-in-RAM instance on EC2 in Oregon took over 10 minutes; I stopped timing. In my experience running an infrastructure that's gone through thousands of cloud VM launches, those numbers are typical.</p>
]]></description><pubDate>Wed, 19 Jun 2013 02:43:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=5903453</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=5903453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=5903453</guid></item><item><title><![CDATA[New comment by davidstrauss in "Containers, Not Virtual Machines, Are the Future Cloud"]]></title><description><![CDATA[
<p>> I am intimately familiar with eWLM and I think it's quite unfair to call it containers.<p>I feel like you're strawmanning me, here. I specifically avoided calling WLM "containers." I said they're building blocks that, combined with MAC-style service isolation (which also existed on 1980s mainframes), allow containerized isolation of consolidated services on a single OS image. It also wouldn't be fair to directly call cgroups "containers," though they provide similar building blocks for what I'm sure we agree are "containers" once configured by a tool like LXC.<p>I compared WLM to cgroups because both track utilization over a window of time and manage scheduling in a way that avoids starvation while enforcing other priority or fairness goals. The documented resource limitation and contention-sharing configurations for WLM [1] are undeniably similar to cgroups.<p>> I'll give you a simple example. Do dd if=/dev/sda of=foo.img (or even /dev/null) in one container and measure I/O performance in the other.<p>Linux kernel cgroups have supported block I/O limits and weights for fair sharing under contention since 2008. I'll give you the benefit of the doubt here, though, and assume you're actually trying to provide an example of breaking the page cache because you mention that immediately afterward.<p>> The page cache is a global resource and so far there is not way to isolate it within containers. Buffered I/O is pretty fundamental to all workloads.<p>Shared page caching among containers is also one of the primary efficiency gains when multiple containers access the same on-disk assets. All along the spectrum, there are trade-offs between optimization via sharing (which risks decreased predictability) versus isolated resources (which necessitate efficiency losses through redundancy). 
I don't suggest containers are special in this regard.<p>Regardless, this same issue exists for virtualization, especially once you get down to areas like the caches on RAID controllers or other buffering done on the host machine. Isolating resources and security always has overhead.<p>> cgroups aren't free. See <a href="https://www.berrange.com/posts/2013/05/13/a-new-configurable.." rel="nofollow">https://www.berrange.com/posts/2013/05/13/a-new-configurable...</a>.<p>Of course they're not free, but your suggestion that they're heavier weight than hypervisors running kernels is both counter-intuitive and unexplained/undocumented in your post.<p>> I already cited a widely published benchmark (SpecVIRT).<p>There wasn't anything you "cited" about the benchmark other than its existence, because you didn't provide any numbers resulting from the benchmark to support your arguments.<p>So, your post is at the stage of hypothesis: you have a prediction and a way to falsify it. Please don't pretend the results have already come out in your favor.<p>> You article cites made up numbers (5-10 minutes to provision a guest).<p>The numbers in the article are based on wall time for provisioning servers from public images on the Rackspace Cloud and EC2 using the API. Your qemu-img example is contrived because real-world systems don't keep local images sitting around on the host machines to provision new instances, disallowing the twin advantages of local, high-bandwidth I/O and copy-on-write.<p>In contrast, container creation that shares existing host machine libraries and binaries is very fast, including in real-world deployments (like PaaS providers).<p>> Note that I'm not normally one to bad talk anything. Containers are fine for that sort of thing. But you're claiming that containers obsolete virtualization and that's just plain silly.<p>But you are strawmanning me, again. I never said "containers obsolete virtualization." 
I said that containers will be "the future of the cloud," meaning that they should come to dominate given how well they're starting to fit most cloud users' needs. That doesn't imply obsolescence any more than saying "ATMs are the future of consumer banking" implies that tellers are obsolete and have no role in any sort of transaction.<p>> I expect better from you :-)<p>Well, I'd hope so, given our time in college together. I <i>just</i> realized this, by the way. :-)<p>[1] <a href="http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%2Fcom.ibm.aix.baseadmn%2Fdoc%2Fbaseadmndita%2Fht_cnfgwlm.htm" rel="nofollow">http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%...</a></p>
]]></description><pubDate>Wed, 19 Jun 2013 01:52:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=5903308</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=5903308</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=5903308</guid></item><item><title><![CDATA[New comment by davidstrauss in "Containers, Not Virtual Machines, Are the Future Cloud"]]></title><description><![CDATA[
<p>Yes: "All the buffered writes are still system wide and not per group. Hence we will not see service differentiation between buffered writes between groups." [1]<p>[1] <a href="https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt" rel="nofollow">https://www.kernel.org/doc/Documentation/cgroups/blkio-contr...</a></p>
]]></description><pubDate>Tue, 18 Jun 2013 23:35:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=5902739</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=5902739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=5902739</guid></item><item><title><![CDATA[New comment by davidstrauss in "Containers, Not Virtual Machines, Are the Future Cloud"]]></title><description><![CDATA[
<p>RCTL can enforce specific limits, which is good if you want to divide resources such that there can't be (or is unlikely to be) contention.<p>cgroups offer hard limits for some things, like memory, but they mostly opt for a model using "shares" that determine the fractional access to resources versus other cgroups holding shares against the same resource.<p>For example, assume there's CPU contention. cgroup A has 10 CPU shares and cgroup B has 90. Processes in cgroup B will get 90% of the CPU time, but B will not starve cgroup A, because A will still get its 10%.<p>This shares-based model also has a major effect when there isn't contention. Shares-based resources are burstable. Even cgroup A (with 10 CPU shares) can use 100% of CPU if nothing else needs it.<p>This "burstable" nature can be good or bad. It's good in the sense that most users will probably get more CPU than their shares guarantee most of the time. It's bad because users can start expecting more than their shares guarantee and get a nasty surprise when resources come under contention.<p>It's time to drop some analogies.<p>cgroups are very much like a highway with an HOV lane (or more): anyone can go very fast when there's no contention. But, during rush hour, lanes get distributed as "shares" of the road to the HOV and non-HOV groups. Neither the HOV nor the non-HOV drivers get starved for road access (though responsiveness may not be equivalent, by design).<p>Traditional "nice" is like emergency vehicle traffic. An ambulance every now and then works fine as "-20 nice" traffic. But, if you filled the road with ambulances, it would starve normal traffic of roadway access.<p>RCTL is sort of like a person riding reserved right-of-way public transit. From the time the person hops on the train at point A to when they get off at point B, it will be the same duration any time of day. They don't get to go faster during low-traffic times, but they also don't have to worry about a significantly worse experience during rush hour.<p>Capsicum seems focused on intra-application isolation; I'm not sure how to compare it to other OS-level containers.</p>
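The shares arithmetic can be made explicit (my own illustration, not kernel code): under contention, a group's guaranteed fraction is its shares divided by the sum of all competing groups' shares; without contention, any group may burst to the whole resource.

```python
def entitlement(shares: dict, group: str) -> float:
    """Guaranteed fraction of a contended resource: the group's shares
    over the sum of all competing groups' shares."""
    return shares[group] / sum(shares.values())

# The two-group CPU example from above.
cpu_shares = {"A": 10, "B": 90}
assert entitlement(cpu_shares, "A") == 0.10  # A is guaranteed 10%
assert entitlement(cpu_shares, "B") == 0.90  # B gets the remaining 90%
# Without contention (B idle), A may still burst to 100% of the CPU.
```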
]]></description><pubDate>Tue, 18 Jun 2013 22:29:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=5902471</link><dc:creator>davidstrauss</dc:creator><comments>https://news.ycombinator.com/item?id=5902471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=5902471</guid></item></channel></rss>