<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: paulfurtado</title><link>https://news.ycombinator.com/user?id=paulfurtado</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 11 Apr 2026 11:24:41 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=paulfurtado" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by paulfurtado in "AWS Adds support for nested virtualization"]]></title><description><![CDATA[
<p>It is great for isolation. There are so many VM-based containerization solutions at this point, like Kata Containers, gVisor, and Firecracker. With Kata, your Kubernetes pods run in isolated VMs. It also opens the door to live migration of apps between EC2 instances, making some kinds of maintenance easier when you have persistent workloads. Even setting security aside, there are so many ways a workload can break a machine badly enough that you need to reboot or replace it (like detaching an EBS volume with a mounted XFS filesystem at the wrong moment).<p>The place I've probably wanted it the most, though, is CI/CD systems: it's always been annoying to build and test system images in EC2 in a generic way.<p>It also allows for running other third-party appliances unmodified in EC2.<p>But almost every other execution environment offers this: GCP, VMware, KVM, etc., so it's frustrating that EC2 has only offered it on bare metal instance types. When EC2 was using Xen 10+ years ago, that made sense, but they've been on KVM since the inception of Nitro.</p>
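<p>Tangentially, on any Linux/KVM host you can check whether nested virtualization is switched on via the kvm_intel/kvm_amd module parameters. A minimal sketch, with helper names of my own invention:

```python
from pathlib import Path


def parse_nested_param(value: str) -> bool:
    """The kernel reports the 'nested' module parameter as Y/N or 1/0."""
    return value.strip() in ("Y", "1")


def nested_virt_enabled() -> bool:
    """True if a loaded KVM module reports nested virtualization enabled."""
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            return parse_nested_param(param.read_text())
    return False  # no KVM module loaded at all
```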
]]></description><pubDate>Fri, 13 Feb 2026 01:19:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46997734</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=46997734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46997734</guid></item><item><title><![CDATA[New comment by paulfurtado in "Xiaomi Home Integration for Home Assistant"]]></title><description><![CDATA[
<p>The benefit of Docker for Home Assistant is the packaging, rather than isolation. You can always run the container with host networking and privileged mode so that it can access everything it needs, just as if it were running directly on the host.</p>
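<p>For example, a minimal docker-compose sketch along those lines (the volume path is a placeholder; the image is Home Assistant's published container):

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    # host networking lets it do mDNS/SSDP device discovery on the LAN
    network_mode: host
    # privileged mode exposes host devices (e.g. USB Zigbee/Z-Wave sticks)
    privileged: true
    volumes:
      - ./config:/config
    restart: unless-stopped
```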
]]></description><pubDate>Tue, 17 Dec 2024 00:06:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=42436938</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=42436938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42436938</guid></item><item><title><![CDATA[New comment by paulfurtado in "Deploying fiber in the home"]]></title><description><![CDATA[
<p>You don't need a fiber splicer for this. You can order a pre-terminated fiber cable online.<p>On fs.com you can order fiber in custom lengths. An armored pre-terminated 350ft OS2 duplex cable costs $128: <a href="https://www.fs.com/products/20720.html" rel="nofollow">https://www.fs.com/products/20720.html</a>
Non-armored would be as cheap as $40: <a href="https://www.fs.com/products/50147.html?attribute=58053&id=1787087" rel="nofollow">https://www.fs.com/products/50147.html?attribute=58053&id=17...</a><p>If you don't have a conduit, you can buy direct-burial cable. Two strands at 350 feet would be $590: <a href="https://www.lanshack.com/2-Strand-CustomLine-Corning-ALTOS-Outdoor-Armored-Direct-Burial-OSP-DB-Singlemode-Assembly-by-QuickTreX-P8013.aspx" rel="nofollow">https://www.lanshack.com/2-Strand-CustomLine-Corning-ALTOS-O...</a>
6 strands at 350 feet would be $687: <a href="https://www.lanshack.com/6-Strand-CustomLine-Corning-ALTOS-Outdoor-Armored-Direct-Burial-OSP-DB-Singlemode-Assembly-by-QuickTreX-P8015.aspx" rel="nofollow">https://www.lanshack.com/6-Strand-CustomLine-Corning-ALTOS-O...</a><p>If you have some extra length, just coil it somewhere in the wall and don't bother splicing or re-terminating it. You can also use keystone jacks or couplers at both ends so you have flexibility later without re-running it through the conduit.</p>
]]></description><pubDate>Mon, 04 Mar 2024 21:01:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=39596043</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=39596043</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39596043</guid></item><item><title><![CDATA[New comment by paulfurtado in "Deploying fiber in the home"]]></title><description><![CDATA[
<p>I used the VLAN "trick" for connecting my cable modem to my router for a few years in an old multi-floor apartment with pre-wired ethernet, but it's not ideal because the router is then not able to detect the link state of the modem. For example, if you unplug a cable modem and plug it back in, normally the link would go down on the router and then come back up, and when the link returns the router will attempt to fetch a new DHCP lease.<p>If you have a static IP, it should be fine, but this became an annoyance the couple of times the IP changed when I was living there.</p>
]]></description><pubDate>Mon, 04 Mar 2024 18:02:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=39593681</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=39593681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39593681</guid></item><item><title><![CDATA[New comment by paulfurtado in "Reaching the Unix philosophy's logical extreme with WebAssembly"]]></title><description><![CDATA[
<p>This is of course not the purpose of your post, but since you're interested in this topic, I wanted to mention that you can now create memory-backed files on Linux using the memfd_create syscall without touching any filesystem (or the unlink trick), and you can also execute them without the /proc/self/fd trick by using the execveat syscall. In glibc, there is fexecve, which uses execveat or falls back to the /proc trick on older kernels.</p>
]]></description><pubDate>Tue, 29 Aug 2023 03:04:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=37303080</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=37303080</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37303080</guid></item><item><title><![CDATA[New comment by paulfurtado in "Unbounded memory usage by Linux TCP for receive buffers, and how we fixed it"]]></title><description><![CDATA[
<p>Do you have any references to specific bugs here? We depend pretty heavily on containers, and I'd love to look into these to see if we are impacted and whether we should carry these patches.</p>
]]></description><pubDate>Tue, 30 May 2023 02:57:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=36120669</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=36120669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36120669</guid></item><item><title><![CDATA[New comment by paulfurtado in "I replaced grub with systemd-boot"]]></title><description><![CDATA[
<p>FWIW, on AWS, all Nitro instance types can boot as UEFI if the AMI is set to use UEFI, and all arm64 instances support only UEFI. So if you stick to modern instance types, you can happily use UEFI in AWS.<p>GCP supports UEFI as well, and Azure Gen2 instances use it too.</p>
]]></description><pubDate>Thu, 16 Feb 2023 21:38:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=34826727</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=34826727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34826727</guid></item><item><title><![CDATA[New comment by paulfurtado in "Can I exec a new process without an executable file? (2015)"]]></title><description><![CDATA[
<p>memfd is a tmpfs file descriptor, but it does not use any mounted tmpfs filesystem. It works no matter what filesystems are mounted or what access you have.<p>It's truly great for situations where APIs refuse to take anything other than files and you don't want to worry about cleanup. Ex: loading certs from memory into a Python OpenSSL context.</p>
]]></description><pubDate>Fri, 04 Nov 2022 23:18:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=33475449</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=33475449</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33475449</guid></item><item><title><![CDATA[New comment by paulfurtado in "Show HN: SadServers – Test your Linux troubleshooting skills"]]></title><description><![CDATA[
<p>If the goal of the test is to debug a sad Linux server, aren't containers going to severely limit the ways the server can be sad?</p>
]]></description><pubDate>Wed, 26 Oct 2022 23:21:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=33350959</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=33350959</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33350959</guid></item><item><title><![CDATA[New comment by paulfurtado in "Show HN: SadServers – Test your Linux troubleshooting skills"]]></title><description><![CDATA[
<p>User namespaces have resulted in multiple new container breakout CVEs in the last year. Some guides actually recommend disabling user namespaces because they are still somewhat new and perilous.</p>
]]></description><pubDate>Wed, 26 Oct 2022 23:19:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=33350938</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=33350938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33350938</guid></item><item><title><![CDATA[New comment by paulfurtado in "Kubernetes Hardening Guidance [pdf]"]]></title><description><![CDATA[
<p>If you use CRI-O as the runtime along with an OpenShift container registry, it will actually verify signatures at the runtime layer. In addition to CRI-O, podman and anything else based on containers/image support this too.<p>Really, that just means a registry that sends back a header indicating it supports signatures and serves up the right signature endpoints. It's shocking this isn't more common.<p>But if you just want to check signatures at the cluster's point of entry, you can use an admission controller to block pods from being created with unsigned images.</p>
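<p>For reference, the containers/image verification shared by CRI-O and podman is configured through /etc/containers/policy.json. A minimal sketch (registry name and key path are placeholders) that rejects everything except images from one registry signed by a given GPG key:

```json
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "registry.example.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/containers/release-key.gpg"
        }
      ]
    }
  }
}
```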
]]></description><pubDate>Wed, 05 Oct 2022 23:56:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=33102722</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=33102722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33102722</guid></item><item><title><![CDATA[New comment by paulfurtado in "When Every Ketchup but One Went Extinct"]]></title><description><![CDATA[
<p>Are you from the US or another country? Heinz uses different ketchup recipes in different countries, and I personally think there is a huge difference. In the US, it is by far the best ketchup, but I hate the Heinz in Canada.<p><a href="https://www.livingabroadincanada.com/2009/05/13/why-does-ketchup-taste-different-in-canada/" rel="nofollow">https://www.livingabroadincanada.com/2009/05/13/why-does-ket...</a></p>
]]></description><pubDate>Fri, 09 Sep 2022 07:19:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=32776300</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32776300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32776300</guid></item><item><title><![CDATA[New comment by paulfurtado in "Why Xen Wasn't Hit by RETBleed on Intel CPUs"]]></title><description><![CDATA[
<p>Since 2017, all new AWS instance types have been "Nitro", which is based on KVM: <a href="https://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtualization-2017.html" rel="nofollow">https://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtual...</a><p>AWS does continue to support older instance types, but only legacy workloads are using them. They now even run the Xen-based legacy instance types on Nitro: <a href="https://perspectives.mvdirona.com/2021/11/xen-on-nitro-aws-nitro-for-legacy-instances/" rel="nofollow">https://perspectives.mvdirona.com/2021/11/xen-on-nitro-aws-n...</a></p>
]]></description><pubDate>Sat, 27 Aug 2022 05:20:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=32615972</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32615972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32615972</guid></item><item><title><![CDATA[New comment by paulfurtado in "How Discord supercharges network disks for extreme low latency"]]></title><description><![CDATA[
<p>When standard filesystems like ext4 and XFS hit enough IO errors, they take the filesystem offline. I find that this happens pretty reliably in AWS at least, and I can't imagine the filesystem continuing to do very much when 100% of the underlying data has disappeared.<p>That said, from further reading of the GCP docs, it does sound like if they detect a disk failure they will reboot the VM as part of the not-so-live migration.</p>
]]></description><pubDate>Tue, 16 Aug 2022 13:49:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=32482686</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32482686</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32482686</guid></item><item><title><![CDATA[New comment by paulfurtado in "How Discord supercharges network disks for extreme low latency"]]></title><description><![CDATA[
<p>Yes, under a graceful live migration with no hardware failure, the data is seamlessly moved to a new machine. The problem of moving local data is ultimately no different than live migrating the actual RAM in the machine. The performance does degrade briefly during the migration, but typically this is a very short time window.<p>You can read more about GCP live migrations here: <a href="https://cloud.google.com/compute/docs/instances/live-migration" rel="nofollow">https://cloud.google.com/compute/docs/instances/live-migrati...</a></p>
]]></description><pubDate>Tue, 16 Aug 2022 05:05:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=32479395</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32479395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32479395</guid></item><item><title><![CDATA[New comment by paulfurtado in "Scaling Kubernetes to Thousands of CRDs"]]></title><description><![CDATA[
<p>I can actually say that not supporting this really does hinder Crossplane adoption. At work we operate large shared Kubernetes clusters. A team attempted to use Crossplane, and we immediately had to remove it from our clusters because it made kubectl and a variety of other tools completely unusable due to all the rate limiting and how frequently it refreshes cached discovery data. They wanted to use Crossplane for maybe 10-20 of its supported object types. Instead, they had to run Crossplane inside a vcluster, because there is no option to filter the number of CRDs it creates.<p>So while I can get behind this sentiment philosophically, until something changes upstream in Kubernetes, this makes it really difficult to use Crossplane in a cluster used for anything else, and it probably makes sense to offer a workaround until then. Also, in practice, any security-conscious users running Crossplane in production are probably going to give it AWS credentials scoped to only the resources they want to allow it to manage, so even if you do install all of the CRDs in the cluster, 90% of them won't work with those scoped AWS credentials anyway.</p>
]]></description><pubDate>Tue, 16 Aug 2022 04:49:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=32479303</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32479303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32479303</guid></item><item><title><![CDATA[New comment by paulfurtado in "My network home setup v3.0"]]></title><description><![CDATA[
<p>It's not exactly normal, but it's probably a common enough experience in the US with 120V circuits and 15A breakers, especially in an old building where imperfect wiring causes excess resistance.<p>Many devices also operate less efficiently in high heat. If the AC unit is on a circuit with other devices, it is possible that the inrush current when the AC unit's compressor starts trips the breaker. One might even be able to get away with two AC units on a 15A breaker as long as both compressors never start at the exact same moment, but see a trip when they kick in together.</p>
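<p>Rough illustrative numbers (all assumed, not measured): a 15A breaker on a 120V circuit, and small window units that run at about 6A but briefly draw several times that at compressor start:

```python
breaker_amps = 15
running_amps = 6.0       # assumed steady-state draw of one small AC unit
inrush_multiplier = 4.0  # compressor inrush is commonly quoted at ~3-6x

both_running = 2 * running_amps                       # 12.0 A
both_starting = 2 * running_amps * inrush_multiplier  # 48.0 A

# Two units running steadily stay under the breaker rating...
assert both_running < breaker_amps
# ...but simultaneous compressor starts blow well past it. Breakers
# tolerate a brief inrush, which is why whether it trips ends up
# depending on the compressors' relative timing.
assert both_starting > breaker_amps
```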
]]></description><pubDate>Tue, 16 Aug 2022 04:38:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=32479250</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32479250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32479250</guid></item><item><title><![CDATA[New comment by paulfurtado in "How Discord supercharges network disks for extreme low latency"]]></title><description><![CDATA[
<p>The disks are physically attached to the host. The VM running on that host moves from one host to another. GCP live-migrates every single VM running on GCP roughly once per week, so live migration is definitely seamless. Standard OSS hypervisors support live migration.<p>When hardware fails, the instance is migrated to another machine and behaves like the power cord was ripped out. It's possible they go down this path for failed disks too, but it's feasible that it is implemented as the disk magically starting to work again but being empty.<p>You can read more about GCP live migrations here: <a href="https://cloud.google.com/compute/docs/instances/live-migration" rel="nofollow">https://cloud.google.com/compute/docs/instances/live-migrati...</a></p>
]]></description><pubDate>Tue, 16 Aug 2022 04:11:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=32479112</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32479112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32479112</guid></item><item><title><![CDATA[New comment by paulfurtado in "How Discord supercharges network disks for extreme low latency"]]></title><description><![CDATA[
<p>When a local disk fails in an instance, you end up with an empty disk upon live migration. The disk won't disappear, but you'll get IO errors, and then the IO errors will go away once the migration completes but your disk will be empty.</p>
]]></description><pubDate>Tue, 16 Aug 2022 04:06:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=32479089</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32479089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32479089</guid></item><item><title><![CDATA[New comment by paulfurtado in "How Discord supercharges network disks for extreme low latency"]]></title><description><![CDATA[
<p>Both GCP and AWS provide super fast and cheap, but fallible local storage. If running an HA database, the solution is to mitigate disk failures by clustering at the database level. I've never operated scylladb before, but it claims to support high-availability and various consistency levels so the normal way to deal with this problem is to use 3 servers with fast local storage and replace an entire server when the disk fails.</p>
]]></description><pubDate>Tue, 16 Aug 2022 04:01:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=32479062</link><dc:creator>paulfurtado</dc:creator><comments>https://news.ycombinator.com/item?id=32479062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32479062</guid></item></channel></rss>