<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: nyrikki</title><link>https://news.ycombinator.com/user?id=nyrikki</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 15:40:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=nyrikki" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by nyrikki in "Bambu Lab is abusing the open source social contract"]]></title><description><![CDATA[
<p>Orca Slicer is a fork of Bambu Studio, which is a fork of PrusaSlicer, itself a descendant of Slic3r.<p>Orca Slicer was forked to improve usability and features, not to get around any cloud-printing requirements; Bambu added those later and removed the ability to print locally.<p>Orca has to impersonate Bambu's own software just to transfer a G-code file, itself an open standard, over the LAN.<p>Bambu restricting LAN printing is the issue.</p>
]]></description><pubDate>Tue, 12 May 2026 15:50:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=48110034</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=48110034</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48110034</guid></item><item><title><![CDATA[New comment by nyrikki in "You gave me a u32. I gave you root. (io_uring ZCRX freelist LPE)"]]></title><description><![CDATA[
<p>1) The various projects refused even simple requests, like allowing the admin to disable the --privileged flag, back in the rootful days.
2) The choice to break out the CRI with zero authorization or mutation at the CRI level, while understandable given the containerd team's needs, exposed every other runtime to an unprotected alternative communication path.
3) The OCI group's refusal to provide guidance to LSM maintainers on minimal configurations, while also handing the responsibility for seccomp profiles to end users, means only actively attacked vectors are protected, and it becomes impossible for normal users to operate safely.
4) Under the UNIX model, it is the caller of clone/fork/unshare that must drop privileges.
5) This model was set in concrete by the OCI standards and now suffers from the frozen-caveman pattern.<p>The capable()[0] kernel check operates as one would expect for granting superior capabilities, and while the work to expand the isolation is something I am sure you are familiar with, you probably also realize that the number of entries in a default user's bounding set expanded just to support user namespaces.<p>But to be clear, while the choices that docker/OCI made are understandable from a locally greedy perspective, they complicate the entire user space.<p>K8s mutating admission controllers are a symptom of those choices.<p>Had the CRI contained a bounding set, enforced at a system level, especially with guidance and tools for users to start from a minimal set they could expand easily, we would be in a better spot.<p>But as other projects cannot provide meaningful protections that cannot simply be bypassed by calling privileged CRIs, it is also a barrier to convincing them to do the same.<p>Really there is a larger problem that OCI could lead on, but they are the ‘killer app’ and refuse to do so.<p>The bounding set for user capabilities is driven by containers, and while namespaces are not and never have been a security feature, this blocks their ability to establish a strong security posture.<p>To be clear, expecting every end user to write minimal seccomp profiles is unrealistic, especially when docker prevents devs from accessing the local machine to discover what is happening.  I think podman is the only runtime that allows that by default.<p>Basically, while simplifying moby/containerd/CRI is an understandable choice, the refusal to address the costs of that local optimum has fallout.<p>[0] <a href="https://elixir.bootlin.com/linux/v7.0.5/source/kernel/capability.c#L414" rel="nofollow">https://elixir.bootlin.com/linux/v7.0.5/source/kernel/capabi...</a></p>
]]></description><pubDate>Tue, 12 May 2026 14:22:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=48108742</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=48108742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48108742</guid></item><item><title><![CDATA[New comment by nyrikki in "Bun's experimental Rust rewrite hits 99.8% test compatibility on Linux x64 glibc"]]></title><description><![CDATA[
<p>Zig is a middle ground.  It solves some of the common foot-guns in C, without the costs of the affine substructural typing that gives Rust its superpowers.<p>I am of the opinion that it is horses for courses, not a universally better proposition.<p>Because my needs don’t fit in with Rust’s decisions very well, I will use Zig for personal projects when needed. I just need linked lists, graphs, etc…<p>While hopefully someone can provide a more comprehensive explanation, here are the two huge wins for my use case.<p>1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.<p>2) defer[0] allows you to colocate the freeing of resources with the code that acquires them.<p>That at least ‘feels’ safer to me than the pile of ‘unsafe’ Rust my very specific use case would require.<p>I was working on some eBPF code in C and really missed Zig.<p>For me it fits the Pareto principle, but Zig is also just a sometimes food for me, so take that for what it is worth.<p>[0] <a href="https://zig.guide/language-basics/defer/" rel="nofollow">https://zig.guide/language-basics/defer/</a></p>
]]></description><pubDate>Sat, 09 May 2026 20:46:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=48078089</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=48078089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48078089</guid></item><item><title><![CDATA[New comment by nyrikki in "You gave me a u32. I gave you root. (io_uring ZCRX freelist LPE)"]]></title><description><![CDATA[
<p>It is a minimal improvement, due to the introduction of user namespaces and the fallout from local team convenience for Docker and thus OCI.<p>It is very important to realize that any capability is a slice of superuser privileges; there are no implicit protections, only explicit additional constraints that restrict it relative to root.<p>Look at the bounding set for a normal user on a fresh install of a RHEL/Debian-based system:<p><pre><code>     $ grep ^Cap /proc/$$/status
     CapInh: 0000000000000000
     CapPrm: 0000000000000000
     CapEff: 0000000000000000
     CapBnd: 000001ffffffffff
</code></pre>
Note how trivial it is to gain all of those capabilities:<p><pre><code>    $ podman unshare
    # grep ^Cap /proc/$$/status
    CapInh: 0000000000000000
    CapPrm: 000001ffffffffff
    CapEff: 000001ffffffffff
    CapBnd: 000001ffffffffff
    CapAmb: 0000000000000000
    # capsh --decode=000001ffffffffff
    0x000001ffffffffff=cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read,cap_perfmon,cap_bpf,cap_checkpoint_restore
</code></pre>
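For anyone following along, here is a rough Python sketch of what `capsh --decode` is doing above (this is not capsh itself): the mask is just a bitfield, one bit per capability, using the bit numbering from linux/capability.h.

```python
# Sketch of `capsh --decode`: map each set bit in a Cap* hex mask from
# /proc/<pid>/status to its capability name, in linux/capability.h order.
CAP_NAMES = [
    "chown", "dac_override", "dac_read_search", "fowner", "fsetid",
    "kill", "setgid", "setuid", "setpcap", "linux_immutable",
    "net_bind_service", "net_broadcast", "net_admin", "net_raw",
    "ipc_lock", "ipc_owner", "sys_module", "sys_rawio", "sys_chroot",
    "sys_ptrace", "sys_pacct", "sys_admin", "sys_boot", "sys_nice",
    "sys_resource", "sys_time", "sys_tty_config", "mknod", "lease",
    "audit_write", "audit_control", "setfcap", "mac_override",
    "mac_admin", "syslog", "wake_alarm", "block_suspend", "audit_read",
    "perfmon", "bpf", "checkpoint_restore",
]

def decode_caps(mask_hex):
    """Return the cap_* names whose bits are set in the hex mask."""
    mask = int(mask_hex, 16)
    return ["cap_" + name for bit, name in enumerate(CAP_NAMES)
            if mask & (1 << bit)]

print(len(decode_caps("000001ffffffffff")))  # 41 -- the full bounding set above
print(decode_caps("0000000000000000"))       # [] -- the unprivileged Inh/Prm/Eff sets
```

Note how 0x1ffffffffff is simply bits 0 through 40 all set, i.e. every capability the kernel currently defines.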
The capabilities(7)[0] man page will help you with all of those.<p>But capabilities are just a per-thread segmentation that grants superuser or root rights in vertical slices.<p>True, if a mechanism <i>chooses</i> to do additional tests based on credentials(7)[1], you can run with those elevated privileges in a lower bound, but that requires explicit coding.<p>Add in that LSMs suffer from both a lack of resources and upstream teams that won't provide guidance or are challenging to work with, and there are literally a hundred commands to either abuse or just LD_PRELOAD to get an unrestricted userns, allowing you to get around any basic controls on clone()/unshare() that may be implemented.<p><pre><code>      $ grep -ir "userns," /etc/apparmor.d/ | wc -l
      100

</code></pre>
With AppArmor, every single browser (firefox, chrome, msedge, etc...) as well as busybox, slack, steam, visual studio, ... all have unrestricted user namespaces and the ability to gain the FULL set of capabilities in the bounding set.<p>If you run `busybox` on a debian system, note how it has nsenter and unshare, so you can't mask those, and yet busybox itself is unconstrained with elevated privileges.<p>The TL;DR point being: don't assume that any capable() check is in itself a gate, as there are so many ways even for the user nobody to gain those capabilities.<p>[0] <a href="https://man7.org/linux/man-pages/man7/capabilities.7.html" rel="nofollow">https://man7.org/linux/man-pages/man7/capabilities.7.html</a>
[1] <a href="https://man7.org/linux/man-pages/man7/credentials.7.html" rel="nofollow">https://man7.org/linux/man-pages/man7/credentials.7.html</a></p>
]]></description><pubDate>Sat, 09 May 2026 17:54:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=48076826</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=48076826</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48076826</guid></item><item><title><![CDATA[New comment by nyrikki in "You gave me a u32. I gave you root. (io_uring ZCRX freelist LPE)"]]></title><description><![CDATA[
<p>Namespaces _may_ result in limits on what you can do with a capability, but a capability is global in scope.<p>If a kernel feature is gated on cap_sys_admin only, it doesn't matter at all what namespace you are in.  Namespace support or additional constraints are not implicit and have to be added for each use.<p>People misunderstanding this is partially why we have this latest crop of vulnerabilities.</p>
]]></description><pubDate>Sat, 09 May 2026 16:44:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=48076285</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=48076285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48076285</guid></item><item><title><![CDATA[New comment by nyrikki in "Ask HN: We just had an actual UUID v4 collision..."]]></title><description><![CDATA[
<p>The original from SGI back in the mid 90's, before CPUs had RDRAND instructions etc..., was an actually practical solution.<p>At the time I was at the Internet company that originally got online gaming banned in the US, and we were looking at CCDs and cesium emitters that required a license, etc...<p>While I am not sure, it seems Cloudflare basically implemented one after SGI's[0] patent expired.<p>The patent and its licensing cost were a major blocker for us doing it, and the startup closed before we found a real solution.  The best PRNGs like Blum Blum Shub were way too slow at the time, though things did improve quickly after that.<p>[0] <a href="https://patents.google.com/patent/US5732138A/en" rel="nofollow">https://patents.google.com/patent/US5732138A/en</a></p>
]]></description><pubDate>Sat, 09 May 2026 00:42:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=48070567</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=48070567</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48070567</guid></item><item><title><![CDATA[New comment by nyrikki in "Claude.ai unavailable and elevated errors on the API"]]></title><description><![CDATA[
<p>We are probably closer than you think, and SMBs have zero leverage.<p>The point is not avoiding vendors or duplicating everything. The point is designing systems so the software/platform never becomes the point of control.<p>A self-hosted, minimal sandbox instance using simple containers and tools is one way to help avoid that lock-in trap.<p>It is not zero cost, but it is strategically important to make sure that vendors don't shape your enterprise, but support it.<p>IMHO systems should be designed to be as replaceable as possible, without adding the extreme complexity that a true 'multi-cloud' solution would bring, as an example.<p>The point is that the vendor and/or platform can be replaced anytime the business changes its goals, the market shifts, strategies change...<p>Keeping the door open and trying to minimize the migration cost is my point, not boiling the ocean.<p>Repurposing a decommissioned server or desktop with a GPU (3090 or RTX PRO 6000 Blackwell, not DC class) with linux/podman and llama.cpp will help a team understand without much cost, but that is a claim made in ignorance of your situation.<p>We both very much agree that upfront multi-vendor implementations are a very bad idea.  They suffer from the same problem IMHO: trying to plan past the planning horizon with aspects you have no control over.<p>Probably too much nuance to discuss here, but thanks for responding.</p>
]]></description><pubDate>Wed, 29 Apr 2026 20:12:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47953902</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47953902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47953902</guid></item><item><title><![CDATA[New comment by nyrikki in "Patch applies fake diffs from commit messages"]]></title><description><![CDATA[
<p>While patch[0] has problems, the issue here is not that it is undocumented.<p>Git recently added this doc on round-tripping[1], and the problem is with git:<p><pre><code>     Any line that is of the form:
     * three-dashes and end-of-line, or
     * a line that begins with "diff -", or
     * a line that begins with "Index: "

     is taken as the beginning of a patch, and the commit log message is terminated before the first occurrence of such a line.

</code></pre>
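That termination rule is simple enough to sketch; here is a hypothetical helper (not git's actual code) showing how a commit log message gets truncated by it:

```python
# Sketch of the git rule quoted above: the commit log message is
# terminated before the first line that looks like the start of a patch.
def split_message(text):
    """Return the portion of a commit message kept before any patch marker."""
    kept = []
    for line in text.splitlines():
        # The three forms from git's format-patch-end-of-commit-message doc:
        if line == "---" or line.startswith("diff -") or line.startswith("Index: "):
            break  # everything from here on is treated as patch, not log
        kept.append(line)
    return "\n".join(kept)

msg = "Fix parser\n\ndiff -u shows the old behavior\nmore notes"
print(split_message(msg))  # the "diff -" line and everything after it is dropped
```

Which is exactly why a commit message that merely *mentions* `diff -u` can smuggle a fake diff past a round trip.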
This isn't even one of the complicated forms with RCS, ClearCase, Perforce, or SCCS support; patch is just doing what the pre-POSIX spec says.<p>The argument is whether git should do input sanitization, etc...<p>But `patch -p1` is doing exactly what was documented, all the way back to Larry Wall's original usenet post of the program.<p>[0] <a href="https://pubs.opengroup.org/onlinepubs/9799919799/utilities/patch.html" rel="nofollow">https://pubs.opengroup.org/onlinepubs/9799919799/utilities/p...</a>
[1] <a href="https://github.com/git/git/blob/94f057755b7941b321fd11fec1b2e3ca5313a4e0/Documentation/format-patch-end-of-commit-message.adoc" rel="nofollow">https://github.com/git/git/blob/94f057755b7941b321fd11fec1b2...</a></p>
]]></description><pubDate>Tue, 28 Apr 2026 22:24:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47941683</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47941683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47941683</guid></item><item><title><![CDATA[New comment by nyrikki in "Claude.ai unavailable and elevated errors on the API"]]></title><description><![CDATA[
<p>To start, I want to be clear that I am trying to understand, not criticize, and mistakes are how institutional knowledge grows.<p>Your last paragraph hints at retention struggles, which complicates the issue.<p>But was vendor mitigation not part of the evaluation?  I get that most companies view governance and compliance as a pay-to-play issue, but there has always been an issue with rapidly changing areas and single-source suppliers.<p>I admit to having my own preferences and being almost completely ignorant about what your needs are, but I have seen the value in having a rabbit to pull out of the hat.<p>If employee retention doesn’t allow for the departure of individuals without complete loss of institutional knowledge, I guess my position wouldn’t hold.<p>But during the rise of cloud computing I introduced an OpenStack install in our sandbox, not because I thought we would stay on a private cloud, but because it allowed our team to pull back the covers and understand what our cloud vendor was doing.<p>It was an adoption accelerator that enabled us to choose an appropriate vendor and to avoid the long tail of implementation.<p>It was also valuable as a pivot when AMD killed SeaMicro on short notice, and the full cloud migration period was dramatically shortened.<p>I have a dozen other examples, but it is like stock options: volatility and uncertainty dramatically increase the value of keeping your options open.<p>We will have vendors fold, and a single-source-only story couples your org to the success of that vendor.<p>IMHO there is a huge difference between tying your success to an Oracle, who may be ‘safe’ if expensive for a captive customer, and doing the same in uncertain markets.<p>Would you be willing (or able) to share more?</p>
]]></description><pubDate>Tue, 28 Apr 2026 21:20:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47940956</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47940956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47940956</guid></item><item><title><![CDATA[New comment by nyrikki in "The world in which IPv6 was a good design (2017)"]]></title><description><![CDATA[
<p>You aren't running it during an external transitive failure like the one that happened on April 15th.<p>The problem isn't the happy path; the problem is when things fail, and that Linux in particular made it really hard to reliably disable.[0]<p>Once that hits someone's vagrant or ansible code, it tends to stick forever, because they don't see the value until they try to migrate, and then it causes a mess.<p>The last update on the original post link [1] explains this.  The IPv4 host being down, not having a response, it being the third Tuesday while Aquarius is rising into whatever, etc... can invoke it.  It causes pain, and is complex and convoluted to disable when you aren't using it, so people are afraid to re-enable it.<p>[0] <a href="https://wiki.archlinux.org/title/IPv6#Disable_IPv6" rel="nofollow">https://wiki.archlinux.org/title/IPv6#Disable_IPv6</a>
[1] <a href="https://tailscale.com/blog/two-internets-both-flakey" rel="nofollow">https://tailscale.com/blog/two-internets-both-flakey</a></p>
]]></description><pubDate>Sun, 19 Apr 2026 23:43:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47828755</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47828755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47828755</guid></item><item><title><![CDATA[New comment by nyrikki in "The world in which IPv6 was a good design (2017)"]]></title><description><![CDATA[
<p>It is hard to cover decades of politics in one post on here, but rather than the IAB being in an ivory tower, at least for the first 15 years, I think it was ruled by inertia in a world that was changing, and suffering a bit from the second-system syndrome of The Mythical Man-Month.<p>In the beginning it was an experiment and should have been ambitious; the IETF had just moved to CIDR, which bought almost a decade of time, and they should have aimed high.<p>It is just that when you significantly change a system, you need to show users how to accomplish the work they are doing with the old system, even if how they do that changes.  If you can't communicate a way to replace their old needs, or how that system is fitting new needs that you could never have predicted, you need to be flexible and demonstrate that ability.<p>If you look at the National Telecommunications and Information Administration [Docket No. 160810714-6714-01] comments:<p>Microsoft: <a href="https://www.ntia.gov/sites/default/files/publications/microsoft_10_4_0.pdf" rel="nofollow">https://www.ntia.gov/sites/default/files/publications/micros...</a>
ARIN: <a href="https://www.ntia.gov/sites/default/files/publications/arin_comments_10_14_0.pdf" rel="nofollow">https://www.ntia.gov/sites/default/files/publications/arin_c...</a><p>You will see that the address space argument is the only real one they make.  It isn't a coincidence that rfc7599 came about ~20 years later, when 160810714-6714-01 and federal requirements for IPv6 were being discussed.<p>If you look at the #nanog discussions between RFC 1883 (IPv6) being proposed (late 1996) and IPv4 exhaustion (early 2011), it wasn't just the IAB that was having philosophical discussions around this.<p>Both rfc3484 and rfc6724 suffered from the lack of executive sponsorship called out in the above public comments. And the following from rfc6724's intro is often ignored in favor of pure compliance:<p>> They do not override choices made by applications or upper-layer protocols, nor do they preclude the development of more advanced mechanisms for address selection.<p>There are many ways that could have played out differently, but I noticed Avery Pennarun's last update to that post pretty much says the same in different words.<p><a href="https://tailscale.com/blog/two-internets-both-flakey" rel="nofollow">https://tailscale.com/blog/two-internets-both-flakey</a><p>> IPv6 was created in a new environment of fear, scalability concerns, and Second System Effect. As we covered last time, its goal was to replace The Internet with a New Internet — one that wouldn’t make all the same mistakes. It would have fewer hacks. And we’d upgrade to it incrementally over a few years, just as we did when upgrading to newer versions of IP and TCP back in the old days</p>
]]></description><pubDate>Sun, 19 Apr 2026 23:23:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47828641</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47828641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47828641</guid></item><item><title><![CDATA[New comment by nyrikki in "The world in which IPv6 was a good design (2017)"]]></title><description><![CDATA[
<p>> AAAA records have lower priority than A records if you don't have a v6 address assigned on your system. (Link-locals don't count for this).<p>There is an expired 6man draft that explains some of the issues here.<p><a href="https://www.ietf.org/archive/id/draft-buraglio-6man-rfc6724-update-03.html" rel="nofollow">https://www.ietf.org/archive/id/draft-buraglio-6man-rfc6724-...</a><p>To be clear, I do go back and clean out the temporary fixes for dual-stack problems, but you wanted some more info, so here it is.<p><pre><code>     $ grep  'apt.systemd.daily' /var/log/syslog.1 |  grep '^2026-04-16T01:09' | wc -l
     86375

     $ grep  'apt.systemd.daily' /var/log/syslog.1 |  grep '^2026-04-16T01:09' | head -n 1
     2026-04-16T01:09:15.276295-06:00 MrBig apt.systemd.daily[45660]: /usr/bin/unattended-upgrade:2736: Warning: W:Tried to start delayed item http://us.archive.ubuntu.com/ubuntu questing-updates/main amd64 bpftool amd64 <snip>

     $ grep  'apt.systemd.daily' /var/log/syslog.1 |  grep '^2026-04-16T01:09' | head -n 1 | wc -c
     8116
</code></pre>
The IPv6 AAAA timeout was shown to be the problem; adding `Acquire::ForceIPv4 "true";` fixed it on several hosts.<p><pre><code>     $ getent ahosts us.archive.ubuntu.com
     91.189.91.81    STREAM us.archive.ubuntu.com
     91.189.91.81    DGRAM  
     91.189.91.81    RAW    
     91.189.91.82    STREAM 
     91.189.91.82    DGRAM  
     91.189.91.82    RAW    
     91.189.91.83    STREAM 
     91.189.91.83    DGRAM   
     91.189.91.83    RAW    
     2620:2d:4002:1::101 STREAM 
     2620:2d:4002:1::101 DGRAM  
     2620:2d:4002:1::101 RAW    
     2620:2d:4002:1::102 STREAM 
     2620:2d:4002:1::102 DGRAM  
     2620:2d:4002:1::102 RAW    
     2620:2d:4002:1::103 STREAM 
     2620:2d:4002:1::103 DGRAM  
     2620:2d:4002:1::103 RAW    
</code></pre>
There are no non-`fe80::` (link-local) IPv6 addresses on the host.<p><pre><code>     $ ip a | grep inet6
     inet6 ::1/128 scope host noprefixroute 
     inet6 fe80::786a:e338:3957:b331/64 scope link noprefixroute 
     inet6 fe80::a10c:eae9:9a49:c94d/64 scope link noprefixroute 
</code></pre>
So to be clear, I removed my temporary IPv4-only apt config, but there are a million places for this to be brittle, and you see people working around it with sysctl net.ipv6.conf.*, netplan, systemd-networkd, NetworkManager, etc..., plus individual clients, etc...<p>Note:<p><a href="https://datatracker.ietf.org/doc/html/rfc6724#section-2.1" rel="nofollow">https://datatracker.ietf.org/doc/html/rfc6724#section-2.1</a><p>And how "::/0" > "::ffff:0:0/96"<p>And the preceding text:<p>> If an implementation is not configurable or has not been configured, then it SHOULD operate according to the algorithms specified here in conjunction with the following default policy table:<p>One could argue that GUAs on a host without a non-link-local IPv6 address should just be ignored... and in a perfect world they would be.<p>But as covered in the first link in this post, this is not as easy or clear as expected, and people tend to err towards following rfc6724, which states just below the above reference:<p>> Another effect of the default policy table is to prefer communication using IPv6 addresses to communication using IPv4 addresses, if matching source addresses are available.<p>I am not an IPv6 hater... just observing that when you introduce a breaking change and add additional friction, it dramatically reduces adoption.<p>Many companies I have been at basically just implement enough to meet Federal Government requirements and often intentionally strip it out of the backend to avoid the brittleness it caused.<p>I am old enough to remember when I could just ask for an ASN and a portable class C, and how nice that was; in theory IPv6 should have allowed for that in some form... I am just frustrated with how it has devolved into an intractable 'wicked problem' when there was a path.<p>The fact that people don't acknowledge the pain for users, often due to situations beyond their control, is a symptom of that problem.  
Ubuntu should never have even requested an IPv6 AAAA on the above system, and yes, it only does so because of politics and RFC requirements.</p>
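To make the "::/0" > "::ffff:0:0/96" point concrete, here is a rough Python sketch of the RFC 6724 section 2.1 destination-precedence lookup (longest matching prefix wins; this is only the precedence column, not the full source/destination selection algorithm):

```python
import ipaddress

# (prefix, precedence) pairs from the RFC 6724 default policy table.
POLICY = [
    ("::1/128", 50), ("::/0", 40), ("::ffff:0:0/96", 35),
    ("2002::/16", 30), ("2001::/32", 5), ("fc00::/7", 3),
    ("::/96", 1), ("fec0::/10", 1), ("3ffe::/16", 1),
]

def precedence(addr):
    """RFC 6724 precedence of a destination; longest matching prefix wins."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:  # IPv4 destinations are compared as IPv4-mapped IPv6
        ip = ipaddress.ip_address("::ffff:" + addr)
    best = max((entry for entry in POLICY
                if ip in ipaddress.ip_network(entry[0])),
               key=lambda entry: ipaddress.ip_network(entry[0]).prefixlen)
    return best[1]

# The GUA answers from the getent output above beat the IPv4 ones:
print(precedence("2620:2d:4002:1::101"))  # 40 -- matches ::/0, preferred
print(precedence("91.189.91.81"))         # 35 -- matches ::ffff:0:0/96
```

So with default rules, a resolver that returns any AAAA record puts the IPv6 destinations first, regardless of whether the host can actually reach them.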
]]></description><pubDate>Sun, 19 Apr 2026 21:40:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47827875</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47827875</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47827875</guid></item><item><title><![CDATA[New comment by nyrikki in "The world in which IPv6 was a good design (2017)"]]></title><description><![CDATA[
<p>In fact, 30 years later, I just had to add an IPv6 block for Ubuntu’s apt mirrors this week, because the AAAA record query has higher priority and was timing out on my CI, killing build times.<p>That behavior is due to the same politics mentioned above.<p>A few more pragmatic decisions, or at least empathetic guidance, would have dramatically changed the acceptance of IPv6.</p>
]]></description><pubDate>Sun, 19 Apr 2026 19:09:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47826762</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47826762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47826762</guid></item><item><title><![CDATA[New comment by nyrikki in "The world in which IPv6 was a good design (2017)"]]></title><description><![CDATA[
<p>Everyone forgets that the Internet Architecture Board took a religious view on "Internet transparency and the end-to-end principle" which was counter to the realities of limited tooling and actual site maintainers' needs. [0]<p>There were many of us who, even when it was still IPng (IP Next Generation) in the mid 1990's, tried to get it working and spent a significant amount of effort to do so, only to be hit with unrealistic ideological ideals that blocked our ability to deploy it, especially with the limitations of the security tools back in the day.<p>Remember, when IPng started, even large regional ISPs like xmission had finger servers, many people used telnet, and slackware actually enabled telnet with no root password by default!!!  I used both to wall a coworker who was late to work because he was playing tw2000.<p>Back then we had really bad application firewalls like AltaVista, PIX was just being invented, and the large surveillance-capitalism market simply didn't exist yet.<p>The IAB hampered deployment by choosing hills to die on without providing real alternatives, and didn't relent until IPv4 exhaustion became a problem, by which point they had lost their battle because everyone was forced into CGNAT etc... because of the IETF, not in spite of it.<p>The IAB and IETF were living in an MIT ITS mindset when the real world was making that model hazardous and impossible.  End-to-end transparency may be 'pretty' to some people, but it wasn't what customers needed.  
When they wrote the RFCs to make other services simply fail and time out if you enabled IPv6 locally but didn't have ISP support, they burned a lot of good will, and everyone just started ripping out the IPv6 stack and running IPv4 only.<p>IMHO, like almost all tech failures, it didn't fail on technical merits; it failed on ignorance of the users' needs and a refusal to consider them, insisting that adopters just had to drink their particular flavor of Kool-Aid or stick to IPv4, and until forced, most people chose the latter.<p>[0] <a href="https://www.rfc-editor.org/rfc/rfc5902.txt" rel="nofollow">https://www.rfc-editor.org/rfc/rfc5902.txt</a></p>
]]></description><pubDate>Sun, 19 Apr 2026 18:42:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47826556</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47826556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47826556</guid></item><item><title><![CDATA[New comment by nyrikki in "Healthchecks.io now uses self-hosted object storage"]]></title><description><![CDATA[
<p>I can't comment directly on LXC, but LXC is very different from runc/crun/your-CRI here; not better or worse, just different.<p>With podman, unfortunately we don't have the k8s Container Storage Interface (CSI), so you have to work with what you have.<p>When I said:<p>> it is often much safer to use mount NFS internally<p>what is more correct is having the container runtime or container manager mount them, not the user inside the container.<p>But as you are trying to run unprivileged, or at least with minimal privileges, which is all we can do with namespaces, you are cutting across the grain.<p>I do use podman pods and containers, mostly for the ease of development, but on more traditional long-lived hosts.<p>I have a very real need to separate UIDs between co-hosted products, but don't need to actually run a VM for these specific use cases.<p>So I have particular rootful tasks that have to be done as root in ansible:<p>1) Install OS packages
2) Create service admin and daemon user
3) Assign subuid/subgids ranges to those user security domains as needed
4) For specific services, add NFS data directories to /etc/fstab with the 'user' and 'noauto' flags<p>In podman I would then create:<p><pre><code>     podman volume create --driver local --opt type=nfs --opt device=192.168.1.84:/path/to/share --opt o=addr=192.168.1.84,....

     podman run -d  --name nfs_test -v nfs-shared:/opt docker.io/library/debian:latest
</code></pre>
Which if you don't have the fstab entry will give you:<p><pre><code>     Error: mounting volume nfs-shared for container ...: mount.nfs: Operation not permitted for 192.168.1.84:/path/to/share on /home/user/.local/share/containers/storage/volumes/nfs-shared/_data
</code></pre>
That `_data` is one of the hints of the risk of <i>host</i> bind mounts: the risk is either exposing an inode that the host cares about, or issues across containers, etc...<p>While imperfect, this follows the named volume pattern, which keeps the mount under the container manager's control and doesn't expose the host mount inode to the container.<p>What does happen inside the container entrypoint is validating that the expected UID is reachable, adding a user with the right UID offset, and switching to that user.<p>A misconfigured host bind mount, or leaking because you can't view who has access, are the most common problems. As containers run with elevated privileges <i>until you drop them</i>, they can get around those protections; even if they aren't elevating to root in a rootless situation, they can still access the data of any running container with just a few trivial mistakes or newly discovered vulnerabilities.<p><pre><code>    $ capsh --decode=00000000800405fb
    0x00000000800405fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap

</code></pre>
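The entrypoint check described above (validating that the expected UID is reachable before creating a user and dropping to it) can be sketched roughly like this; `uid_reachable` and `APP_UID` are hypothetical names for illustration, not a podman API:

```shell
# uid_reachable MAPFILE UID -- succeed if UID falls inside one of the
# namespace's uid_map ranges (fields: inside-start outside-start length).
uid_reachable() {
  awk -v u="$2" '{ if (u >= $1 && u < $1 + $3) ok = 1 } END { exit !ok }' "$1"
}

# A (hypothetical) entrypoint might then do something like:
#   uid_reachable /proc/self/uid_map "$APP_UID" || exit 1
#   useradd -u "$APP_UID" svc
#   exec setpriv --reuid="$APP_UID" --regid="$APP_UID" --init-groups "$@"
```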
While NFS is absolutely a whole new ball of wax with other issues, one nice thing is that NFS servers (at least the ones I know of) don't even support the concept of user namespaces and UID mapping, which makes it fragile and dangerous if you start mapping uid/gids in, but can be an advantage if you can simply isolate uid/gid ranges.<p>IMHO it will be horses for courses and depend on your risk appetite; all the options are least-worst and there simply will be no best option, especially with OCI.</p>
]]></description><pubDate>Sat, 18 Apr 2026 22:30:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47820085</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47820085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47820085</guid></item><item><title><![CDATA[New comment by nyrikki in "Healthchecks.io now uses self-hosted object storage"]]></title><description><![CDATA[
<p>Not really s3, and I haven't touched LXC in a long while, but this may help on the OCI side. I apologize if this is redundant to you.<p>Remember that UID mapping on namespaces is just a facade with an offset and a range, typically based on subuid[0] and subgid[1] today.<p>In the container, `cat /proc/self/uid_map`, or by looking at the pid from the host, `cat /proc/$PID/uid_map`, will tell you what those offsets are.<p><pre><code>     $ cat /proc/self/uid_map
         0       1000          1
         1     100000      65536
</code></pre>
Here you know that UID 0 in the container maps to UID 1000 on the host, with a length of 1.<p>The container UID offset of 1 maps to the host offset of 100000, for a length of 65536.<p>With subuid/subgid you can assign ranges to the user that is instantiating the container; in the following I have two users that launch containers.<p><pre><code>     $ cat /etc/subuid
     debian:100000:65536
     runner:165536:65536

     $ cat /etc/subgid
     debian:100000:65536
     runner:165536:65536
</code></pre>
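To make the mapping arithmetic above concrete, here is a small sketch that resolves a container UID to its host UID from a file in uid_map format (the `map_uid` helper is a hypothetical name; the kernel does this translation itself):

```shell
# map_uid MAPFILE CUID -- print the host UID that container UID CUID maps to,
# given a file in uid_map format (inside-start outside-start length per line).
map_uid() {
  awk -v u="$2" '
    u >= $1 && u < $1 + $3 { print $2 + (u - $1); found = 1; exit }
    END { if (!found) exit 1 }   # UID not reachable in this namespace
  ' "$1"
}

# With the example map above:
#   map_uid /proc/self/uid_map 0    -> 1000   (container root is host user 1000)
#   map_uid /proc/self/uid_map 1000 -> 100999 (outside start 100000 + 1000 - inside start 1)
```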
Assuming you pass in the host UID/GID, that is how you can configure a compatible user at the entrypoint.<p>But note that only highly <i>trusted</i> containers should ever really use host bind mounts; it is often much safer to mount NFS internally.<p>Host bind mounts of network filesystems, if that is what you are doing, are also fragile as far as data loss goes. I am an object store fan, but just wanted to give you the above info as it seems hard for people to find.<p>I would highly encourage you to look into the history of security problems with host bind mounts to see the whack-a-mole that is required with them, and whether that fits your risk appetite. But if you choose to use them, setting up dedicated uid/gid mappings and setting the external host to the expected <i>effective ID</i> of the container users is a better way than using ID-mapped mounts etc...<p>[0] <a href="https://man7.org/linux/man-pages/man5/subuid.5.html" rel="nofollow">https://man7.org/linux/man-pages/man5/subuid.5.html</a>
[1] <a href="https://man7.org/linux/man-pages/man5/subgid.5.html" rel="nofollow">https://man7.org/linux/man-pages/man5/subgid.5.html</a></p>
]]></description><pubDate>Fri, 17 Apr 2026 23:43:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47811811</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47811811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47811811</guid></item><item><title><![CDATA[New comment by nyrikki in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>Parallelism can be tricky and always has a cost, but don't discount the 3090, which is more expensive these days but still in that price bracket.<p>3090 llama.cpp (container in VM):<p><pre><code>    unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_XL  105 t/s
    unsloth/gemma-4-26B-A4B-it-GGUF:UD-Q4_K_XL  103 t/s
</code></pre>
Still slow compared to the<p><pre><code>    ggml-org/gpt-oss-20b-GGUF 206 t/s
</code></pre>
But on my 3x 1080 Ti + 1x TITAN V ghetto machine I learned that multi-GPU takes a lot of tuning no matter what. With the B70, where Vulkan has the CPU copy problem and SYCL doesn't have a sponsor or enough volunteers, it will probably take a bit of profiling on your part.<p>There are a lot of variables, but PCIe bus speed doesn't matter that much for inference; internal memory bandwidth does, and you won't ever match that over PCIe.<p>To be clear, multicard Vulkan and absolutely SYCL have a lot of optimizations that could happen, but the only time two GPUs are really faster for inference is when one doesn't have enough RAM to fit the entire model.<p>A 3090 has 936.2 GB/s of (low latency) internal bandwidth, while a PCIe 5.0 x16 link only has about 63 GB/s, may have to be copied through the CPU, has locks, atomic operations, etc...<p>For LLM inference the bottleneck is usually going to be memory bandwidth, which is why my 3090 is so close to the 5070ti above.<p>LLM next-token prediction is just a form of autoregressive decoding and will primarily be memory bound.<p>As I haven't used the larger Intel GPUs I can't comment on what still needs to be optimized, but just don't expect multiple GPUs to increase performance without some NVLink-style RDMA support _unless_ your process is compute and not memory bound.</p>
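A back-of-the-envelope sketch of why decode is memory bound; all the numbers are assumptions (~3B active params for an A3B MoE model, roughly 4.5 bits/weight for a Q4_K-style quant, 936 GB/s for the 3090):

```shell
# tokens/s ceiling ~= memory bandwidth / bytes of weights read per token
active_params=3000000000      # ~3B active params per token (A3B MoE)
tenth_bits_per_weight=45      # ~4.5 bits/weight, stored as tenths for integer math
bw_bytes=936000000000         # 3090 memory bandwidth, ~936 GB/s

bytes_per_token=$(( active_params * tenth_bits_per_weight / 10 / 8 ))
echo "$(( bw_bytes / bytes_per_token )) t/s theoretical ceiling"
```

Under those assumptions the ceiling comes out around 554 t/s; the observed 105 t/s sits well under it, which is expected once KV-cache reads, activations, and kernel overhead are added in.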
]]></description><pubDate>Fri, 17 Apr 2026 00:05:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47801047</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47801047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47801047</guid></item><item><title><![CDATA[New comment by nyrikki in "Put your SSH keys in your TPM chip"]]></title><description><![CDATA[
<p>Lots of ways to establish a persistent presence with a short-lived key; especially if it sits in an env var or a file, it is trivial to find.<p>In theory the Linux kernel keyring would help here, even with a TPM or in conjunction with one.<p>Unfortunately, as the industry abandoned the core Unix permission system (uid/gid), all of these methods just get a /dev/null bind mount.<p>Only processes that still support the traditional co-hosting model, like nginx and Postgres, do.<p>We would need nonce keys for an attacker to gain no value from kernel memory or hardware storage.</p>
]]></description><pubDate>Thu, 16 Apr 2026 18:31:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47797551</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47797551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47797551</guid></item><item><title><![CDATA[New comment by nyrikki in "Stealthy RCE on Hardened Linux: Noexec and Userland Execution PoC"]]></title><description><![CDATA[
<p>More concrete info here.<p>Container:<p><pre><code>    # ps -ef
    UID          PID    PPID  C STIME TTY          TIME CMD
    root           1       0  0 Apr11 ?        00:15:32 ./llama-server -hf...
</code></pre>
Host:<p><pre><code>    UID          PID    PPID  C STIME TTY          TIME CMD
    1000   99880   99878  0 Apr11 ?        00:15:29 ./llama-server -hf...

    $ cat /proc/99880/status | grep CapEff
    CapEff: 00000000800405fb
    $ capsh --decode=00000000800405fb
    0x00000000800405fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap

    $ cat /proc/99880/uid_map 
         0       1000          1
         1     100000      65536

</code></pre>
The <i>effective</i> uid of 0 in the container is the default Ubuntu user 1000: note how 0 is mapped to 1000, then everything else is mapped with an offset of 100000.<p>Footnote 2 here is a hint at why that is a problem; note the last line.<p><a href="https://www.kernel.org/doc/html/latest/admin-guide/namespaces/compatibility-list.html" rel="nofollow">https://www.kernel.org/doc/html/latest/admin-guide/namespace...</a><p>The cap_dac_override and cap_fowner, which the user is expected to drop, also pose a problem from the container side.<p>From the host side, this very public LD_PRELOAD method still works.<p><a href="https://www.openwall.com/lists/oss-security/2025/03/27/6" rel="nofollow">https://www.openwall.com/lists/oss-security/2025/03/27/6</a><p><pre><code>    $ uname -a
    Linux amd 6.17.0-20-generic #20-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 13 20:07:29 UTC 2026 x86_64 GNU/Linux

    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 25.10
    Release: 25.10
    Codename: questing


    $ sysctl kernel.apparmor_restrict_unprivileged_userns
    kernel.apparmor_restrict_unprivileged_userns = 1

    $ LD_PRELOAD=./shell.so /usr/bin/nautilus
    $ unshare -U -r -m /bin/sh
    # mount --bind /etc/passwd /etc/passwd
    # mount
    /dev/nvme0n1p2 on /etc/passwd type ext4 (rw,relatime)
</code></pre>
Note that I could use LD_PRELOAD, abusing weak nautilus AppArmor defaults, to escalate to root <i>in the default namespace</i> and mount over /etc/passwd!<p>Now in a container that won't get you to the host, but it will help you get rid of the pesky /dev/null and other bind mounts that prevent you from extracting data from other containers running as the same UID. But I can't find a public version of that trick, so I will leave that to the reader.<p>The point is that for unix-like OSs, privilege dropping is where most security comes from. If you run with elevated privileges and don't drop them there are always trivial holes, and the OP shows how hard that can be to constrain.</p>
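The CapEff mask shown above can also be checked bit by bit without capsh; a minimal sketch, where `has_cap` is a hypothetical helper and the bit numbers come from linux/capability.h (e.g. CAP_SETUID is bit 7, CAP_SYS_ADMIN is bit 21):

```shell
# has_cap HEXMASK BIT -- succeed if capability bit BIT is set in HEXMASK,
# where HEXMASK is the CapEff value from /proc/PID/status.
has_cap() {
  [ $(( (0x$1 >> $2) & 1 )) -eq 1 ]
}

has_cap 00000000800405fb 7  && echo "cap_setuid present"
has_cap 00000000800405fb 21 || echo "cap_sys_admin absent"
```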
]]></description><pubDate>Tue, 14 Apr 2026 02:45:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47760652</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47760652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47760652</guid></item><item><title><![CDATA[New comment by nyrikki in "The tech jobs bust is real. Don't blame AI (yet)"]]></title><description><![CDATA[
<p>Chalk and cheese; Wavelength Division Multiplexing took out Global Crossing, Worldcom, etc…<p>But the secondary market that grew out of it existed because once fiber is in the ground it has a long lifespan and low upkeep costs; this is not the same thing as ultra-high-power-density data centers.<p>Cooling needs to be balanced with demand; they may not work for even cloud-scale loads without serious issues, etc…<p>Not that it matters; my hometown has an announced DC and it is looking more and more like it is a shill, as do several of the others in the area.</p>
]]></description><pubDate>Mon, 13 Apr 2026 23:05:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759118</link><dc:creator>nyrikki</dc:creator><comments>https://news.ycombinator.com/item?id=47759118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759118</guid></item></channel></rss>