<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: toredash</title><link>https://news.ycombinator.com/user?id=toredash</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 09:22:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=toredash" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by toredash in "LittleSnitch for Linux"]]></title><description><![CDATA[
<p>Is there any DNS-based software to do block/allow? Kinda like what's present in CiliumNetworkPolicies in Kubernetes networking?</p>
]]></description><pubDate>Thu, 09 Apr 2026 05:34:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47699654</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=47699654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47699654</guid></item><item><title><![CDATA[New comment by toredash in "Kubernetes Ingress Nginx is retiring"]]></title><description><![CDATA[
<p>I often find myself trying to tell people that KISS is a good thing. If something is somewhat complex today, it will be really complex after a few years and a few rotations of personnel.</p>
]]></description><pubDate>Fri, 14 Nov 2025 08:10:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45924889</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45924889</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45924889</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>If the image repositories were AZ-bound resources, that would make the CI build process more efficient.<p>Or, if the resources that the CI build utilizes within the image (after the image is pulled and started) are AZ-bound, then yes, the build process would be improved, since the CI build would fetch AZ-local resources rather than crossing the AZ boundary.</p>
]]></description><pubDate>Thu, 16 Oct 2025 12:50:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45604698</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45604698</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45604698</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>> If you have the setup on 3 AZs how would you route traffic only to the AZ where your RDS resides?<p>So specifically for RDS, AWS provides two endpoints for the client application, a writer and a reader endpoint, similar to this:<p><pre><code>mydbcluster.cluster-c7tj4example.us-east-1.rds.amazonaws.com    : Writer endpoint
mydbcluster.cluster-ro-c7tj4example.us-east-1.rds.amazonaws.com : Reader endpoint (note the -ro part)
</code></pre><p>The writer endpoint always resolves to the active master, which is what the client application is configured to use, and that's the hostname my lookup service takes as input to determine the current location of the writer instance.<p>My solution only works for hostnames that resolve to a single IP address, so it won't work for the reader endpoint. As I wrote in the repository, a requirement for this is that "The FQDN needs to return a single A record for the external resource".</p>
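<p>The resolution step itself is just a forward DNS lookup. A minimal sketch in dependency-free Python, using AWS's documentation example endpoint, that also enforces the single-A-record requirement:<p><pre><code>import socket

# Resolve the writer endpoint and enforce the single-A-record requirement
# (gethostbyname_ex returns the full list of resolved addresses).
writer = "mydbcluster.cluster-c7tj4example.us-east-1.rds.amazonaws.com"
_, _, addrs = socket.gethostbyname_ex(writer)
assert len(addrs) == 1, f"expected a single A record, got {addrs}"
print(f"writer is currently at {addrs[0]}")
</code></pre></p>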
]]></description><pubDate>Wed, 15 Oct 2025 12:19:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45591266</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45591266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45591266</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>> And I don't understand how a tool like this fits into formal risk analysis and where it presents an optimum solution for those risks.<p>Seems it does not fit your risk analysis?</p>
]]></description><pubDate>Wed, 15 Oct 2025 09:49:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45590104</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45590104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45590104</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>> Most people should start with a single-zone setup and just accept that there's a risk associated with zone failure. If you have a single-zone setup, you have a node group in that one zone, you have the managed database in the same zone, and you're done.<p>I don't disagree, but there is one issue with this approach: RDS is a multi-AZ service by itself. That means that when a maintenance event occurs on your instance, AWS will start a new instance in a new zone and fail over to that one.<p>You could of course manually fail RDS back over to your primary zone afterwards. Not sure if that is better than manually scaling up a node pool if a zone fails.<p>> So you are presuming that, when RDS automatically fails over to zone b to account for zone a failure, that you will certainly be able to scale up a full scale production environment in zone b as well, in spite of nearly every other AWS customer attempting more or less the same strategy;<p>That's up to the user to decide via the Kyverno policy. We used the preferredDuringSchedulingIgnoredDuringExecution affinity setting to instruct the scheduler to attempt to schedule the pods in the optimal zone.<p>I believe the only way to be 100% sure that you have compute capacity available in your AWS account is to use EC2 On-Demand Capacity Reservations (<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html" rel="nofollow">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capa...</a>). If your current zone is at full capacity, and for some reason the nodes your VMs are running on die, that capacity is lost, and you won't get it back either.</p>
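<p>For reference, this is roughly the nodeAffinity fragment the policy injects, expressed here as a Python dict mirroring the Pod spec (the zone value is whatever the lookup service returned; "preferred" means the scheduler tries this zone but can fall back):<p><pre><code># Sketch of the injected affinity: a soft preference, not a hard requirement.
node_affinity = {
    "nodeAffinity": {
        "preferredDuringSchedulingIgnoredDuringExecution": [{
            "weight": 100,
            "preference": {
                "matchExpressions": [{
                    "key": "topology.kubernetes.io/zone",
                    "operator": "In",
                    "values": ["eu-central-1a"],  # from the lookup service
                }],
            },
        }],
    },
}
</code></pre></p>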
]]></description><pubDate>Wed, 15 Oct 2025 06:18:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45588675</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45588675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45588675</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>Are you thinking about already-cached container images on the host level? Not sure how AZP fits in here?<p>Since you mentioned it, what I've done before when it comes to improving CI builds is to use karpenter + local SSD mounts with very large instance types and an idle timeout of ~1h. This allowed us to have very performant build machines at a low cost. The first build of the day took a while to get going, but from a price-benefit perspective it was great.</p>
]]></description><pubDate>Wed, 15 Oct 2025 06:10:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45588622</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45588622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45588622</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>AWS publishes their own metrics for cross-AZ and intra-AZ latency: <a href="https://eu-central-1.console.aws.amazon.com/nip/" rel="nofollow">https://eu-central-1.console.aws.amazon.com/nip/</a> (Network Manager > Infrastructure Performance)<p>> In general the goal should be to deploy as much of the stack in one zone as possible<p>Agree. There can be a few downsides one has to consider if you have to fail over to another zone. Worst case, there isn't sufficient capacity available when you fail over, if everyone else is asking for capacity at the same time. If one uses e.g. karpenter, you should be able to be very diverse in the instance selection process, so that you get at least some capacity, though maybe not your preferred instance types.</p>
]]></description><pubDate>Wed, 15 Oct 2025 06:07:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=45588598</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45588598</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45588598</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>That was the origin of this solution. A client app had to issue millions of small SQL queries, where the first query had to complete before the second query could be made. Millions of milliseconds add up.<p>The lowest possible latency would of course be running the client code on the same physical box as the SQL server, but that's hard to do.</p>
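<p>To make "millions of milliseconds" concrete, a rough illustration with assumed latencies (the numbers below are made up for the example; real intra- and cross-AZ round trips vary):<p><pre><code># Sequential queries: each waits for the previous one, so per-query
# latency multiplies directly with the query count.
queries = 2_000_000
intra_az_ms, cross_az_ms = 0.2, 1.0  # assumed round-trip latencies
extra_minutes = queries * (cross_az_ms - intra_az_ms) / 1000 / 60
print(f"~{extra_minutes:.0f} extra minutes per run from AZ placement alone")
# ~27 extra minutes per run from AZ placement alone
</code></pre></p>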
]]></description><pubDate>Wed, 15 Oct 2025 05:59:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45588548</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45588548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45588548</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>I would LOVE to pitch something else I'm working on that solves this problem in EKS: cross-zone data transfer.<p>It's a plugin that enables traffic redirection for any service using an IP in a given VPC. If you have, say, multiple RDS reader instances, it will attempt to use local-AZ instances first, but the other instances remain available if the local instances are non-functional. So you do not lose HA or failover features.<p>The plugin does not require any reconfiguration of your apps. It works similarly to Topology Aware Routing (<a href="https://kubernetes.io/docs/concepts/services-networking/topology-aware-routing/" rel="nofollow">https://kubernetes.io/docs/concepts/services-networking/topo...</a>) in Kubernetes, but it works for services outside of Kubernetes. The plugin even works for non-Kubernetes setups as well.<p>This AZP solution is fine for services that have a single IP or primary instance, like the RDS writer instance. It does not work for anything that is "stateless" and multi-AZ, like RDS read-only instances or ALBs.</p>
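<p>For comparison, the in-cluster mechanism is opt-in per Service via an annotation. A minimal Service expressed as a Python dict (older clusters use the "service.kubernetes.io/topology-aware-hints": "auto" annotation instead):<p><pre><code># Topology Aware Routing for an in-cluster Service: kube-proxy prefers
# endpoints in the client's own zone when the hints allow it.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "my-app",
        "annotations": {"service.kubernetes.io/topology-mode": "Auto"},
    },
    "spec": {
        "selector": {"app": "my-app"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
</code></pre></p>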
]]></description><pubDate>Wed, 15 Oct 2025 05:57:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45588526</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45588526</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45588526</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>I was surprised too. Of course it makes sense when you look at it hard enough: two separate DCs won't have the same latency as intra-DC communication. It might have the same physical wire speed, but physical distance matters.</p>
]]></description><pubDate>Tue, 14 Oct 2025 20:14:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45584235</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45584235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45584235</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>The nice thing about this solution is that it's not limited to RDS. I used RDS as an example because many are familiar with it and know that it will change AZ during maintenance events.<p>Any hostname for a service in AWS that can relocate to another AZ (for whatever reason) can use this.</p>
]]></description><pubDate>Tue, 14 Oct 2025 19:49:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45583982</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45583982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45583982</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>> Kyverno requirement makes it limited.<p>You don't have to use Kyverno. You could use a standard mutating webhook, but you would have to generate your own certificate and mutate on every Pod CREATE operation. Not really a problem, but it depends.<p>> There is no "automatic-zone-placement-disabled"<p>True. That's why I chose to use preferredDuringSchedulingIgnoredDuringExecution instead of requiredDuringSchedulingIgnoredDuringExecution. In my case, where this solution originated from, the Kubernetes cluster was already multi-AZ, with always at least one node in each AZ. It was nice if the Pod could be scheduled into the same AZ, but it was not a hard requirement.<p>> No automatic look up of IPs and Zones.<p>Yup, it would generate a lot of extra "stuff" to mess with: IAM roles, and how to look up IP/subnet information in a multi-account AWS setup with VPC peerings. In our case it was "good enough" with a static approach. The subnet/network topology didn't change frequently enough to justify another layer of complexity (see the sketch below).<p>> What if we only have one node in specific zone?<p>That's why we defaulted to preferredDuringSchedulingIgnoredDuringExecution and not required.</p>
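<p>A sketch of what the automatic lookup would involve, assuming boto3 and ec2:DescribeSubnets permissions in each account; this is what we replaced with a static map:<p><pre><code>import boto3

# Build the subnet->AZ map from the EC2 API instead of maintaining it
# statically. In a multi-account setup this needs a role (and an API
# call) per account/region, which is the complexity we avoided.
ec2 = boto3.client("ec2", region_name="eu-central-1")
subnet_to_az = {
    s["CidrBlock"]: s["AvailabilityZone"]
    for s in ec2.describe_subnets()["Subnets"]
}
print(subnet_to_az)
</code></pre></p>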
]]></description><pubDate>Tue, 14 Oct 2025 19:47:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45583958</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45583958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45583958</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>As it stands now, it doesn't. Unless you modify the Kyverno policy to use background scanning.<p>I would create a similar policy where Kyverno would, at intervals, check the Deployment spec to see if the endpoint has changed, and alter the affinity rules. It would then be a traditional update of the Deployment spec to reflect the desire to run in another AZ, if that makes sense?</p>
]]></description><pubDate>Tue, 14 Oct 2025 18:26:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45583189</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45583189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45583189</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>Totally agree!<p>This service is published more as a concept to be built on top of than as a complete solution.<p>You wouldn't even need IAM rights to read RDS information; you need subnet information. As subnets are zonal, it does not matter if the service is RDS or Redis/ElastiCache. The IP returned from the hostname lookup, at the time your pod is scheduled, determines which AZ that Pod should (optimally) be deployed to.<p>This solution was created in a multi-account AWS environment. Doing describe-subnets API calls across multiple accounts is a hassle. It was "good enough" to have a static mapping of subnets, as they didn't change frequently.</p>
]]></description><pubDate>Tue, 14 Oct 2025 18:17:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45583091</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45583091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45583091</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>Agree, Kubernetes isn't for everyone. This solution came from a specific issue with a client which had ad hoc performance problems whenever a Pod was placed in the "incorrect" AZ. So this solution was created to place the Pods in the optimal zone when they were created.</p>
]]></description><pubDate>Tue, 14 Oct 2025 18:09:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45583021</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45583021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45583021</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>Yes, I get that. But are we talking HA for this lookup service that I've made?<p>If yes, that's a simple update of the manifest to have 3 replicas with an affinity setting to spread them out over different AZs. Kyverno would use the internal Service object this service provides to have an HA endpoint to send queries to.<p>If we are not talking about this AZP service, I don't understand what we are talking about.</p>
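<p>A sketch of that spread, as the relevant Deployment fields expressed in a Python dict ("azp-lookup" is a hypothetical app label; ScheduleAnyway keeps it a soft constraint):<p><pre><code># Run 3 replicas and spread them across zones so a single-AZ outage
# cannot take out the whole lookup service.
deployment_spec_fragment = {
    "replicas": 3,
    "template": {"spec": {
        "topologySpreadConstraints": [{
            "maxSkew": 1,
            "topologyKey": "topology.kubernetes.io/zone",
            "whenUnsatisfiable": "ScheduleAnyway",
            "labelSelector": {"matchLabels": {"app": "azp-lookup"}},
        }],
    }},
}
</code></pre></p>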
]]></description><pubDate>Tue, 14 Oct 2025 17:28:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45582637</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45582637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45582637</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>I'm not sure I follow. Are you talking about the AZP service, or ... ?</p>
]]></description><pubDate>Tue, 14 Oct 2025 16:09:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45581753</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45581753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45581753</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>If the AZP deployment fails, yes, you're correct, there are no hints anywhere. If the lookup to AZP fails for whatever reason, it would be noted in the Kyverno logs. And based on whether you -require- this policy to take effect or not, you have to decide if you want pods to fail in the scheduling step. In most cases, you don't want to stop scheduling :)</p>
]]></description><pubDate>Tue, 14 Oct 2025 16:08:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45581747</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45581747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45581747</guid></item><item><title><![CDATA[New comment by toredash in "Automatic K8s pod placement to match external service zones"]]></title><description><![CDATA[
<p>Hi HN,<p>I wanted to share something I've worked on a bit to solve a gap in Kubernetes: its scheduler has no awareness of the network topology of the external services that workloads communicate with. If a pod talks to a database (e.g. AWS RDS), K8s does not know it should schedule it in the same AZ as the database. If placed in the wrong AZ, it leads to unnecessary cross-AZ network traffic, adding latency (and costing $).<p>I've made a tool I've called "Automatic Zone Placement", which automatically aligns Pod placement with its external dependencies.<p>Testing shows that placing the pod in the same AZ resulted in a ~175-375% performance increase, measured with small, frequent SQL requests. It's not really that strange: same-AZ latency is much lower than cross-AZ. Lower latency = increased performance.<p>The tool has two components:<p>1) A lightweight lookup service: a dependency-free Python service that takes a domain name (e.g., your RDS endpoint) and resolves its IP to a specific AZ.<p>2) A Kyverno mutating webhook: this policy intercepts pod creation requests. If a pod has a specific annotation, the webhook calls the lookup service and injects the required nodeAffinity to schedule the pod onto a node in the correct AZ.<p>The goal is to make this an automatic process; the alternative is to manually add a nodeAffinity spec to your workloads. But resources move between AZs, e.g. during maintenance events for RDS instances. I built this with AWS services in mind, but the concept is generic enough to be used in on-premise clusters to make scheduling decisions based on rack, row, or data center properties.<p>I'd love some feedback on this, happy to answer questions :)</p>
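<p>To give a feel for component 1, here is a minimal sketch of the lookup side in dependency-free Python, assuming a hypothetical static subnet-to-AZ map and a hypothetical ?host= query API; the real service's shape may differ:<p><pre><code>import ipaddress, json, socket
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Hypothetical static mapping of VPC subnets to availability zones.
SUBNET_TO_AZ = {
    "10.0.0.0/24": "eu-central-1a",
    "10.0.1.0/24": "eu-central-1b",
}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /?host=mydb.cluster-xyz.eu-central-1.rds.amazonaws.com
        host = parse_qs(urlparse(self.path).query).get("host", [""])[0]
        ip = ipaddress.ip_address(socket.gethostbyname(host))
        az = next((az for cidr, az in SUBNET_TO_AZ.items()
                   if ip in ipaddress.ip_network(cidr)), "unknown")
        body = json.dumps({"host": host, "ip": str(ip), "zone": az}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), Handler).serve_forever()
</code></pre><p>Kyverno would then take the returned zone and inject it into a preferredDuringSchedulingIgnoredDuringExecution nodeAffinity term, as described above.</p>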
]]></description><pubDate>Wed, 08 Oct 2025 05:21:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45512352</link><dc:creator>toredash</dc:creator><comments>https://news.ycombinator.com/item?id=45512352</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45512352</guid></item></channel></rss>