<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: chris_marino</title><link>https://news.ycombinator.com/user?id=chris_marino</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 06 May 2026 22:52:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=chris_marino" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by chris_marino in "An Update on Heroku"]]></title><description><![CDATA[
<p>Salesforce, like every large enterprise software company, has a formal (and strict) End of Life process. It starts with an announcement like this indicating End of Sale; then, once the contract obligations are met, they can end support, and finally reach EoL.<p>There is no way they can avoid this kind of public notice.</p>
]]></description><pubDate>Fri, 06 Feb 2026 22:50:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46919272</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=46919272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46919272</guid></item><item><title><![CDATA[New comment by chris_marino in "An Update on Heroku"]]></title><description><![CDATA[
<p>This news from Heroku does not come as any surprise to the people who were there (as I was). Lots of moving parts and second-guessing (that I won't share), but one thing I will say is: incentives matter.<p>The seeds of this outcome were planted years ago when sales comp plans changed. When a sales rep can hit their target by simply converting the way an existing customer gets billed, none of them look for new business. They don't need new leads. They don't need to win competitive deals. Yet finding new customers and losing opportunities are the only things that signal and drive innovation. And from a budgeting perspective, why increase investment in a product that already hits or exceeds its sales targets?<p>Over time, sales targets get met, but the product doesn't advance. By the time all existing customers who can convert have converted, the product is no longer competitive. Like bankruptcy, it comes gradually, then suddenly.</p>
]]></description><pubDate>Fri, 06 Feb 2026 21:58:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46918705</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=46918705</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46918705</guid></item><item><title><![CDATA[New comment by chris_marino in "IPvlan overlay-free Kubernetes Networking in AWS"]]></title><description><![CDATA[
<p>I knew that. What I didn't know was whether either of these could apply network policy to those endpoints. My guess is that since they each require their own CNI, there will be problems. So whether or not the CNI uses iptables, it's not clear how the network policy API can be enforced.</p>
]]></description><pubDate>Tue, 05 Dec 2017 19:55:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=15855197</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=15855197</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15855197</guid></item><item><title><![CDATA[New comment by chris_marino in "IPvlan overlay-free Kubernetes Networking in AWS"]]></title><description><![CDATA[
<p>>The ARP table might be bigger, but that's a different issue.<p>But this is the problem that most designs are trying to solve. Large L2s are notoriously fragile. 1,000 nodes at 50-100 pods/node means 50,000-100,000 ARP entries. And sometimes you <i>want</i> partitions between endpoints for security/isolation.<p>I agree with you about static assignment of addresses. But that's why (most) CNIs work with a controller of some kind for IPAM.<p>IMO, the problem complexity is hard to compress. You need to distribute/manage MAC addresses, routes, and/or state. Different designs favor one over the others.</p>
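<p>A back-of-the-envelope sketch in Go (node and pod counts from above; the 1024 default for net.ipv4.neigh.default.gc_thresh3 is Linux's stock neighbor-table hard limit):<p><pre><code>package main

import "fmt"

func main() {
	nodes := 1000
	podsPerNode := 50 // low end of the 50-100 range
	entries := nodes * podsPerNode

	// Linux caps the neighbor (ARP) table at gc_thresh3 entries, 1024
	// by default, so a flat L2 at this scale needs the limit raised
	// roughly 50x on every host on the segment.
	gcThresh3 := 1024
	fmt.Printf("neighbor entries needed: %d (default limit: %d)\n",
		entries, gcThresh3)
}</code></pre>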
]]></description><pubDate>Tue, 05 Dec 2017 19:37:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=15855020</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=15855020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15855020</guid></item><item><title><![CDATA[New comment by chris_marino in "IPvlan overlay-free Kubernetes Networking in AWS"]]></title><description><![CDATA[
<p>It's all about tradeoffs. We've built a CNI for k8s and have looked into all of the techniques described. It seems that Lyft's design is a direct reflection of their requirements.<p>To the extent your requirements match theirs, this could be a good alternative. The most significant, in my mind, is that it's meant to be used in conjunction with Envoy. Envoy itself has its own set of design tradeoffs as well.<p>For example, Lyft currently uses 'service-assigned EC2 instances'. It's not hard to see how this starting point would influence the design. The Envoy/Istio model of a proxy per pod also reflects this kind of workload partitioning. Obviously, a design for a small number of pods (each with its own proxy) per instance is going to be very different from one that needs to handle 100 pods (and their IPs), or more, per instance.<p>Another is that k8s network policy can't be applied, since 'Kubernetes Services see connections from a node’s source IP instead of the Pod’s source IP'. But I don't think this CNI is intended to work with any other network policy API enforcement mechanism. Romana (the project I work on) and the other CNI providers that use iptables to enforce network policy rely on seeing the pod's source IP.<p>Again, this might be fine if you're running Envoy. On the other hand, L3 filtering on the host might be important.<p>Also, this design requires that 'CNI plugins communicate with AWS networking APIs to provision network resources for Pods'. This may or may not be something you want your instances to do.<p>FWIW, Romana lets you build clusters larger than 50 nodes without an overlay, more 'exotic networking techniques', or 'massive' complexity. It does this via simple route aggregation, using completely standard networking.</p>
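<p>For reference, the kind of policy object at stake, as a minimal sketch using client-go types (the names and labels are invented). Enforcers like Romana translate the pod selectors into iptables matches against pod source IPs, which is why traffic SNATed to the node IP defeats them:<p><pre><code>package main

import (
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Admit ingress to app=backend pods only from role=frontend pods.
	// Matching "from a frontend pod" requires seeing the pod's source IP.
	policy := netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-frontend", Namespace: "demo"},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "backend"},
			},
			Ingress: []netv1.NetworkPolicyIngressRule{{
				From: []netv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"role": "frontend"},
					},
				}},
			}},
		},
	}
	fmt.Println("defined policy:", policy.Name)
}</code></pre>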
]]></description><pubDate>Tue, 05 Dec 2017 16:50:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=15852988</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=15852988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15852988</guid></item><item><title><![CDATA[New comment by chris_marino in "IPvlan overlay-free Kubernetes Networking in AWS"]]></title><description><![CDATA[
<p>> Just divide up whatever IP network you're using (e.g. 10/8) and make sure you allocate "enough" to each rack/whatever.<p>Easier said than done. Most datacenters are a bit more deliberate about allocating addresses and hand them out in non-contiguous CIDRs. The VLAN mentality is still very prevalent. Getting a /20 at a time is pretty common.<p>Using overlapping IPs puts you right back into the overlay model.<p>>Assuming everything is nice and hierarchical, you can easily aggregate an entire rack to a single prefix.<p>Yes, exactly. The trick then becomes: how do you ensure that endpoints created within the rack get an IP from that prefix? Romana (the project I work on) does this. It lets you capture your network topology for exactly this reason. This is especially important if/when you must filter routes at the ToR.</p>
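<p>A minimal sketch of the idea in Go (the 10.24.0.0/16 block and rack numbering are invented for illustration):<p><pre><code>package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Carve 10.24.0.0/16 so that rack r owns 10.24.r.0/24. The ToR then
	// needs exactly one route (and one filter entry) per rack.
	rack := 3
	rackPrefix := netip.PrefixFrom(netip.AddrFrom4([4]byte{10, 24, byte(rack), 0}), 24)
	fmt.Println("rack prefix, one ToR route:", rackPrefix)

	// Topology-aware IPAM then only has to hand out pod/endpoint IPs
	// from the prefix of the rack the workload lands on.
	pod := netip.AddrFrom4([4]byte{10, 24, byte(rack), 17})
	fmt.Println("pod covered by the rack route:", rackPrefix.Contains(pod))
}</code></pre>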
]]></description><pubDate>Tue, 05 Dec 2017 15:31:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=15852287</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=15852287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15852287</guid></item><item><title><![CDATA[New comment by chris_marino in "IPvlan overlay-free Kubernetes Networking in AWS"]]></title><description><![CDATA[
<p>Both the Lyft and AWS CNIs use ENIs; Romana's CNI does not. But more specifically, vpc-router works along with Romana's IPAM to aggregate routes so that each VPC route can forward traffic for multiple instances. So instead of one route per instance, you need only one route per n instances, where n is set by how much aggregation you want (configurable).<p>The net effect is that you can build large clusters without running out of VPC routes, and no overlay is needed when traffic crosses AZs.<p>When a route is used to forward traffic for multiple instances, the target instance acts as a router and forwards traffic to the final destination instance. This works because instances within an AZ have routes installed on them to the pod CIDRs on the other instances in the zone, so any one of them can perform this forwarding function.<p>Romana only piggybacks routes when there are no more VPC routes available, so for small clusters it's just like kubenet. For large clusters it uses all the instances to forward traffic so that none of them becomes a bottleneck.</p>
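<p>A toy illustration of the aggregation (CIDRs and instance IDs invented; here n=4):<p><pre><code>package main

import "fmt"

func main() {
	// Four instances each own a /26 slice of one /24 of pod address space.
	perInstance := []struct{ id, podCIDR string }{
		{"i-aaa", "10.32.5.0/26"},
		{"i-bbb", "10.32.5.64/26"},
		{"i-ccc", "10.32.5.128/26"},
		{"i-ddd", "10.32.5.192/26"},
	}

	// One VPC route covers all four; its target acts as the in-AZ forwarder.
	fmt.Println("VPC route table: 10.32.5.0/24 -> i-aaa")

	// Every instance in the AZ already has host routes to the others'
	// pod CIDRs, so the forwarder can deliver to the final destination.
	for _, inst := range perInstance {
		fmt.Printf("host route on i-aaa: %s -> %s\n", inst.podCIDR, inst.id)
	}
}</code></pre>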
]]></description><pubDate>Tue, 05 Dec 2017 14:56:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=15852031</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=15852031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15852031</guid></item><item><title><![CDATA[New comment by chris_marino in "Connection tracking critical for high performance network policy for Kubernetes"]]></title><description><![CDATA[
<p>We ran some performance benchmarks applying network policy to Kubernetes pods and found that latency was (nearly) independent of the number of iptables rules applied. This is because once the session is set up, connection tracking lets subsequent packets be forwarded right away.</p>
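<p>A toy model of the effect (the numbers are arbitrary; the shape is the point):<p><pre><code>package main

import "fmt"

type flowKey struct{ src, dst string }

// cost models the filtering work per packet: a NEW flow pays O(rules)
// once at session setup; ESTABLISHED packets are one conntrack lookup.
func cost(conntrack map[flowKey]bool, k flowKey, rules int) int {
	if conntrack[k] {
		return 1 // ESTABLISHED: forwarded right away
	}
	conntrack[k] = true
	return rules // NEW: traverses the whole rule list once
}

func main() {
	conntrack := map[flowKey]bool{}
	k := flowKey{"10.0.1.5", "10.0.2.9"}
	fmt.Println("first packet:", cost(conntrack, k, 1000)) // 1000
	fmt.Println("later packet:", cost(conntrack, k, 1000)) // 1
}</code></pre>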
]]></description><pubDate>Mon, 26 Sep 2016 14:42:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=12582200</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=12582200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12582200</guid></item><item><title><![CDATA[Connection tracking critical for high performance network policy for Kubernetes]]></title><description><![CDATA[
<p>Article URL: <a href="http://blog.kubernetes.io/2016/09/high-performance-network-policies-kubernetes.html">http://blog.kubernetes.io/2016/09/high-performance-network-policies-kubernetes.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=12582163">https://news.ycombinator.com/item?id=12582163</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 26 Sep 2016 14:38:03 +0000</pubDate><link>http://blog.kubernetes.io/2016/09/high-performance-network-policies-kubernetes.html</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=12582163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12582163</guid></item><item><title><![CDATA[New comment by chris_marino in "One way to make containers network: BGP"]]></title><description><![CDATA[
<p>Another solution to this problem is Romana [1] (I am part of this effort). It avoids overlays as well as BGP because it aggregates routes. It uses its own IP address management (IPAM) to maintain the route hierarchy.<p>The nice thing about this is that nothing has to happen for a new pod to be reachable. No /32 route distribution or BGP (or etcd) convergence, no VXLAN ID (VNID) distribution for the overlay. At some scale, route and/or VNID distribution is going to limit the speed at which new pods can be launched.<p>One other thing not mentioned in the blog post or in any of these comments is network policy and isolation. Kubernetes v1.3 includes the new network APIs that let you isolate namespaces. This can only be achieved with a back-end network solution like Romana or Calico (and some others).<p>[1] romana.io</p>
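<p>A minimal sketch of why nothing has to converge (prefixes invented): the host's block is advertised once, and every pod address allocated inside it is already covered.<p><pre><code>package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Advertised once when the host joins; never touched again.
	hostPrefix := netip.MustParsePrefix("10.1.7.0/25")

	// A trivial bump allocator: each new pod takes the next address in
	// the host's block, so no /32 route or VNID ever gets pushed out.
	addr := netip.MustParseAddr("10.1.7.2")
	for pod := range 3 {
		fmt.Printf("pod %d: %s (inside %s, zero route updates)\n",
			pod, addr, hostPrefix)
		addr = addr.Next()
	}
}</code></pre>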
]]></description><pubDate>Mon, 18 Jul 2016 01:25:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=12112763</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=12112763</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12112763</guid></item><item><title><![CDATA[New comment by chris_marino in "Comparison of Networking Solutions for Kubernetes"]]></title><description><![CDATA[
<p>Great post!<p>The results of these benchmarks do not surprise me at all. To me, they all fall into the category of 'less (overhead) is more (performance)', with VXLAN encap being the obvious example of the greatest overhead.<p>I think it's also worth mentioning that k8s networking is being enhanced in v1.2 to support isolation and multi-tenancy through ThirdParty resources (back-end network solutions). The alternatives included in the benchmarks aren't going to be able to support these kinds of network policy as is.<p>And, unfortunately, things get a bit more complicated when you want to provide more than simple reachability (which is all that k8s asks for today). The tradeoff is to be able to control the packets with the lowest overhead possible. VXLANs will give you isolation, but at the cost of encapsulation. Stacking bridges and tunnels and distributing VNIDs/routes not only introduces more latency, but becomes another multi-host coordination problem (matching tunnel IDs, etc.).<p>We're working on a new way to build cloud native networks that avoids the encap but still lets you control all the packets.<p>You can learn more at <a href="http://romana.io" rel="nofollow">http://romana.io</a> if you're interested.</p>
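<p>To make the encap cost concrete (these are the standard VXLAN-over-IPv4 header sizes):<p><pre><code>package main

import "fmt"

func main() {
	// VXLAN wraps every inner frame in a full outer header stack.
	outerEth, outerIPv4, outerUDP, vxlanHdr := 14, 20, 8, 8
	overhead := outerEth + outerIPv4 + outerUDP + vxlanHdr // 50 bytes

	mtu := 1500
	fmt.Printf("per-packet overhead: %d bytes; effective inner MTU: %d\n",
		overhead, mtu-overhead)
}</code></pre>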
]]></description><pubDate>Fri, 19 Feb 2016 16:27:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=11134436</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=11134436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11134436</guid></item><item><title><![CDATA[New comment by chris_marino in "Thought this was impossible? IP address failover across cloud/provider networks"]]></title><description><![CDATA[
<p>There is still the issue of replicating content, but that's something that can be handled in a number of different ways, depending on the requirements.</p>
]]></description><pubDate>Mon, 02 Apr 2012 17:34:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=3789194</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=3789194</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=3789194</guid></item><item><title><![CDATA[New comment by chris_marino in "A human readable REST API for software defined networks (github link in text)"]]></title><description><![CDATA[
<p>Cool interface with JSON representations as well....</p>
]]></description><pubDate>Tue, 20 Dec 2011 20:07:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=3374909</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=3374909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=3374909</guid></item><item><title><![CDATA[VEPA: A standard only a network hardware vendor could love]]></title><description><![CDATA[
<p>Article URL: <a href="http://blog.vcider.com/2011/01/vepa/">http://blog.vcider.com/2011/01/vepa/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=2403281">https://news.ycombinator.com/item?id=2403281</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 03 Apr 2011 17:35:23 +0000</pubDate><link>http://blog.vcider.com/2011/01/vepa/</link><dc:creator>chris_marino</dc:creator><comments>https://news.ycombinator.com/item?id=2403281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=2403281</guid></item></channel></rss>