<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gerhardlazu</title><link>https://news.ycombinator.com/user?id=gerhardlazu</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 01 May 2026 23:36:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gerhardlazu" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gerhardlazu in "Ask HN: Who is hiring? (May 2026)"]]></title><description><![CDATA[
<p>Loophole Labs | Senior Systems Engineer | REMOTE (Americas & Europe) | $150k–195k USD + equity | Go, Zig, Rust, C, eBPF, CRIU, Kubernetes<p>We make Architect: a Kubernetes runtime that hibernates idle pods in place and wakes them in 50ms with TCP connections intact. Five engineers. You'd be the 6th.<p>If tracing an x86 instruction in the morning and hunting a control-plane race in the afternoon both sound fun, and you insist on measuring rather than guessing, this is the job.<p>Customers run Architect for workloads where cold starts hurt: real-time voice & video AI agents, long-warming JVM apps, stateful data services that can't be rescheduled cheaply. 1.0 shipped Q1 2026; you'd join mid-way through 2.0. Seed-stage, VC-funded, a few years of runway. Fully distributed across the Americas and Europe.<p>What you'd work on:<p>- Hibernation surface: containerd shims and CRUISE, our Zig-native CRIU replacement.<p>- Control plane: per-node DaemonSet streaming checkpoints; admission controller resizing hibernated pods in place.<p>- Networking and migration: eBPF/XDP at line rate; cross-node live migration to production; cross-cloud next.<p>You're a senior generalist: years across the stack, from assembly to frontends, comfortable working close to the hardware, x86 included. Tests ship with the code, decisions get worked out in writing, and you measure rather than guess. Strong in Go; willing to use Zig, Rust, or C.<p>Bonus: eBPF/XDP, CRIU, Linux kernel internals, containerd, gVisor, live migration, or public writing in kernel/container/eBPF land. Strong systems depth and the willingness to pick up the rest is enough on its own.<p>Apply: <a href="https://loopholelabs.io/careers" rel="nofollow">https://loopholelabs.io/careers</a> - we respond within the week (typically a few days).</p>
]]></description><pubDate>Fri, 01 May 2026 15:06:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47975670</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=47975670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47975670</guid></item><item><title><![CDATA[New comment by gerhardlazu in "Ask HN: How does your CI/CD stack look like today?"]]></title><description><![CDATA[
<p>Another <a href="https://dagger.io">https://dagger.io</a> fan here. Have been using it since late 2021 to continuously deploy a Phoenix app to Fly.io: <a href="https://github.com/thechangelog/changelog.com/pull/395">https://github.com/thechangelog/changelog.com/pull/395</a>. Every commit goes into production.<p>This is what the GHA workflow currently looks like: <a href="https://github.com/thechangelog/changelog.com/blob/c7b8a57b28ee4b747163dc7d56fb82162faedfac/.github/workflows/ship_it.yml">https://github.com/thechangelog/changelog.com/blob/c7b8a57b2...</a><p>FWIW, you can see how everything fits together in this architecture diagram: <a href="https://github.com/thechangelog/changelog.com/blob/master/INFRASTRUCTURE.md">https://github.com/thechangelog/changelog.com/blob/master/IN...</a></p>
]]></description><pubDate>Mon, 31 Jul 2023 10:49:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=36940842</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=36940842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36940842</guid></item><item><title><![CDATA[New comment by gerhardlazu in "Fly.io Postgres cluster down for 3 days, no word from them about it"]]></title><description><![CDATA[
<p>I really like the work that you're doing Thomas, this is the right approach. FWIW, <a href="https://fly.io/blog/carving-the-scheduler-out-of-our-orchestrator/">https://fly.io/blog/carving-the-scheduler-out-of-our-orchest...</a> is one of my favourite posts on your blog.<p>For everyone else reading this, we have been running <a href="https://changelog.com" rel="nofollow noreferrer">https://changelog.com</a> on Fly.io since April 2022. This is what our architecture currently looks like: <a href="https://github.com/thechangelog/changelog.com/blob/master/INFRASTRUCTURE.md">https://github.com/thechangelog/changelog.com/blob/master/IN...</a><p>After 15 months & more than 100 million requests served by our Phoenix + PostgreSQL app running on Fly.io, I would be hard pressed to find a reason to complain.
- Some deploys failed, and re-running the pipeline fixed it.
- Early July 2023, 9k requests from Frankfurt returned 503s. Issue lasted 10 seconds. 
- While experimenting with machines, after many creations & deletions, one volume could not be deleted. The next day, the volume was gone.<p>That's about it after 15 months of running production workloads on Fly.io.<p>We often talk about our Fly.io experience in our Kaizen podcast episodes, which we publish every ~2 months: <a href="https://changelog.com/topic/kaizen" rel="nofollow noreferrer">https://changelog.com/topic/kaizen</a>. For anyone curious, this is the episode in which we announced the migration: <a href="https://changelog.com/shipit/50" rel="nofollow noreferrer">https://changelog.com/shipit/50</a>. There is a detailed PR which goes with it: <a href="https://github.com/thechangelog/changelog.com/pull/407">https://github.com/thechangelog/changelog.com/pull/407</a>. We've been talking about our migration plan from apps v1 (Nomad) to apps v2 (flyd) recently: <a href="https://changelog.com/friends/2#transcript-138" rel="nofollow noreferrer">https://changelog.com/friends/2#transcript-138</a><p>I'm sorry to hear that many of you didn't have the best experience. I know that things will continue improving at Fly.io. My hope is that one day, all these hard times will make for great stories. This gives me hope: <a href="https://community.fly.io/t/reliability-its-not-great/11253">https://community.fly.io/t/reliability-its-not-great/11253</a><p>Keep improving.</p>
]]></description><pubDate>Sun, 23 Jul 2023 12:29:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=36834726</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=36834726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36834726</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>changelog.com used to be WordPress, then became a Phoenix app because it needed features that were hacky to implement & then manage in WP. It's more of a podcasting platform these days rather than a CMS.<p>The code in this repo tells the truth about what it is, and even shows how it works: <a href="https://github.com/thechangelog/changelog.com" rel="nofollow">https://github.com/thechangelog/changelog.com</a></p>
]]></description><pubDate>Sun, 27 Dec 2020 17:28:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=25552563</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25552563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25552563</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>I've never heard of ObjectiveFS, thanks for the share! I am more inclined to try out Rook, OpenEBS or Longhorn as open source alternatives.<p>I've just added YugabyteDB to my explore list.</p>
]]></description><pubDate>Sun, 27 Dec 2020 17:20:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=25552498</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25552498</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25552498</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>For what it's worth, Rook, OpenEBS or Longhorn are worth exploring.</p>
]]></description><pubDate>Sun, 27 Dec 2020 17:17:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=25552476</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25552476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25552476</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>My first Supermicro just turned 9 and it's still running strong, with a fresh install of Ubuntu 20.04 & k3s over the holidays. The second Supermicro turned 5, and has been running FreeBSD all this time like a champ. They are both loft guardians.<p>A bunch of bare metal hosts run on Scaleway / Online, and different VMs & managed services run in Digital Ocean, Linode, AWS & GCP. I sometimes spin up the odd bare metal instance on Equinix Metal (former Packet).<p>A diverse fleet means that there's always something new to learn and try out. A single large host would make me anxious, as no internet provider or power grid is 100% reliable and available. Also, software upgrades sometimes fail, and things get messed up all the time, which is when I find it most efficient to just start from scratch. A single host makes that less convenient.<p>Every approach has its pros and cons, which is why my main workstation is a 20-core Xeon W with 64GB RAM & 1TB NVMe : ). Yes, there is a backup workstation which doubles as a mobile one, meaning that it can work without mains power or wired internet for almost a day. Options are good ; )</p>
]]></description><pubDate>Sun, 27 Dec 2020 17:16:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=25552471</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25552471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25552471</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>> Does this imply there is a cloud abstract layer that should come<p>crossplane.io comes closest afaik<p>> And is k8s the simplest possible abstraction? And if not - what is?<p>If you are asking about the simplest possible abstraction for container scheduling and orchestration, then I believe Nomad from HashiCorp or Docker Swarm are simpler. As for managed solutions with wide adoption in all types of environments and the largest investment to date, I am not aware of anything on par with K8S.</p>
]]></description><pubDate>Sun, 27 Dec 2020 16:53:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=25552283</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25552283</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25552283</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>We are both! I would also add lazy to that paradox. My surname is a letter off, and that's as close as it gets : )<p>The devil is in the details: there is more to it than dynamic & static content. We are using Fastly; otherwise we couldn't serve all the traffic that we do.<p>The best part is that it's all public - <a href="https://github.com/thechangelog/changelog.com" rel="nofollow">https://github.com/thechangelog/changelog.com</a> - and we welcome contributions, especially those that simplify our setup without compromising on resiliency and availability. I'm looking forward to yours ; )</p>
]]></description><pubDate>Sun, 27 Dec 2020 00:40:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=25547722</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25547722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25547722</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>K8S is an API that the majority is agreeing on, which is rare. There is a lot of amazing tooling and a staggering amount of ongoing innovation, all built on solid concepts: declarative models, emitted metrics (the /proc equivalent, but with larger scope) and versioned infrastructure as data (a.k.a. GitOps).<p>For someone who is known as the King of Bash (self-proclaimed) - <a href="https://speakerdeck.com/gerhardlazu/how-to-write-good-bash-code-using-tdd" rel="nofollow">https://speakerdeck.com/gerhardlazu/how-to-write-good-bash-c...</a> - and after a decade of Puppet, Chef, Ansible and oh wow that sweet bash <a href="https://github.com/gerhard/deliver" rel="nofollow">https://github.com/gerhard/deliver</a> - even if all my workstations and work servers (yup, all running k3s) are provisioned with Make (bash++), I still think that K8S is the better approach to running production infrastructure. Using simple and well-defined components (e.g. external-dns, ingress-nginx, prometheus-operator etc.) that adhere to a universal API, and that are maintained by many smart people all around the world, is a better proposition than scripting, in my opinion.<p>At the end of the day, I'm in it for the shared mindset, great conversations and a genuine desire to do better, which I had not seen before K8S & the wider CNCF. I will go out on a limb here and assume that I love scripting just as much as you do, but go beyond this aspect and you will discover that there is more to it than "thin install scripts that deploy containers" (and containers are not just glorified jails or unikernels).</p>
]]></description><pubDate>Sun, 27 Dec 2020 00:30:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=25547664</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25547664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25547664</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>The primary reason behind the move was not wanting to manage CI. Since there were no options for a managed Concourse in 2018, we migrated to Circle, one of the Changelog sponsors at the time.<p>Concourse worked well for us; we didn't have any issues big enough to remember. You may be interested in this screenshot that captured the changelog.com pipeline from 2017: <a href="https://pipeline.gerhard.io/images/small-oss.png" rel="nofollow">https://pipeline.gerhard.io/images/small-oss.png</a><p>I missed the simple Concourse pipeline view at first, but CircleCI improved by leaps and bounds in 2020, and the new Circle pipeline view equivalent is even better (compared to Concourse, clicking on jobs always works): <a href="https://app.circleci.com/pipelines/github/thechangelog/changelog.com/251/workflows/5836f363-060a-4bc5-8b9a-29988e59f0ad" rel="nofollow">https://app.circleci.com/pipelines/github/thechangelog/chang...</a><p>The Circle feature which I didn't expect to like as much as I do today is the dashboard view (list of all pipeline/workflow runs). This is something that Concourse is still missing: <a href="https://app.circleci.com/pipelines/github/thechangelog" rel="nofollow">https://app.circleci.com/pipelines/github/thechangelog</a><p>My favourite Circle 2020 feature is the Insights: <a href="https://app.circleci.com/insights/github/thechangelog/changelog.com/workflows/build/overview" rel="nofollow">https://app.circleci.com/insights/github/thechangelog/change...</a>. Yup, we were one of the first ones to ask for it in 2019.<p>In 2021, I expect us to spend one migration credit on GitHub Actions, as a Circle replacement. Argo is a close second, but that requires an innovation credit, which is more precious to us. Because we are already using GitHub Actions for some automation, it would make sense to consolidate, and also to leverage the GitHub Container Registry as a migration target from Docker Hub. 
Watch <a href="https://github.com/thechangelog/changelog.com" rel="nofollow">https://github.com/thechangelog/changelog.com</a> to see what happens : )</p>
]]></description><pubDate>Sun, 27 Dec 2020 00:01:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=25547518</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25547518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25547518</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>Yes, it does make sense to move static files to object storage, especially the mp3s. There is some ffmpeg-related refactoring that we need to do before we can do this though, and it's not a quick & easy task, so we have been deferring it since it's not that high priority, and there are simpler solutions to this particular problem (i.e. improved CDN caching).<p>Other static files such as css, js, txt make sense to remain bundled with the app image, which is stateless and a prime candidate for horizontal scaling. Also, CDN caching makes small static files that change infrequently a non-issue, regardless of their origin.<p>The managed Postgres service from Linode's 2021 roadmap is definitely something that we are looking forward to, but the simplest thing might be to provision Postgres with local volumes instead. We are already using a replicated Postgres via the Crunchy PostgreSQL Operator, so I'm looking forward to trying this approach out first.<p>CockroachDB is on my list of cool tech to experiment with, but that will use an innovation token, and we only have a few left for 2021, so I don't want to spend them all at once.</p>
]]></description><pubDate>Sat, 26 Dec 2020 23:38:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=25547394</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25547394</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25547394</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>I use Digital Ocean, Scaleway, Linode, Google, Cloudflare & Amazon on a daily basis, and I have experienced networking issues on all providers this year. It's all public, some even wrote lengthy post-mortems, most have been posted on HN.<p>When failures happen, it's always a series of unfortunate incidents. When we've hit issues with Linode, we reached out and worked out what we could improve in our changelog.com setup, and discussed the improvements that we could expect on the Linode end. Our common interest is a more resilient system, which requires a healthy collaboration, and Linode has been a great technology partner for us. Expect to see these write-ups on changelog.com as soon as these improvements have shipped, and we have hard data to support the claims ; )<p>I'm sorry to hear that things have not been as smooth for you on Linode. I hope that you will find an infra provider that you will be able to rely on and work with as we do. Not all collaborations will work out, and that's OK. It's also OK to be annoyed, fed up with the way things are and look for something different, something more suitable for you. My only ask is that you share your migration story with the changelog.com community. That is something that I would want to hear about.</p>
]]></description><pubDate>Sat, 26 Dec 2020 23:23:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=25547286</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25547286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25547286</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>That was my favourite part too!<p>Yes, we could have mitigated that entirely with CDN stale caching, but it was good to see what happens today, and then iterate towards better Fastly integration.</p>
]]></description><pubDate>Sat, 26 Dec 2020 18:03:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=25545125</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25545125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25545125</guid></item><item><title><![CDATA[New comment by gerhardlazu in "New setup for 2020"]]></title><description><![CDATA[
<p>Block storage is an area that we are working with Linode to improve. That's the random read/write performance, as measured by fio.<p>We have mostly sequential reads & writes (mp3 files) that peak at 50MB/s, then rely on CDN caching (Fastly makes us happy in this respect).<p>CDN caching is something that we are currently improving, which will make things quicker and more reliable.<p>The focus is on reality vs the ideal, and the path that we are taking to improving not just changelog.com, but also our sponsors' products. No managed K8S or IaaS is perfect, but we enjoy the Linode partnership & collaboration ;)</p>
]]></description><pubDate>Sat, 26 Dec 2020 18:01:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=25545105</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=25545105</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25545105</guid></item><item><title><![CDATA[New comment by gerhardlazu in "Show HN: Amqphosting, Managed RabbitMQ service"]]></title><description><![CDATA[
<p>For a stable RabbitMQ cluster, you want dedicated RabbitMQ hosts with sufficient CPU, disk & network throughput for your workload. Most RabbitMQ users don't know what their workload is, or what their hardware boundaries are. We, the RabbitMQ team, should make this easier - and we will, in due course.<p>A good default cluster is 3 x r4.large with 100GB GP2 for RABBITMQ_MNESIA_BASE & pause_minority. For queues that need HA, a good default is ha-mode: exactly, ha-params: 2, ha-sync-mode: automatic. As for the Erlang version, we recommend 19.3.6.2, which has important fixes relevant for RabbitMQ. Today we recommend RabbitMQ 3.6.11, and 3.6.12 as soon as it ships.<p>In the past 6 months, I have been focusing on RabbitMQ stability and operability on AWS, GCP & vSphere. Can you tell me more about your RabbitMQ deployment, lobster_johnson? This will help: <a href="https://s3-eu-west-1.amazonaws.com/rabbitmq-share/help-us-understand-your-rmq-deployment/Help.Us.Understand.Your.RMQ.Use-Case-2017.08.14.pdf" rel="nofollow">https://s3-eu-west-1.amazonaws.com/rabbitmq-share/help-us-un...</a><p>I wouldn't mind moving this discussion to the rabbitmq-users mailing list, so that it can benefit more people in the RabbitMQ community.<p>Thanks, Gerhard</p>
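<p>For anyone wanting to try those HA defaults: in RabbitMQ they are applied as a policy rather than per-queue arguments. A minimal sketch - the policy name "ha-two" and the catch-all pattern "^" are illustrative choices, not from the comment above:</p>

```shell
# Mirror each matching queue to exactly 2 nodes, with automatic sync
# of new mirrors. "--apply-to queues" limits the policy to queues only.
rabbitmqctl set_policy --apply-to queues ha-two "^" \
  '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
```

<p>Policies can also be managed from the management UI; in production a pattern narrower than "^" is usually wiser than mirroring everything.</p>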
]]></description><pubDate>Mon, 28 Aug 2017 10:05:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=15115545</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=15115545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15115545</guid></item><item><title><![CDATA[New comment by gerhardlazu in "Concourse CI"]]></title><description><![CDATA[
<p>Disclaimer: I work for Pivotal, on the RabbitMQ team. We push Concourse to its limits every day. We work closely with Jenkins, GoCD, Travis & Concourse. They all have their limitations.<p>All things will break horribly if the conditions are right. It's unreasonable to assume that the things which work in [insert your current CI] will work in Concourse. It's still a new and relatively immature product, but it works well in most cases.<p>Half the secret to a good Concourse experience is not upgrading it in-place - stand up fresh deployments. The other half is gradually transitioning between Concourse deployments, because bad versions have been and will continue to be released - mistakes are only human. As long as you share the Concourse vision and are willing to keep up with the pace of change - not everyone can or wants to - then it's an amazing CI.<p>Concourse still makes me excited, even after many years of hard lessons, because it is a genuinely innovative approach to building better software. Most miss this, and I understand why, but give it time - the ideas behind it will mature and become the norm.<p>Even though Concourse can work really well, it's not always the best choice. Make it better if you can & want to, use something else if it's easier. There is no right or wrong, just preferences : )</p>
]]></description><pubDate>Wed, 19 Jul 2017 12:02:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=14803652</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=14803652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=14803652</guid></item><item><title><![CDATA[New comment by gerhardlazu in "Fucking Shell Scripts"]]></title><description><![CDATA[
<p>I'm all Ansible & Docker on my personal projects / servers, what a blast! <a href="http://gerhard.lazu.co.uk/ansible-docker-the-path-to-continuous-delivery-1" rel="nofollow">http://gerhard.lazu.co.uk/ansible-docker-the-path-to-continu...</a><p>More focus on the "why", less on the "how" <a href="http://thechangelog.com/ansible-docker/" rel="nofollow">http://thechangelog.com/ansible-docker/</a></p>
]]></description><pubDate>Sat, 15 Mar 2014 07:17:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=7403924</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=7403924</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=7403924</guid></item><item><title><![CDATA[New comment by gerhardlazu in "Fucking Shell Scripts"]]></title><description><![CDATA[
<p>And one more thing: the point of a system management tool is to have no dependencies. If you need to install or use anything other than the provisioner to set up a vanilla distro, you've been hustled. Bash is guaranteed to work everywhere and that's what most love about it - myself included. But Ansible works on my Raspberry Pis, on my FreeBSD storage servers and on my production Debians, Ubuntus and ArchLinuxes with no crutches or aids. I had bash scripts before, and was really close to open sourcing them, but then I realised there is a much better way ; )</p>
]]></description><pubDate>Fri, 14 Mar 2014 23:53:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=7402777</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=7402777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=7402777</guid></item><item><title><![CDATA[New comment by gerhardlazu in "Fucking Shell Scripts"]]></title><description><![CDATA[
<p>After 4 years of heavy Cheffing, I completely agree with most opinions expressed here. I'm pretty sure Puppet is just about the same, though I have 0 experience with it, so I won't claim otherwise.<p>I got frustrated enough with the Capistrano, Fabric, Puppi lot that I wrote a pure bash deployment tool with a fitting name: <a href="https://github.com/gerhard/deliver" rel="nofollow">https://github.com/gerhard/deliver</a>.<p>Ansible on the other hand is something else. There is some learning curve, agreed, but it's not as bad as awk or sed. And seriously, if you know your bash, you will know both awk and sed. I consider my shell scripting to be above average, and I've attempted a pure bash Docker orchestration tool <a href="https://github.com/cambridge-healthcare/dockerize" rel="nofollow">https://github.com/cambridge-healthcare/dockerize</a>, but Ansible just makes the same job easier. It's not everyone's cup of tea, but before you dismiss it, give it a real chance. I should know, because I initially dismissed it thinking that it's too complex, yet another Chef circus, yada yada, but trust me - it's worth it ; )</p>
]]></description><pubDate>Fri, 14 Mar 2014 23:37:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=7402740</link><dc:creator>gerhardlazu</dc:creator><comments>https://news.ycombinator.com/item?id=7402740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=7402740</guid></item></channel></rss>