<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jillesvangurp</title><link>https://news.ycombinator.com/user?id=jillesvangurp</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 09:55:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jillesvangurp" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jillesvangurp in "An AI Vibe Coding Horror Story"]]></title><description><![CDATA[
<p>I think the issue here is less about AI misbehaving and more about people doing things they should not be doing, without thinking very hard about the consequences.<p>There are going to be a lot of accidents like this, because it's just really easy to do. And some people are inevitably going to do silly things.<p>But it's not that different from people doing stupid things with Visual Basic back in the day. Or responding to friendly-worded emails with the subject "I love you". Or putting virus-infested CDs and USB drives into work PCs.<p>That's what people do when you give them useful tools with sharp edges.</p>
]]></description><pubDate>Tue, 14 Apr 2026 09:23:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47763222</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47763222</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47763222</guid></item><item><title><![CDATA[New comment by jillesvangurp in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>I'm wrong to not be fatalistic?! You lost me there.<p>A lot of people seem to be wasting a lot of energy insisting it is all going to end in tears because <fill in reasons>. All I'm doing here is pointing out that people like this come out of the woodwork with pretty much every big change in society, and then people adapt and society fails to collapse.<p>I'm not arguing there won't be changes, or that they won't be disruptive to some people. They will be, and people will need to adjust. But I am arguing that a lot of the dystopian outcomes are as unlikely to happen with this particular change as they have been with previous rounds of change. I just don't see a basis for them. I do see a lot of people who want this to be true, mainly because they are afraid of having to adapt.<p>> already incredibly unstable fragile world<p>There are a lot of people arguing that things are better than ever by most metrics you might want to apply. The reason you might feel stressed about the news is that dystopian headlines sell better and you are being influenced by them. That's also why Y2K got a lot more attention than it deserved in the media, and a lot of people indeed freaked out over it. Much of that got caught up with people who believed for other reasons that we were all doomed and that the apocalypse was coming. And it made for amusing headlines. And then the clock ticked over and society failed to collapse.</p>
]]></description><pubDate>Tue, 14 Apr 2026 06:37:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762039</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47762039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762039</guid></item><item><title><![CDATA[New comment by jillesvangurp in "DaVinci Resolve – Photo"]]></title><description><![CDATA[
<p>Darktable does a lot of things that are conceptually similar to what DaVinci Resolve is likely doing here.<p>One of the big things Darktable has been pushing for a few years is moving from the now-deprecated display-referred workflow to a scene-referred one. The key idea is that you keep the image in something closer to the original scene as captured by the camera for as long as possible, instead of rendering it early into an output-referred display space such as sRGB. With raw files that matters, because many editing operations behave very differently depending on where in the pipeline they happen.<p>That is a bit different from how tools like Adobe Lightroom tend to work. The main problem with display-referred workflows is not just reduced precision, but that you can end up clipping information and applying nonlinear transforms too early. Once that happens, later edits are working against damage that has effectively already been baked into the pipeline. Subtle tone-mapping tweaks can push colors out of gamut, for example. There are a lot of ways to deal with that, obviously, and Adobe does a nice job of balancing the tradeoffs. But they do remove a lot of choice and control from the process.<p>The UX tradeoff in Darktable is that module order matters a lot, and there are many modules that do similar things in different ways. You can adjust modules in any order you like, but the processing order itself is usually best left alone. That is a leaky abstraction: it is hard to explain why the order matters unless you already understand what the pipeline is doing. Darktable does now allow reordering, because there are sometimes valid reasons to do that. But it also means users can easily make things worse if they start changing the order without understanding the consequences.<p>For simple editing, though, Darktable is actually really nice these days. I have some auto-applied modules with rules for camera type and a few other things. Mostly it looks alright without me doing much. One of its strong points is rule-based application of particular edits based on camera or lens. With my Fuji, for example, a little exposure correction is needed because the camera intentionally underexposes to protect highlights.</p>
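The early-clipping point can be sketched with a toy example (the numbers are invented; real pipelines work per channel on linear RGB, but the ordering effect is the same):

```python
# Toy illustration of why operation order matters in a raw pipeline.
# Linear scene values; anything above 1.0 is highlight detail that the
# raw file still contains but a display cannot show directly.
scene = [0.5, 1.5, 3.0]

# Display-referred: clip to display range first, then pull exposure down.
# The two bright pixels collapse to the same value -- detail is gone.
display_referred = [min(v, 1.0) * 0.5 for v in scene]   # [0.25, 0.5, 0.5]

# Scene-referred: pull exposure down first, clip only at output time.
# The highlight detail survives the edit.
scene_referred = [min(v * 0.5, 1.0) for v in scene]     # [0.25, 0.75, 1.0]
```

Same two operations, different order, and only one order preserves the distinction between the two highlight pixels. That is the damage that gets "baked in" when nonlinear steps happen too early.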
]]></description><pubDate>Tue, 14 Apr 2026 04:40:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47761321</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47761321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47761321</guid></item><item><title><![CDATA[New comment by jillesvangurp in "GitHub Stacked PRs"]]></title><description><![CDATA[
<p>It's easier to pile on a lot of changes with AI-assisted workflows. And reviewing all that is definitely a challenge, just because of the volume of changes. I've actually stopped pretending I can review everything in detail, because it makes me a bottleneck in the process. Anything that makes reviewing easier is welcome.<p>To me, stacked PRs seem overly complicated. The idea seems to boil down to propagating git rebases through stacks of interdependent branches.<p>I'm fine with that, as long as I don't have to deal with people force pushing changes and routinely rewriting upstream history. That's something you should probably only do in a private fork of a repository that you aren't sharing with anyone. Or, if you are sharing it, you need to communicate clearly. But if the goal is to produce a stack of PRs that in the end merge cleanly, stacked PRs might be a good thing.<p>As soon as you have multiple collaborators working on a feature branch, force pushing can become a problem and you need to impose some rules, because otherwise you might break people's local branches and create work for them. The core issue is that in many teams, people don't actually fork the main repository; they have push access to it instead. Which emulates the central-repository model people were used to twenty years ago. Having push access is not normal in most OSS projects. I've actually gotten requests from rookie developers who apparently don't get forking to "please give me access to your repository" on some of my OSS projects.<p>A proper pull request (whether stacked or not) to an OSS project needs to be clean. If you want to work on some feature for weeks, you of course need mechanisms to stay on top of upstream changes. OSS maintainers will probably reject anything that looks overly messy to merge. That's their job.</p>
]]></description><pubDate>Tue, 14 Apr 2026 04:07:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47761122</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47761122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47761122</guid></item><item><title><![CDATA[New comment by jillesvangurp in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>I try not to be fatalistic. As I was trying to argue, it's historically inaccurate and it doesn't actually change the outcome. Clinging to the past has never really worked that well.<p>As for rich people, they get richer and richer until people correct them. Sometimes violently. The current concentration of wealth, particularly in the US, seems more related to political changes since about the Reagan era than to any recent innovations in technology.</p>
]]></description><pubDate>Mon, 13 Apr 2026 17:45:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47755501</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47755501</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47755501</guid></item><item><title><![CDATA[New comment by jillesvangurp in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>Change is a constant in history. Stuff happens, and then we adjust. Big changes may result in short-term confusion, anger, etc. All the classic signs of the five stages of grief, basically.<p>If you step back a little, a lot of people simply don't see the forest for the trees; they start imagining bad outcomes and then panic over those. Understandable, but not that productive.<p>If you look at past changes where that was the case, you can see some patterns. People project both utopian and dystopian views, and there's a certain amount of hysteria and hype around both. But neither usually plays out as people hope or predict. The inability to look beyond the status quo, and the tendency to define the future in terms of it, is very common. It's the whole cars vs. faster horses thing. I call this an imagination deficit. It usually sorts itself out over time as people find different ways to adjust and the rest of society adjusts itself around that. Usually this also involves stuff few people predicted. But until that happens, there's uncertainty, chaos, and also opportunity.<p>With AI, there's going to be a need for some adjustment. Whether people like it or not, a lot of what we do will likely end up being quite easy to automate. And that raises the question of what we'll do instead.<p>Of course, the flip side of automating stuff is that it lowers the value of that stuff. That actually moderates the rollout and creates diminishing returns. We'll automate all the easy and expensive stuff first, and that will keep us busy for a while. Ultimately we'll pay less for this stuff and do more of it. But that just means we start looking for more valuable stuff to do and buy. We'll effectively move the goalposts and raise our ambitions. That's where the economic growth will come from.<p>This adjustment process is obviously going to be painful for some people. But the good news is that it won't happen overnight. We'll have time to learn new things and figure out what we can do that is actually valuable to others. Most things don't happen at the speed the most optimistic person wants them to. Just looking at inference cost and energy, there are some real constraints on what we can do at scale in the short term. And energy costs just went up by quite a lot. There are lots of new challenges where AI isn't the easy answer just yet.</p>
]]></description><pubDate>Mon, 13 Apr 2026 14:53:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47752895</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47752895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47752895</guid></item><item><title><![CDATA[New comment by jillesvangurp in "Android now stops you sharing your location in photos"]]></title><description><![CDATA[
<p>Pretty good. I played a bit with GPT-4 a year or so ago by feeding it random screenshots from Google Street View. It would pick up a lot of subtle hints from what otherwise looked like generic streets. I imagine more recent models are even better at this now.</p>
]]></description><pubDate>Mon, 13 Apr 2026 13:27:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47751667</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47751667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47751667</guid></item><item><title><![CDATA[New comment by jillesvangurp in "The economics of software teams: Why most engineering orgs are flying blind"]]></title><description><![CDATA[
<p>If you want to understand the economics, I recommend watching some of Don Reinertsen's videos on Lean 2.0. He goes deeply into a few concepts that turn out to be quite intuitive.<p>Cost of delay: calculating the cost of delaying by a few weeks in terms of lost revenue (you aren't shipping whatever it is you are building), the total lifetime value of the product (your feature won't be delivering value forever), and the extra cost in staffing. You can put a number on it. It doesn't have to be a very accurate number. But it will give you a handle on being mindful that you are delaying the moment where revenue is made, and taking on team cost at the expense of other stuff on your backlog.<p>Option value: treating some feature you add to your software as having a non-linear payoff. It costs you n when it doesn't work out and might deliver 10*n in value if it does. Lean 1.0 would have you stay focused and toss out the option for that potential 10x payoff. But if you do a bit of math, there is probably a lot of low-hanging fruit that you might want to think about picking, because it has a low cost and a potentially high payoff. In the same way, variability is a good thing, because it gives you the option to do something with it later. A little bit of overengineering can buy you a lot of option value, whereas having tunnel vision and only doing what was asked might opt you out of all that extra value.<p>A bad estimate is better than no estimate: even if you are off by 3x, at least you'll have a number and you can learn and adapt over time. Getting wildly varying estimates from different people means people have very different ideas about what is being estimated. Estimate in units of time, because that allows you to put a dollar value on that time and do some cost calculations. How many product owners do you know who actually do that, or even know how to do that?<p>Don't run teams at 100% capacity: work piles up in queues and causes delays when teams are pushed hard. The more work you pile on, the worse it gets. Worse, teams start cutting corners and taking on technical debt in order to clear the queue faster. Any manufacturing plant manager knows not to plan for more than 90% capacity. It doesn't work; you just end up with a lot of unfinished work blocking other work. Most software managers will happily go to 110%. That causes more issues than it solves. Whenever you hear some manager talking about crunch time, they've messed up their planning.<p>Stretching a team like that will just cause cycle times to increase. Also, see cost of delay: queues aren't actually free. If you have a lot of work in progress with interdependencies, any issue will derail your plans and cause costly delays. It's actually very risky if you think about it like that. If you've ever been on a team that seemingly doesn't get anything done anymore, this might be what is going on.<p>I like this back-of-the-envelope math; it's hard to argue with.<p>I used to be a salaried software engineer in a big multinational. None of us had any notion of cost. We were doing stuff that we were paid to do. It probably cost millions. Most decisions did not have $ values attached. I've since been in a few startups. One where we got funded and subsequently ran out of money without ever bringing in meaningful revenue. And another one that I helped bootstrap, where I'm getting paid (a little) out of revenue we make. There's a very direct connection between stuff I do and money coming in.</p>
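The cost-of-delay idea lends itself to exactly this kind of back-of-the-envelope math. A minimal sketch, with all numbers invented for illustration:

```python
# Back-of-the-envelope cost of delay. All inputs are rough estimates;
# the point is to have *a* number, not an accurate one.

def cost_of_delay(weekly_revenue, weekly_team_cost, weeks_late, product_life_weeks):
    """Rough dollar cost of shipping `weeks_late` weeks late."""
    # Revenue you never earn: the product's useful life is bounded, so
    # weeks lost at the start are weeks of revenue lost at the end.
    lost_revenue = weekly_revenue * min(weeks_late, product_life_weeks)
    # Team cost you keep paying while not shipping.
    carrying_cost = weekly_team_cost * weeks_late
    return lost_revenue + carrying_cost

# Hypothetical feature: $10k/week once live, two-year useful life,
# a $15k/week team, shipped four weeks late.
print(cost_of_delay(10_000, 15_000, 4, 104))  # 100000
```

Even a number that is off by 3x makes the tradeoff visible: four weeks of slip on this hypothetical feature costs six figures, which reframes discussions about whether that extra polish is worth it.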
]]></description><pubDate>Mon, 13 Apr 2026 07:57:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47749087</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47749087</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749087</guid></item><item><title><![CDATA[New comment by jillesvangurp in "One neat trick to end extreme poverty"]]></title><description><![CDATA[
<p>It used to be that agriculture made up most of our economies. These days it's a relatively small part of the economy. The good news is that the burden of making sure the entire planet is well fed is not that high anymore. In the West, the food we throw away could easily feed the rest of the planet.<p>A couple of trillionaires could probably fund most of it. People like Bill Gates have actually done quite a bit on this front, which, whatever else you think of the man, is quite admirable.<p>However, just giving people money or food isn't a long-term solution. Empowering people to earn a living and grow or buy food is a much more structural fix. Mostly it boils down to ending local conflicts and wars. In the eighties, Live Aid was a big campaign against hunger in places like Ethiopia. It's doing somewhat better these days, though it still has a lot of conflict. But it also has economic growth, and that has slowly been pulling people out of poverty there.<p>Another fix is restoring land. There's a big project to stop the Sahara from expanding further south that involves a green wall. It's a successful project where people dig simple U-shaped trenches to capture rainwater. Instead of running off with the topsoil, the water now stays and turns the land back into usable farmland. This project has been running for a few years. The local population seems to get it and is now enthusiastically implementing it all over the place. They can grow food, sell it on local markets, and graze their livestock. The Sahel is also a region where poverty has fueled a lot of conflict over land and resources. So it's a double success, in the sense that it takes away some of the root causes of that kind of conflict.</p>
]]></description><pubDate>Sun, 12 Apr 2026 06:51:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47736799</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47736799</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47736799</guid></item><item><title><![CDATA[New comment by jillesvangurp in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>> Does a digitally encoded version resemble a copyrighted work in some shape or form? </snark><p>Well, that's different, because an encoded image or video clearly intends to reproduce the original perfectly, and the end result after decoding is (intentionally) very close to the form of the original. Which makes it a clear-cut case of being a copy of the original.<p>The reason so many cases don't get very far is that judges and lawyers mostly don't think like engineers. Copyright law predates most modern technology, so everything needs to be rephrased in terms of people copying stuff for commercial gain. The original target of the law was people using printing presses to create copies of books written by others, which was hugely annoying to publishers who thought they had exclusive deals with authors. But what about academics quoting each other? Or literary reviews? Or summaries? Or people reading from a book on the radio? This stuff gets complicated quickly. Most of those things were settled a long time ago. Fair use is a concept that gets wielded a lot here: yes, it's a copy, but it's entirely reasonable for the person making the copy to be doing what they are doing, and therefore it's not considered an infringement.<p>The rest is just centuries of legal interpretation of that and how it applies to modern technology, whether that's DJs sampling music or artists working visual imagery into their artworks. AI is mostly just more of the same here. Yes, there are some legally interesting aspects to AI, but not that many new ones. Judges are unlikely to rethink centuries of legal interpretation here and are more likely to try to reconcile AI with existing decisions. Any changes to the law would have to be driven by politicians; judges tend to be conservative with their interpretations.</p>
]]></description><pubDate>Sat, 11 Apr 2026 11:46:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47729756</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47729756</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47729756</guid></item><item><title><![CDATA[New comment by jillesvangurp in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>AIs are not human, therefore their output is not a human-authored contribution, and only human-authored things are covered by copyright. The output might hypothetically infringe on other people's copyright. But such an infringement does not happen until a human decides to create and distribute a work that somehow integrates that generated code or text.<p>The solution documented here seems very pragmatic. You as a contributor simply state that you are making the contribution and that you are not infringing on other people's work with that contribution under the GPLv2. And you document the fact that you used AI, for transparency reasons.<p>There is a lot of legal murkiness around how training data is handled, around the output of the models, and even around the models themselves. Is distributing something that in no way, shape, or form resembles a copyrighted work (i.e. a model) actually distributing that work? The legal arguments here will probably take a long time to settle, but it seems the fair use concept offers a way out. You might create potentially infringing work with a model, and that work may or may not be covered by fair use. But that would be your decision.<p>For small contributions to the Linux kernel, it would be hard to argue that a passing resemblance between, say, a for loop in the contribution and some for loop in somebody else's code base is anything other than coincidence or fair use.</p>
]]></description><pubDate>Fri, 10 Apr 2026 21:28:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47723875</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47723875</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47723875</guid></item><item><title><![CDATA[New comment by jillesvangurp in "We've raised $17M to build what comes after Git"]]></title><description><![CDATA[
<p>The article is about a $17M funding round for GitButler. Which I assume has some revenue plan that you might classify as SaaS. Correct me if I'm wrong.</p>
]]></description><pubDate>Fri, 10 Apr 2026 07:33:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714802</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47714802</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714802</guid></item><item><title><![CDATA[New comment by jillesvangurp in "Moving from WordPress to Jekyll (and static site generators in general)"]]></title><description><![CDATA[
<p>Probably; if it has an API, you can likely get coding tools to use it. But the point of using agentic coding tools is that they are really good at working with code and files. That tends to be a lot faster and easier, and you can build tests and browser tests around it as well.<p>In my view, a CMS is intended for people doing stuff. If you transition that stuff to AI agents, why keep the CMS around? And if AI does all or most of the coding, it's not such a big leap for non-technical people to get their hands dirty anymore.</p>
]]></description><pubDate>Fri, 10 Apr 2026 05:31:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714022</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47714022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714022</guid></item><item><title><![CDATA[New comment by jillesvangurp in "Moving from WordPress to Jekyll (and static site generators in general)"]]></title><description><![CDATA[
<p>Yes, we just launched FORMATION XYZ a few days ago. Thanks! It's fully running on Hugo, and I used the search library as well because it seems to work really well.<p>Interesting facts:<p>- the domain was registered a few weeks ago<p>- most of the heavy lifting was done by our non-technical CEO by prompting Codex; not by me.<p>- he got a bit carried away with some features and was able to pull off a little robot (powered by vector search and embeddings), audio transcriptions, and a few other nice features. He has a product/UX background and a good eye for detail, but no coding skills whatsoever.<p>- we use a lot of skills and guard rails to guide content generation, SEO optimization, etc. Our SEO agent does competitive analysis on a schedule, figures out optimal SEO phrases, and maintains a list of approved SEO language.<p>- our content generation skills guard against typical AI slop smells, weave in SEO language where possible, and use a sub-agent to act as a harsh critic of content. AI slop only happens if you let it happen. You can see on the querylight documentation site that I have a bit more of that there; I need to improve the skills for that one.<p>If you want to get your feet wet with this, I would just recommend doing it and starting with simple changes. Use Claude Code, Codex, or whatever you prefer.<p>One of the first successes I had with another website was "add the logo for company x to the logo wall". It went off and figured out the website, got the logo SVG, figured out where to put it, and hooked it up in the right place. For me that was an "oh, it can do that now" kind of moment. A lot of content changes are like that.</p>
]]></description><pubDate>Fri, 10 Apr 2026 05:27:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47713995</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47713995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47713995</guid></item><item><title><![CDATA[New comment by jillesvangurp in "We've raised $17M to build what comes after Git"]]></title><description><![CDATA[
<p>Why are investors still investing in SaaS products like this? I've heard some investors make rather blunt statements about such investments being a very hard sell to them at this point. Clearly somebody believes differently here.<p>We have AI now, and AI tools are pretty handy with Git. I've not manually resolved a git conflict in months. That's more or less a solved problem for me. Mostly Codex creates and manages pull requests for me. I also have it manage my GitHub issues on some projects. For some things, I also let it do release management, with elaborate checklists, release prep, and driving automation for package deployment via GitHub Actions triggered by tags, and then creating the GitHub release and attaching binaries. In short, I just give a thumbs up and all the right things happen.<p>To be blunt, I think a SaaS service that tries to make Git nicer to use is going to be a bit redundant. I don't think AI tools really need that help. Or a Git replacement. And people will mostly be delegating whatever it is they still do manually with Git pretty soon. I've made that switch already because I'm an early adopter. And because I'm lazy, and it seems AI is more disciplined at following good practices and process than I am.</p>
]]></description><pubDate>Fri, 10 Apr 2026 04:09:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47713523</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47713523</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47713523</guid></item><item><title><![CDATA[New comment by jillesvangurp in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>I've started thinking of these systems as legacy systems. We have them. They are important and there's a lot of data in them. But they aren't optimal any more.<p>How we access them and where data lives is essentially an optimization problem, and AI changes what is optimal. Having data live in some walled garden with APIs designed to keep people out (most SaaS systems) is arguably suboptimal at this point. Sorting out these plumbing issues is actually a big obstacle for people doing productive things with these systems via agentic tools.<p>A good way to deal with this is to apply some systems thinking and figure out whether you still need these systems at all. I've started replacing a lot of these things with simple, coder-friendly solutions. Not because I'm going to code against these things, but because AI tools are very good at doing that on my behalf. If you are going to access data, it's nicer if that data is stored locally in a way that makes it easy to access. MCP for some SaaS thing is nice. A locally running SQL database with the data is nicer, and a lot faster to access. Processing data close to where it is stored is optimal.<p>As for MCP: I don't think it's that important. Most agentic coding tools switch effortlessly between protocols and languages. In the end, MCP is just another RPC protocol, and not a particularly good or optimal one. If you already had an API or CLI, it's a bit redundant to add MCP. Auth is indeed a key challenge, and largely not solved yet. I don't think MCP adds a whole lot of new elements there.</p>
]]></description><pubDate>Fri, 10 Apr 2026 03:51:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47713408</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47713408</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47713408</guid></item><item><title><![CDATA[New comment by jillesvangurp in "Many African families spend fortunes burying their dead"]]></title><description><![CDATA[
<p>Social pressure to do stuff like this is enormous in some families. It's not necessarily about the subjects (the deceased person or the newlywed couple) but about re-affirming the status of their relatives. Arguably the dead don't really care. But their nearest relatives definitely do care how they are perceived to be dealing with the death of the deceased. It underlines their importance and status. People come to "pay their respects". There's a whole etiquette around that.<p>In many cultures it used to be (or still is) quite common to treat brides as property. It's more like a financial transaction than a romantic thing. The groom's family "buys" the wife for the husband. Money sometimes changes hands. An elaborate party seals the deal. A lot of royal houses actually have a rich and colorful history of arranged marriages. And of inbreeding, because they jealously guarded their power by marrying cousins and managing how wealth and power were distributed via inheritance.<p>Of course, grief and empathy with the mourning relatives are also very real and genuine, and are mixed through all this. Same with happiness for a newlywed couple.<p>And some of that empathy translates into people making sure they are there for the mourning family. So they travel from far away. And if everybody is coming, you need to make sure you don't forget to invite everybody else. People will want to be there. That creates a need for a social gathering, which in turn results in it becoming a big event. Which then creates an obligation to make sure that all these people are welcomed properly. They need to be fed, entertained, etc. Or it would reflect badly on the family.<p>In short, it's all very explainable. But it's also a bit irrational to put yourself in debt because you are getting married or because somebody you care about passed away. Some people flip this around by not wanting to impose on others with either their marriage or their death. I'm not married and I don't believe in an afterlife. I've told my relatives to do whatever pleases them and works for them with my remains when the time comes, but that I otherwise don't really care.</p>
]]></description><pubDate>Fri, 10 Apr 2026 03:35:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47713294</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47713294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47713294</guid></item><item><title><![CDATA[New comment by jillesvangurp in "Moving from WordPress to Jekyll (and static site generators in general)"]]></title><description><![CDATA[
<p>I've been using AI coding tools with static site generators over the last few months. This is hugely empowering and makes most CMS systems obsolete, especially for more complex publication workflows.<p>I'm using Hugo, not Jekyll. But I don't think it matters which site generator you pick. The key point is using something that is code driven, and then having AI drive the code changes. Basically all routine site maintenance and updates are now handled via agentic coding.<p>We use guard rails and skills to impose structure and process. This includes tone checks (and fixes), keeping audio transcriptions in sync with articles, ensuring everything is tagged correctly, handling translations against an approved list of key-phrase translations, SEO checks, and much more. I've been dialing this in step by step; you can start without most of it. But a lot of manual work melts away once you get a bit structured about this. Like the article, we also use vector search embeddings. Our search actually uses the same model and runs it in the browser via WebGPU. I also use it for related articles.<p>We've also been experimenting with reveal.js for presentations. Same principle. Forget things like Keynote, Canva, etc. Reveal.js is meant for programmers, but if you replace those with agents, non-technical people can prompt together some really amazing decks. Replacing applications and UIs with code-driven systems removes the need for those applications and UIs. And using AI to drive those code-based systems removes the need for developers in the loop. Our non-programmer CEO, who was a heavy Canva user, is now doing decks and huge website updates this way. Pretty scary actually. I don't think he'll use Canva again. I'm barely involved beyond setting up some basic plumbing. One party trick he likes is adapting decks to customers by integrating their house colors and visuals. It only takes pointing the AI at their website.<p><a href="https://querylight.tryformation.com/" rel="nofollow">https://querylight.tryformation.com/</a> is a Hugo demo site for the search capabilities. It hosts the documentation for the vector (and lexical) search library I use on our websites. The entire documentation site is managed as I describe above.</p>
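To make the "guard rails" idea concrete: a tagging check like the one mentioned above could be a small script the agent runs against the content tree before publishing. This is a minimal sketch only; the directory layout, the YAML front-matter format, and the function name are assumptions for illustration, not the actual setup described in the comment.

```python
# Minimal sketch of a guard-rail check for a Hugo-style content tree:
# flag every markdown page whose YAML front matter has no "tags" entry.
# Paths and front-matter conventions are hypothetical.
import pathlib
import re

# Match a leading YAML front-matter block delimited by "---" lines.
FRONT_MATTER = re.compile(r"\A---\n(.*?)\n---", re.DOTALL)

def missing_tags(content_dir: str) -> list[str]:
    """Return paths of markdown files lacking front matter or a tags entry."""
    bad = []
    for path in pathlib.Path(content_dir).rglob("*.md"):
        match = FRONT_MATTER.match(path.read_text(encoding="utf-8"))
        # Flag files with no front matter, or front matter without "tags:".
        if match is None or "tags:" not in match.group(1):
            bad.append(str(path))
    return sorted(bad)
```

A check like this can run in CI, or be exposed as a tool the agent calls, so untagged pages are caught and fixed before a site update goes out.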
]]></description><pubDate>Fri, 10 Apr 2026 02:28:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47712893</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47712893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47712893</guid></item><item><title><![CDATA[New comment by jillesvangurp in "Robots eat cars"]]></title><description><![CDATA[
<p>Safety is a great reason not to do something. Utility and enhanced safety are great reasons to override that reflex. A lot has happened in the medical world since the Therac-25 incidents: AI, machine learning, robotic neurosurgery, all sorts of computer-aided diagnostics, etc. This stuff undeniably saves lives. The incidents did inspire some level of scrutiny, of course. But compared to modern medical equipment, that machine is from the stone age.<p>Fly-by-wire (the aviation counterpart of the steer-by-wire the article highlights) has been standard on modern Airbus planes for decades. The first ones flew shortly after the Therac incidents. Boeing has also started implementing it on newer models. And most of the VTOL planes/drones currently starting to operate and progress through certification programs commonly use fly-by-wire as well. Several of these flew without pilots before their first crewed test flights. They are computer-controlled and pilot-directed pretty much by default, with the pilot being optional by design.<p>Beyond Tesla, several other manufacturers are now implementing steer-by-wire in the car industry. NIO, Lexus, Toyota, Mercedes, and a few others each either already have such cars on the road or are working on them. And while Tesla has received quite a bit of criticism for its FSD system, I don't think there have been many incidents implicating the steer-by-wire in the Cybertruck. It seems to work, and drivers seem to mostly like it once they get used to it. The car is controversial, of course. But there's a lot of cool tech inside that is now being copied across the industry.<p>The implied warning "we should be careful with this stuff because Therac-25" is a bit of a cliche at this point. Yes, we need lots of checks and balances when using automation in safety-critical systems. And that has been common practice for decades.</p>
]]></description><pubDate>Fri, 10 Apr 2026 02:13:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47712800</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47712800</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47712800</guid></item><item><title><![CDATA[New comment by jillesvangurp in "I imported the full Linux kernel git history into pgit"]]></title><description><![CDATA[
<p>I would recommend using guard rails to guide tone, phrasing, etc. That helps prevent whole categories of bad phrasing. It also helps to provide good inputs for what you actually want to write about rather than relying on the model to fill empty space with word soup. And iterate on both the guard rails and the text.</p>
]]></description><pubDate>Thu, 09 Apr 2026 12:37:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702912</link><dc:creator>jillesvangurp</dc:creator><comments>https://news.ycombinator.com/item?id=47702912</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702912</guid></item></channel></rss>