<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sebastos</title><link>https://news.ycombinator.com/user?id=sebastos</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 07:06:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sebastos" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sebastos in "Show HN: I built a Cargo-like build tool for C/C++"]]></title><description><![CDATA[
<p>The tough truth is that there already is a cargo for C/C++: Conan2. I know, python, ick. I know, conanfile.py, ick. But despite its warts, Conan fundamentally CAN handle every part of the general problem. Nobody else can. Profiles to manage host vs. target configuration? Check. Sufficiently detailed modeling of ABI to allow pre-compiled binary caching, local and remote? Check, check, check. Offline vs. Online work modes? Check. Building any relevant project via any relevant build system, including Meson, without changes to the project itself? Check. Support for pulling build-side requirements? Check. Version ranges? Check. Lockfiles? Check. Closed-source, binary-only dependencies? Check.<p>Once you appreciate the vastness of the problem, you will see that having a vibrant ecosystem of different competing package managers sucks. This is a problem where ONE standard that can handle every situation is incalculably better than many different solutions which solve only slices of the problem. I don't care how terse craft's toml file is - if it can't cross compile, it's useless to me. So my project can never use your tool, which implies other projects will have the same problem, which implies you're not the one package manager / build system, which means you're part of the problem, not the solution. The Right Thing is to adopt one universal standard for all projects. If you're remotely interested in working on package managers, the best way to help the human race is to fix all of the outstanding problems with Conan that prevent it from being the One Thing. It's the closest to being the One Thing, and yet there are still many hanging chads:<p>- its terribly written documentation<p>- its incomplete support for editable packages<p>- its only nascent support for "workspaces"<p>- its lack of NVIDIA recipes<p>If you really can't stand to work on Conan (I wouldn't blame you), another effort that could help is the Common Package Specification (CPS). 
Making that a thing would also be a huge improvement. In fact, if it succeeds, then you'd be free to compete with conan's "frontend" ergonomics without having to compete with the ecosystem.</p>
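To make the feature checklist above concrete, here is a minimal sketch of a Conan 2 recipe. The package names and version ranges (zlib, cmake) are illustrative placeholders, not a recommendation:

```python
# Illustrative Conan 2 recipe sketch; names and versions are placeholders.
from conan import ConanFile

class MyAppConan(ConanFile):
    name = "myapp"
    version = "0.1"
    # ABI-relevant settings: a change to any of these yields a
    # different pre-compiled binary package ID.
    settings = "os", "arch", "compiler", "build_type"

    def requirements(self):
        # Host-side dependency, declared with a version range
        self.requires("zlib/[>=1.2.11 <2]")

    def build_requirements(self):
        # Build-side tool, resolved against the build profile, so
        # cross compiling (host profile != build profile) still works
        self.tool_requires("cmake/[>=3.25]")
```

Resolved with two profiles (e.g. `conan install . -pr:h=armv8-linux -pr:b=default`, profile names hypothetical), zlib is built for the target machine while cmake runs on the build machine.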
]]></description><pubDate>Thu, 09 Apr 2026 22:26:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711072</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47711072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711072</guid></item><item><title><![CDATA[New comment by sebastos in "Eniac, the First General-Purpose Digital Computer, Turns 80"]]></title><description><![CDATA[
<p>The ENIAC team’s decision to spin off and incorporate was surely pushed along by how they got screwed multiple times by the academics - Goldstine and Von Neumann, plus the university itself. It’s easier to celebrate the free publishing of ideas if your name is at least going to be on the paper.<p>It seems like you’re not trying particularly hard to avoid making “monetizing” the computer sound pejorative. It was the creation of the computer _industry_ that transformed the world and established the import of computers, was it not? You were never getting there without monetization. What good is the spread of ideas if someone, somewhere doesn’t eventually start selling computers? This grates especially hard given that the academics were the ones who acted unscrupulously by lifting their ideas and publicizing them without permission or credit. (“Leaked” is a charitable way to say “deliberately disseminated without caution”).<p>I agree the stored program is important, but the stored program is of ENIAC vintage, even if it wasn’t implemented on it. And Eckert and Mauchly definitely came to this idea before the involvement of Von Neumann. The thing is, they had an obligation to finish the machine they had promised to build for the army before pursuing such a major redesign. So all they COULD do was informally collect their ideas for a 2.0. Von Neumann arrives, absorbs what they’re up to, synthesizes it (including ‘the big idea’ that ENIAC was missing), and the rest is history. That synthesis is published without their names, and that is why we talk about the Von Neumann architecture. Look, I’m sure it’s true that the crispness of that paper can be attributed to Von Neumann, but it’s a non-sequitur to assume that Eckert and Mauchly’s ideas were jumbled. 
They were at least organized enough to be building a working machine in the background, and if we’re going to argue that the important thing was promulgation of enough information for others to replicate, then the practicum is more important than mathematical tidiness.<p>In fact, if we’re talking about how the ideas spread, the paper is frankly overblown. The Moore School lectures were really what caused the Cambrian explosion of electronic computing. There, you can find Eckert and Mauchly utterly central to the elucidation of how to build a general purpose electronic computer. And hey look, there they are, deliberately sharing the ideas out to interested practitioners, in a more pragmatic and direct way than the paper.<p>What I’m building to here is that E&M starting a company was not evidence that they were just out to make a buck. On the contrary, what it shows is that they had _foresight_ about what the next interesting chapter was bound to be. With the Moore School lectures, the ‘publishing of the ideas’ stage was over - the next step was to begin building more machines that could do more computation for more users. And while there was plenty that happened afterward to refine the theoretical model, they were absolutely correct that that’s where the action was. In fact, I think that if you look at what many of these proposed fathers of computing did next, it’s an excellent litmus test of how central they actually were. Some of the sillier ones like Atanasoff just forget about their supposed invention and go on with life - that’s a tell that they weren’t that interested in general-purpose, high speed computing. Whereas E&M’s follow-on work was to advance the field even in the face of great setbacks. This also completely deconstructs the idea that they were just thinking about artillery, or just thinking about weather. They were thinking about _computing_, and their careers afterwards demonstrate this.</p>
]]></description><pubDate>Sat, 21 Mar 2026 09:29:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47465508</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47465508</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47465508</guid></item><item><title><![CDATA[New comment by sebastos in "Eniac, the First General-Purpose Digital Computer, Turns 80"]]></title><description><![CDATA[
<p>Wait, where are you thinking the von Neumann paper came from?</p>
]]></description><pubDate>Thu, 19 Mar 2026 11:40:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47437710</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47437710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47437710</guid></item><item><title><![CDATA[New comment by sebastos in "PeppyOS: A simpler alternative to ROS 2 (now with containers support)"]]></title><description><![CDATA[
<p>I was sad to see you guys shut down - I think you were on to something with deterministic faster-than-realtime replay. Not surprised it was hard to find paying customers, but for what it's worth, my engineering self thought that you guys were solving the right problem. As far as I can tell, it's still not solved, and the shocking truth is that everyone is just Living That Way.<p>The other thing that is important is how to provide a more query-like interface to tease out the data you actually want your node to react to, yet in a way that will be deterministic. You need to guide users away from introducing non-determinism, which can be tricky because innocent things like a message buffer with a max size can lead to such situations.<p>I have talked with one of the key people at Xronos (<a href="https://www.xronos.com/" rel="nofollow">https://www.xronos.com/</a>), who are trying to attack related problems. Still, even they aren't quite as preoccupied with _replay_, which is crucial.<p>I think the sad truth is that the second evolution of all this frameworking simply hasn't come together convincingly enough, and in one place, for it to gather momentum. It turned out to be hard. And now that it has taken too long, it's my bet that ROS2 and all of its imitators will get lapped by holistic deep approaches. Not the stupid stuff happening with these fake humanoid robot companies mind you, but still - something holistic and deep. Something coming out of predictive coding research, for example, or world models. Training in simulated environments with generative systems is going to lead to behavior so much more sophisticated than gluing together all of our little services. Roboticists have their own version of the bitter lesson coming soon.</p>
]]></description><pubDate>Wed, 11 Mar 2026 23:21:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47343866</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47343866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47343866</guid></item><item><title><![CDATA[New comment by sebastos in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>The real answer is that once LLMs passed a "casual" application of the Turing test, it just made us realize that the "casual Turing test" is not particularly interesting. It turns out to be too easy to ape human behavior over short time frames for it to be a good indicator of human-like intelligence.<p>Now, you could argue that this right here is the aforementioned moving of the goalposts. After all, we're deciding that the casual Turing test wasn't interesting precisely after having seen that LLMs could pass it.<p>However, in my view, the Turing test _always_ implied the "rigorous" Turing test, and it's only now that we're actually flirting with passing it that it had to be clarified what counts as a true Turing test. As I see it, the Turing test can still be salvaged as a criterion for general intelligence, but only if you allow it to be a no-holds-barred, life-depends-on-it test to exhaustion. This would involve allowing arbitrarily long questioning periods, for instance. I think this is more in the spirit of the original formulation, because the whole idea is to pit a machine against all of human intelligence, proving it has a similar arsenal of adaptability at its disposal. If it only has to passingly fool a human for brief periods, well... I'm afraid that just doesn't prove much. All sorts of stuff briefly fools humans. What requires intelligence is to consistently anticipate and adapt to all lines of questioning in a sustained manner until the human runs out of ideas for how to differentiate.</p>
]]></description><pubDate>Mon, 09 Mar 2026 05:51:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47305288</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47305288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47305288</guid></item><item><title><![CDATA[New comment by sebastos in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>Firstly, the models that pass the Math Olympiad aren’t the same models as the ones you’re saying “pass the Turing test”. Secondly, nothing actually passes the Turing test. They pass a vibes check of “hey that’s pretty good!” but if your life depended on it, you could easily find ways to sniff out an LLM agent. Thirdly, none of these models learn in real time, which is an obviously essential feature.<p>We’ll know AGI when we see it, and this ain’t it. This complaining about changing goalposts is so transparently sour grapes from people over-invested in hyping the current LLM paradigm.</p>
]]></description><pubDate>Sun, 08 Mar 2026 19:37:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300460</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47300460</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300460</guid></item><item><title><![CDATA[New comment by sebastos in "Sub-$200 Lidar could reshuffle auto sensor economics"]]></title><description><![CDATA[
<p>Before you can learn how far away an object is, you must decide: which laser return corresponds to which object? In fact, what counts as an object? Where does a tree stop and become a fallen tree branch? Is that object moving towards me? Does the apparent velocity of this point represent the fact that the object is moving, or that it's rotating, or that it's flexing, or dividing, or all four? Is that object moving towards me but that's ok because it's a car that's going to stay in its lane? What's a lane? What's my laser return for where the lane is? Should I stop at this intersection? What's my laser return for whether the light is red? Am I in the blind spot of the car in front of me? Is he about to shift into my lane because he doesn't see me? What laser return do I get to tell me whether his indicator is on?<p>The problem of understanding what is happening in front of you while driving is preposterously more complicated than just a point cloud of distances. That is 0.01% of the problem. To solve the remaining 99.99%, you need interpretation of photons and sound waves into a semantic understanding that gives you predictive power to guess how the physical world will evolve and avoid breaking the rules of the road. Show me a mechanized way of understanding the causes of how the physical structure of the world is about to evolve, and I'll show you something that is imitating a human brain, however poorly. The cameras give you _plenty_ of data to determine 3D structure, at a higher resolution than the laser, without being emissive, for cheaper. It's a completely reasonable approach to focus your limited computational hardware on interpreting the data you have instead of adding more modalities with their own limitations that (according to nature) are demonstrably unnecessary.<p>The world is more complicated than slogans and pitchforks and Elon Bad.</p>
]]></description><pubDate>Tue, 24 Feb 2026 20:27:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47142456</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47142456</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47142456</guid></item><item><title><![CDATA[New comment by sebastos in "Sub-$200 Lidar could reshuffle auto sensor economics"]]></title><description><![CDATA[
<p>Then you deeply underestimate how difficult the problem is, and deeply misunderstand where all the effort has been spent in developing autonomous vehicles.</p>
]]></description><pubDate>Mon, 23 Feb 2026 16:07:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47124224</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47124224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47124224</guid></item><item><title><![CDATA[New comment by sebastos in "A16z partner says that the theory that we’ll vibe code everything is wrong"]]></title><description><![CDATA[
<p>Insightful points!<p>It would be interesting if, with all the anxiety about vibe coding becoming the new normal, its only lasting effect is the emergence of smaller B2B companies that quickly razzle dazzle together a bespoke replacement for Concur, SAP, Workday, the crappy company sharepoint - whatever. Reminds me of what people say Palantir is doing, but now supercharged by the AI-driven workflows to stand up the “forward deployed” “solution” even faster.</p>
]]></description><pubDate>Sun, 22 Feb 2026 04:02:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47108025</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47108025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47108025</guid></item><item><title><![CDATA[New comment by sebastos in "Halt and Catch Fire: TV’s best drama you’ve probably never heard of (2021)"]]></title><description><![CDATA[
<p>Great take.<p>There's a line in the first season that runs as an undercurrent through the whole show ("Computers aren't the thing. They're the thing that gets you to the thing"). Joe originally says this to make the viewer think about technology, evoking the dawn of the personal computer and subsequently the internet. But later on, you're invited to re-interpret that statement as being about people: computers and technology were the thing that got the main characters to work together. It's the -people- that are the thing.<p>Part of what makes the show so good is that it's one of the few renditions in TV / movies of the joy of engineering something, and the constant tension that comes from working with great people. Great people inspire you, but they also challenge you. The show does a great job of portraying realistic conflicts that arise between different personality types and roles, as well as cleverly exposing the limitations of those personalities. With just Gordon, you'll get a stable and well engineered product but it won't be revolutionary. Joe has the vision but he can't actually _do_ the substantive part. Cameron has great substance and technical ability, but she's impractical and inflexible. Donna is responsible, effective, and clear-eyed - but unchecked, purely rational decisions erode the soul of a company into nothing. These differences frustrate our characters, and yet there can be no success without them.<p>I think many of us spend our whole careers chasing those rare moments where the right people are in the room solving problems, butting heads, but ultimately doing things they could never do all by themselves.</p>
]]></description><pubDate>Thu, 19 Feb 2026 17:58:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47076788</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=47076788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47076788</guid></item><item><title><![CDATA[New comment by sebastos in "The Codex app illustrates the shift left of IDEs and coding GUIs"]]></title><description><![CDATA[
<p>But invoking No True Scotsman would imply that the focus is on gatekeeping the profession of programming. I don’t think the above poster is really concerned with the prestige aspect of whether vibe bros should be considered true programmers. They’re more saying that if you’re a regular programmer worried about becoming obsolete, you shouldn’t be fooled by the bluster. Vibe bros’ output is not serious enough to endanger your job, so don’t fret.</p>
]]></description><pubDate>Thu, 05 Feb 2026 15:11:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46900499</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46900499</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46900499</guid></item><item><title><![CDATA[New comment by sebastos in "Actors: A Model of Concurrent Computation [pdf] (1985)"]]></title><description><![CDATA[
<p>We’re using the C++ Actor Framework (CAF) to provide the actor system implementation, and then we ended up using a stupid old protobuf to describe the compute graph. Protobuf doubles as a messaging format and a schema with reflection, so it lets us receive pipeline jobs over gRPC and then inflate them with less boilerplate (by C++ standards, anyway).<p>Related to what you were saying, the protobuf schema has special dedicated entries for the entrance and exit nodes, so only the top level pipeline has them. Thus the recursive aspect (where nodes can themselves contain sub-graphs) applies only to the processor-y bit in the middle. That allowed us to encourage the side effects to stay at the periphery, although I think it’s still possible in principle. But at least the design gently guides you towards doing it that way.<p>After having created our system, I discovered the Reactor framework (e.g. Lingua Franca). If I could do it all over, I think I would have built using that formalism, because it is better suited for making composable dataflows. The issue with the actor model for this use case is that actors generally know about each other and refer to each other by name. Composable dataflows want the opposite assumption: you just want to push data into some named output ports, relying on the orchestration layer above you to decide who is hooked up to that port.<p>To solve the above problem, we elected to write a rather involved subsystem within the inflation layer that stitches the business actors together via “topic” actors. CAF also provides a purpose-built flows system that sits over top of the actors, which allows us to write the internals of a business actor in a functional reactive-x style. When all is said and done, our business actors don’t really look much like actors - they’re more like MIMO dataflow operators.<p>When you zoom out, it also becomes obvious that we are in many ways re-creating gstreamer. 
But if you’ve ever used gstreamer before, you may understand why “let’s rest our whole business on writing gstreamer elements” is too painful a notion to be entertained.</p>
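The port-and-topic decoupling described above can be sketched in a few lines. This is an illustrative model of the pattern in Python (not the actual C++/CAF implementation): operators publish to their own output topic and never name their peers, and the orchestration layer alone decides who subscribes to what:

```python
# Illustrative sketch of topic-based stitching (not the real CAF system):
# operators know only their own ports; a wiring layer connects them.

class Topic:
    """Named intermediary: fans a published value out to all subscribers."""
    def __init__(self):
        self.subscribers = []

    def publish(self, value):
        for callback in self.subscribers:
            callback(value)

class Operator:
    """A dataflow operator: transforms input, pushes to its output port."""
    def __init__(self, fn):
        self.fn = fn
        self.out = None  # wired later by the orchestration layer

    def on_input(self, value):
        result = self.fn(value)
        if self.out is not None:
            self.out.publish(result)

def wire(edges, operators):
    """Orchestration layer: edges is a list of (producer, consumer) names.
    Each producer publishes into a topic named after it; consumers subscribe."""
    topics = {producer: Topic() for producer, _ in edges}
    for producer, consumer in edges:
        operators[producer].out = topics[producer]
        topics[producer].subscribers.append(operators[consumer].on_input)
    return topics

# Usage: double -> inc -> sink, with the side effect at the periphery
results = []
ops = {
    "double": Operator(lambda x: 2 * x),
    "inc": Operator(lambda x: x + 1),
    "sink": Operator(results.append),
}
wire([("double", "inc"), ("inc", "sink")], ops)
ops["double"].on_input(3)
# results is now [7]
```

Because operators never hold references to each other, re-arranging the graph is purely a matter of changing the edge list handed to the wiring layer.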
]]></description><pubDate>Mon, 02 Feb 2026 13:52:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46856038</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46856038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46856038</guid></item><item><title><![CDATA[New comment by sebastos in "Actors: A Model of Concurrent Computation [pdf] (1985)"]]></title><description><![CDATA[
<p>Hmm, you think?<p>I’m currently engineering a system that uses an actor framework to describe graphs of concurrent processing. We’re going to a lot of trouble to set up a system that can inflate a description into a running pipeline, along with nesting subgraphs inside a given node.<p>It’s all in-process though, so my ears are perking up at your comment. Would you relax your statement for cases where flexibility is important? E.g. we don’t want to write one particular arrangement of concurrent operations, but rather want to create a meta system that lets us string together arbitrary ones. Would you agree that the actor abstraction becomes useful again for such cases?</p>
]]></description><pubDate>Mon, 02 Feb 2026 05:04:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46852623</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46852623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46852623</guid></item><item><title><![CDATA[New comment by sebastos in "We (As a Society) Peaked in the 90s"]]></title><description><![CDATA[
<p>As Douglas Adams and xkcd #1227 have pointed out, the older generations have complained about this sort of thing since Plato. However, I do not believe this observation settles the matter, because it does not seriously contend with the null hypothesis: that we really have been steadily enshittifying the human experience since Plato.<p>Who has the right of it? Do the new generations simply not know what they are missing? Or is there something in human nature that makes us unavoidably crotchety as we get older and, thus, not to be taken too seriously? In my opinion, it is simply an open mystery.<p>On the one hand, many tangible, measurable things have improved over the last 2000 years, or, indeed, since the 90's. Steven Pinker has made this point somewhat convincingly by looking at unambiguously positive things like reduced infant mortality.<p>On the other hand, every single generation can give detailed accounts of how much more real and alive and authentic the world was a few decades ago. The accounts have similarities across the generations, but they are also rooted in specifics. To argue that we're all mis-remembering or failing to appreciate what the new decade has to offer is to insist on a rather fantastic level of self-doubt. If our entire lived experience is this untrustworthy, it kind of makes it impossible to rule on anything - good OR bad. Why should we default to trusting the younger generation?<p>I think the surrounding technological context of our age has brought this long-simmering matter to a boil. Now that our electronic communication is so sophisticated that we can essentially build "anything", it starts to re-focus our attention from "CAN we build it" to "SHOULD we build it". This question about digital society is complementary to the broader, long-standing civilizational question. 
Have the trillions of hours the human race has expended shaping our society resulted in _better_ life, or just life with a deeper tech tree?<p>One novelty of our time is how certain human enterprises play out at 10x speed in cyberspace. This lets us watch the entire lifecycle as they Rise and Fall, over and over. This lends perspective, and allows patterns to emerge. Indeed, this is exactly how Doctorow came to coin the term enshittification. If there's any truth to the life-really-is-getting-worse theory, you'd want to find some causal mechanism - some constant factor that explains why we've been driving things in the wrong direction so consistently. Digital life lets us see enough trials to start building such an account. You can imagine starting to understand the "physics" of why all human affairs eventually lead to an Eternal September. Wherever brief pockets of goodness pop up, they are like arbitrage opportunities: they tautologically attract more and more people trying to harvest the goodness until it's pulverized - a tragedy of the commons. Perhaps some combination of population growth and the inevitable depletion of Earth's natural resources leads to such a framework.<p>Whatever you think about it, I mostly just wish people would acknowledge that it is an unresolved debate and treat it as such. It is critical to understanding what is worth spending our time on, and it is the kernel of many comparatively superficial disagreements (e.g. the red-blue culture war in US politics).</p>
]]></description><pubDate>Mon, 02 Feb 2026 00:46:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46851046</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46851046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46851046</guid></item><item><title><![CDATA[New comment by sebastos in "The C-Shaped Hole in Package Management"]]></title><description><![CDATA[
<p>Ok - it sounds like you’re right, but I think despite your clarification I remain confused. Isn’t the linked post all about how those two things always have a mingling at the boundary? Like, suppose I want to develop and distribute a c++ user-space application in a cross platform way. I want to manage all my dependencies at the language level, and then there’s some collection of system libraries that I may or may not decide to rely on. How do I manage and communicate that surface area in a cross platform and scalable way? And what does this feel like for a developer - do you just run tests for every supported platform in a separate docker container?</p>
]]></description><pubDate>Tue, 27 Jan 2026 21:20:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46786987</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46786987</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46786987</guid></item><item><title><![CDATA[New comment by sebastos in "The C-Shaped Hole in Package Management"]]></title><description><![CDATA[
<p>>(which it rarely is)<p>You're saying it's _rare_ for developers to want to advance a dependency past the ancient version contained in <whatever the oldest release they want to support> is?<p>Speaking for the robotics and ML space, that is simply the opposite of a true statement where I work.<p>Also doesn't your philosophy require me to figure out the packaging story for every separate distro, too? Do you just maintain multiple entirely separate dependency graphs, one for each distro? And then say to hell with Windows and Mac? I've never practiced this "just use the system package manager" mindset so I don't understand how this actually works in practice for cross-platform development.</p>
]]></description><pubDate>Tue, 27 Jan 2026 17:18:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46783036</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46783036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46783036</guid></item><item><title><![CDATA[New comment by sebastos in "The C-Shaped Hole in Package Management"]]></title><description><![CDATA[
<p>Check it in and build it yourself using the common build system that you and the third party dependency definitely definitely share, because this is the C/C++ ecosystem?</p>
]]></description><pubDate>Tue, 27 Jan 2026 17:08:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46782889</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46782889</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46782889</guid></item><item><title><![CDATA[New comment by sebastos in "The C-Shaped Hole in Package Management"]]></title><description><![CDATA[
<p>I find this sentiment bewildering. Can you help me understand your perspective? Is this specifically C or C++? How do you manage a C/C++ project across a team without a package manager? What is your methodology for incorporating third party libraries?<p>I have spent the better part of 10 years navigating around C++'s deplorable dependency management story with a slurry of Docker and apt, which had better not be part of everyone's story about how C is just fine. I've now been moving our team to Conan, which is also a complete shitshow for the reasons outlined in the article: there is still an imaginary line where Conan lets go and defers to "system" dependencies, with a completely half-assed and non-functional system for communicating and resolving those dependencies which doesn't work at all once you need to cross compile.</p>
]]></description><pubDate>Tue, 27 Jan 2026 17:02:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46782779</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46782779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46782779</guid></item><item><title><![CDATA[New comment by sebastos in "The Holy Grail of Linux Binary Compatibility: Musl and Dlopen"]]></title><description><![CDATA[
<p>Genuine question - are there examples (research? old systems?) of the interface to the operating system being exposed as something other than a library? How might that work, exactly?</p>
]]></description><pubDate>Mon, 26 Jan 2026 21:47:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46772017</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46772017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46772017</guid></item><item><title><![CDATA[New comment by sebastos in "The Holy Grail of Linux Binary Compatibility: Musl and Dlopen"]]></title><description><![CDATA[
<p>But if I get to Bring My Own Dependencies, then I know the exact versions of all my dependencies. That makes testing and development faster because I don’t have to expend effort testing across many different possible platforms. And if development is just generally easier, then maybe it’s easier to react expediently to security notices and release updates as necessary...</p>
]]></description><pubDate>Mon, 26 Jan 2026 21:39:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46771941</link><dc:creator>sebastos</dc:creator><comments>https://news.ycombinator.com/item?id=46771941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46771941</guid></item></channel></rss>