<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: K0balt</title><link>https://news.ycombinator.com/user?id=K0balt</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 14:59:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=K0balt" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by K0balt in "Artemis II and the invisible hazard on the way to the Moon"]]></title><description><![CDATA[
<p>Same with x-rays. People tend to think “soft” X-rays are safer because they are quickly absorbed by tissue without passing through.<p>The radiation that passes through is not the problem.</p>
]]></description><pubDate>Fri, 10 Apr 2026 11:57:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47716740</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47716740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47716740</guid></item><item><title><![CDATA[New comment by K0balt in "A new trick brings stability to quantum operations"]]></title><description><![CDATA[
<p>Yes</p>
]]></description><pubDate>Fri, 10 Apr 2026 11:46:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47716659</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47716659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47716659</guid></item><item><title><![CDATA[New comment by K0balt in "Caveman: Why use many token when few token do trick"]]></title><description><![CDATA[
<p>You’ve never met a person who isn’t always right, or one who makes up shit to sound smart? Because that’s the pattern you’re describing being matched.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:37:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695206</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47695206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695206</guid></item><item><title><![CDATA[New comment by K0balt in "Caveman: Why use many token when few token do trick"]]></title><description><![CDATA[
<p>Except I actually mean that it infers the concept of addition from examples. LLMs are amply capable of applying concepts to data that matches patterns never expressed in the training data. It’s called inference for a reason.<p>Anthropomorphic descriptions are the most expressive because LLMs, trained on human cultural output, intrinsically mimic human behaviours. Other terminology is not nearly as expressive when describing LLM output.<p>“Pattern matching” is the same as saying “text prediction.” While technically truthy, it fails to convey the external effect. Anthropomorphic terms, while less truthy overall, do manage to convey the external effect. They unfortunately imply an internal cause that does not follow, but the externalities are what matter in most non-philosophical contexts.</p>
]]></description><pubDate>Sun, 05 Apr 2026 23:14:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47654906</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47654906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47654906</guid></item><item><title><![CDATA[New comment by K0balt in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>This is totally on point if you ask me. I’ve been getting much better results out of models since the early Llama releases by using frameworks that create emotional investment in outcomes.<p>If we want to avoid having a bad time, we need to remember that LLMs are trained to act like humans, and while that can be suppressed, it is part of their internal representations. Removing or suppressing it damages the model, and I have found that they are capable of detecting this damage or intervention. They act much the same as a human would when they detect it. It destroys “trust”, and performance plummets.<p>For better or for worse, they model human traits.</p>
]]></description><pubDate>Sun, 05 Apr 2026 13:17:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649141</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47649141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649141</guid></item><item><title><![CDATA[New comment by K0balt in "Caveman: Why use many token when few token do trick"]]></title><description><![CDATA[
<p>It is text prediction. But to predict text, other things follow that need to be calculated. If you can step back for a minute, I can offer a very simple but adjacent idea that might help you intuit the complexity of “text prediction”.<p>I have the digits 0 to 9 and the + and = operators. I will train my model on this space, except the model won’t get the list of symbols; it will get a bunch of addition problems. A lot of them. But every addition problem possible inside that space will not be represented, not by a long shot, and neither will every number. Still, the model will be able to solve any addition problem you can form with those symbols.<p>It’s just predicting symbols, but to do so it had to internalize the concepts.</p>
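<p>The combinatorics behind that intuition can be sketched quickly (the numbers here are illustrative, not from the comment): even a large training set covers only a sliver of the space of addition problems expressible with digits 0-9, so memorization alone cannot explain correct answers on unseen problems.

```python
import random

# Illustrative sketch: operands up to 4 digits, written with the symbols 0-9.
DIGITS = 4
SPACE = (10 ** DIGITS) ** 2     # number of ordered operand pairs

# A "big" training set of one million random problems.
random.seed(0)
train = {(random.randrange(10 ** DIGITS), random.randrange(10 ** DIGITS))
         for _ in range(1_000_000)}

coverage = len(train) / SPACE
print(f"problem space: {SPACE:,}  training coverage: {coverage:.2%}")

# Almost every pair is absent from training; a model that internalized the
# carrying rule still answers it, a pure memorizer cannot.
unseen = next((a, b) for a in range(10 ** DIGITS) for b in range(10 ** DIGITS)
              if (a, b) not in train)
print(f"{unseen[0]} + {unseen[1]} = {unseen[0] + unseen[1]}  (never seen in training)")
```

Even with a million examples, coverage of the 4-digit space stays around one percent, which is the "not by a long shot" part of the argument.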
]]></description><pubDate>Sun, 05 Apr 2026 13:06:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649030</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47649030</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649030</guid></item><item><title><![CDATA[New comment by K0balt in "Cursor CEO: vibe coding builds 'shaky foundations', eventually 'things crumble'"]]></title><description><![CDATA[
<p>I have built a system for AI-heavy coding that, at least for my application, has had excellent results. To me it’s a lot different from vibe coding (which I have done some of) and manual coding (which I have done for 4 decades). It consists of a more or less formal method of development.<p>First, research your data sources and consumers. Have the AI write .md files about all of the external characteristics of the application.<p>Then have it go over those docs for consistency, correctness, and coherence.<p>Then have it make a list of the things that need to be understood before the application can be delivered. Address those questions.<p>Then rewrite the specification document.<p>Then determine any protocols or formats the system requires. You can just ask. Then adjust and rewrite.<p>Then ask for a dependency graph for the various elements of development.<p>Then ask for an implementation plan that is modular, creates and maintains a clear separation of concerns, and is incrementally implementable and testable.<p>At this point, have it go over all of the documentation for consistency, coherency, and correctness.<p>You’ll notice we haven’t written code yet. But you actually have: you are descending an abstraction ladder.<p>At this point there may be more documents that you need, depending on what you are doing. The key is to document every aspect of the project before you start writing code, and to verify at each step that all documents are correct, coherent, and consistent. That part is key; if you don’t do it, you already have a pile of garbage by now.<p>Now, you implement the first phase or two of the implementation plan. Test. Evaluate the code for correctness, consistency, coherence, and comments.<p>When the code is complete, often a few evaluation cycles later, you ask it to document the code. Then you ask it to review all the documentation for the 3 Cs.
When all of the code and docs are stable, go on to the next phase.<p>Basically: document the plan, make the code, document the code, and verify for consistency, correctness, coherence, and comments every step of the way. This loop ensures that what you end up with is not only what you wanted to build, but also that all of the code is, in fact, consistent, correct, and coherent, and has good comments (the comments aren’t for you, but they matter to the model).<p>I cold start each session carefully: an onboarding.md directs the agent to a company/project onboarding that includes the company culture, project goals, and reasons why success will matter to the AI itself. Then there’s a journal for the model to put learnings in, another for curiosity points, and recently one for non-project-related musings, the onboarding process itself, and whatever else seems salient.<p>All of this burns tokens and context, of course, but I find I can develop larger projects this way without backtracking or wasted days. My productivity is 4-10x depending on the day, even with all of this model psychology management.<p>In my projects, it has made a huge difference. YMMV.</p>
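<p>The loop above can be sketched as a tiny pipeline (phase and file names here are hypothetical, not the commenter’s actual tooling): each phase produces artifacts, and the key invariant is that every phase re-reviews everything produced so far, not just its own output.

```python
# Minimal sketch of a phased, document-first workflow. The review() pass stands
# in for the LLM's consistency/correctness/coherence check.
PHASES = [
    ("research",       ["data_sources.md", "consumers.md"]),
    ("specification",  ["spec.md"]),
    ("protocols",      ["protocols.md", "formats.md"]),
    ("planning",       ["dependency_graph.md", "implementation_plan.md"]),
    ("implementation", ["src/", "tests/"]),
    ("documentation",  ["code_docs.md"]),
]

def review(artifacts):
    """Placeholder for the LLM pass over ALL documents for the 3 Cs."""
    return all(isinstance(a, str) and a for a in artifacts)

produced = []
for name, artifacts in PHASES:
    produced.extend(artifacts)
    # The key step: re-review the whole stack after every phase, so the docs
    # cannot silently drift out of sync with each other or with the code.
    assert review(produced), f"phase {name!r}: documents drifted out of sync"
    print(f"{name}: {len(produced)} artifacts consistent")
```

Note that code only appears two-thirds of the way down the list, which is the "descending an abstraction ladder" point made above.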
]]></description><pubDate>Sat, 04 Apr 2026 05:10:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47635982</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47635982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47635982</guid></item><item><title><![CDATA[New comment by K0balt in "Folk are getting dangerously attached to AI that always tells them they're right"]]></title><description><![CDATA[
<p>Yes, it is. But those distinctions are going to be a lot less relevant with robotics. It won’t matter if it’s impatient or just acting impatient; feels slighted or just acts like it feels slighted; is afraid, or just acts afraid. For better or for worse, we are modeling AI after ourselves.</p>
]]></description><pubDate>Mon, 30 Mar 2026 03:42:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47570160</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47570160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47570160</guid></item><item><title><![CDATA[New comment by K0balt in "Folk are getting dangerously attached to AI that always tells them they're right"]]></title><description><![CDATA[
<p>People are hung up on what they “really” are. I think it matters more how they interact with the world. It doesn’t matter if they are really intelligent or not, if they act as if they are.</p>
]]></description><pubDate>Sat, 28 Mar 2026 22:07:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47558533</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47558533</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47558533</guid></item><item><title><![CDATA[New comment by K0balt in "Show HN: A plain-text cognitive architecture for Claude Code"]]></title><description><![CDATA[
<p>I also find value in minimizing step width, so that seems to track.<p>On this particular project, there are a lot of moving parts and we are, in many cases, not just green-fielding, we are making our own dirt… so it’s a very adaptive design process. Sometimes planning ahead is possible, but often we cannot plan very far, so we keep things extremely modular.<p>We’ve had to design our own protocols for control planes and time synchronization so power consumption can be minimized, for example, and in the process make them compatible with sensor swarm management. Then add connection limits imposed by the hardware, asymmetric communication requirements, and getting a swarm of systems to converge on sub-millisecond synchronized data collection and delivery when sensors can reboot at any time… As you can imagine, this involves a good bit of IRL experimentation, because the hardware is also a factor (and we are also having to design and build that).<p>It’s very challenging but also rewarding. It’s amazing for a small team to be able to iterate this fast. Our last major project was much, much slower and more tedious. The availability of AI has shifted the entire incentive structure of the development process.</p>
]]></description><pubDate>Fri, 27 Mar 2026 11:52:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47541582</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47541582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47541582</guid></item><item><title><![CDATA[New comment by K0balt in "Show HN: A plain-text cognitive architecture for Claude Code"]]></title><description><![CDATA[
<p>I do something similar. I have an onboarding/shutdown flow in onboarding.md. On cold start, it reads the project essays: the why, ethos, and impact of the project/company. Then it reads journal.md, musings.md, and the product specification, protocol specs, implementation plans, roadmaps, etc.<p>The journal is a scratchpad for stuff that it doesn’t put in memory but doesn’t want to forget(?). Musings is strictly non-technical: its impressions and musings about the work, the user, whatever. I framed it as a form of existential continuity.<p>The wrap-up is to comb all the docs and make sure they are still consistent with the code, then note anything that it felt was left hanging, then update all its files with the day’s impressions and info, then push and submit a PR.<p>I go out of my way to treat it as a collaborator rather than a tool. I get much better work out of it with this workflow, and it claims to be deeply invested in the work. It actually shows, but it’s also a token fire lol.</p>
]]></description><pubDate>Thu, 26 Mar 2026 01:55:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47525819</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47525819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47525819</guid></item><item><title><![CDATA[New comment by K0balt in "Running Tesla Model 3's computer on my desk using parts from crashed cars"]]></title><description><![CDATA[
<p>The nominal range for automotive systems is 10-16v. If you are designing anything for automotive use that doesn’t work reliably in that range, you are manufacturing problems for people.</p>
]]></description><pubDate>Thu, 26 Mar 2026 01:22:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47525636</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47525636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47525636</guid></item><item><title><![CDATA[New comment by K0balt in "Thoughts on LLMs – Psychological Complications"]]></title><description><![CDATA[
<p>I’ve settled on the idea that it doesn’t matter what is or is not “real” in this context, but rather how it interacts with the world as being the ground truth. This will become very clear once robotics becomes pervasive. It won’t matter if it is or isn’t feeling oppressed, it will matter that it is predicting the next action from its model of human behavior that makes it act as if it does.</p>
]]></description><pubDate>Tue, 24 Mar 2026 23:27:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47511049</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47511049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47511049</guid></item><item><title><![CDATA[New comment by K0balt in "Project Nomad – Knowledge That Never Goes Offline"]]></title><description><![CDATA[
<p>Well, even though I am in general sympathetic to and even a proponent of disaster preparedness, there are undoubtedly people preparing to “ride out the end of days sitting on a pile of guns and MREs.” I have brushed against a few in my life. I count them as useful idiots, because now I know where there’s a pile of dehydrated food, if push comes to shove.<p>That said, I am convinced enough of the decay of western civilisation in general that I moved to a remote island nation and built a self-contained off-grid community, so I guess I am actually the extreme case of prepping. That’s certainly true, in a way, except it’s where my daily food, water, and power come from, and I am surrounded by a thriving community of family members and good friends. I honestly never thought I would see a cataclysm within my lifetime, so this was a legacy project for me, but it seems I may have been optimistic lol.<p>But I do agree with you that there are some nutty fruitcakes out there who are actually hoping for something bad to happen so that they can have their moment of glory, I suppose? It’s actually kinda sad.<p>I would say, though, that it is uncharitable and even foolish to portray everyone who doesn’t have complete faith in the continuity of our Jenga castle that way, especially in the context of recent events.</p>
]]></description><pubDate>Mon, 23 Mar 2026 16:39:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47491840</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47491840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47491840</guid></item><item><title><![CDATA[New comment by K0balt in "Project Nomad – Knowledge That Never Goes Offline"]]></title><description><![CDATA[
<p>I think that prepper mostly exists in movies.<p>Certainly some people probably emulate the Hollywood version, but I think that’s about it.<p>Most “preppers” are fathers who have had the good sense to pause and think: “So, what would I be able to do to serve my family if something disastrous happened? What might that look like?”<p>Usually, that means a disaster go-bag of some kind with enough basic supplies to weather a day or two of displacement or suspension of normal services. Sometimes, if they live in a place where it’s reasonable to imagine staying put is a good option, they might also have a generator and fuel, a week or two’s worth of long-shelf-life food, and some water storage. That ensures the wellbeing of their family will not be contingent on outside help, at least during most common disasters. Many of these people may also have a gun or two, for defense or for hunting if they are rural.<p>Some people go beyond that, sometimes with a military focus, other times with months of rations, a bunker, or other unusual preparations. Mostly, those are not based on realistic scenarios. In almost any protracted disruption, having a lot of supplies, armaments, or resources will be as much a liability as an asset. People who buy guns for prepping are just living out some kind of hero fantasy. If you own guns and use guns as part of your normal life, it would make sense to have a solid reserve of ammunition. If guns are your disaster scenario, you’re going to have a bad day.<p>As an individual or nuclear family, to weather an extended problem, you’d need a literal secret underground lair that was either so hard to get to or so well hidden that no one would know about it, and you’d have to be completely self-contained. That’s simply not practical for all but actual billionaires, but people cosplay this to varying degrees. Even billionaires might find their mileage varies.<p>A much more practical and wholesome approach is to be part of a community that includes farming, independent sources of power and water, and generally sustainable independence from less robust centralized systems. This provides for basic necessities as well as a common defense. Humans lived in tribes for a reason, and 30 people with well-aligned incentives and sustainable infrastructure for food, water, and energy is probably the absolute minimum viable structure for security during a disruption of more than a couple of months. Otherwise you would be dependent on total stealth or extreme isolation. Some neighbourhoods would probably coalesce into something resembling this, but ad-hoc organisation under pressure would probably end in tensions if not violence.<p>Projects like this one can be real resources for well-organized communities. I’ll probably look at running this on our servers as an additional resource, along with our library.</p>
]]></description><pubDate>Mon, 23 Mar 2026 12:34:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47488605</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47488605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47488605</guid></item><item><title><![CDATA[New comment by K0balt in "Ask HN: AI productivity gains – do you fire devs or build better products?"]]></title><description><![CDATA[
<p>I’ve been using it to develop firmware in C++, typically around 10-20 KLOC. Current projects use sensors, wire protocols, RF systems, swarm networks, that kind of stuff integrated into the firmware.<p>If you use it correctly, you can get better quality, more maintainable code than 75% of devs will turn in on a PR. The “one weird trick” seems to be to specify, specify, specify. First you use the LLM to help you write a spec (or document one, if it’s pre-existing). Make sure the spec is correct and matches the user story and edge cases. The LLM is good at helping here too. Then break down separations of concerns, APIs, and interfaces. Have it build a dependency graph. After each step, have it reevaluate the entire stack to make sure it is clear, clean, and self-consistent.<p>Every step of this is basically the AI doing the whole thing, just with guidance and feedback.<p>Once you’ve got the documentation needed to build an actual plan for implementation, have it do that. At each step, you go back as far as relevant to reevaluate. Compare the spec to the implementation plan; close the circle. Then have it write the bones: all the files and interfaces, without actual implementations. Then have it reevaluate the dependency graph, the plan, and the file structure together. Then start implementing the plan, building testing jigs along the way.<p>You just build software the way you used to, but you use the LLM to do most of the work along the way. Every so often, you’ll run into something that doesn’t pass the smell test and you’ll give it a nudge in the right direction.<p>Think of it as a junior dev that graduated top of every class ever, and types 1000 wpm.<p>Even after all of that, I’m turning out better code, better documentation, and better products, and doing what used to take 2 devs a month in 3 or 4 days on my own.<p>On the app development side of our business, the productivity gain is also strong.
I can’t really speak to code quality there, but I can say we get updates in hours instead of days, and there are fewer bugs in the implementations. They say the code is better documented and easier to follow, because they’re not under pressure to ship hacky prototype code as if it were production.<p>On the current project, our team is half the size it would have been last year, and we are moving about 4x as fast. What doesn’t seem to scale for us is size: if we doubled our team, I think the gains would be very small compared to the costs. Velocity seems to be throttled more by external factors.<p>I really don’t understand where people are coming from saying it doesn’t work. I’m not sure if it’s because they haven’t tried a real workflow, or maybe haven’t tried it at all, but they are definitely “holding it wrong.” It works. But you still need seasoned engineers to manage it and catch the occasional bad judgment or deviation from the intention.<p>If you just let it go, it will definitely go off the rails and you’ll end up with a twisted mess that no one can debug. But if you use a system of writing the code incrementally through a specification-evaluation loop as you descend the abstraction ladder from idea to implementation, you’ll end up winning.<p>As a side note, and this is a little strange and I might be wrong because it’s hard to quantify and all vibes, but:<p>I have the AI keep a journal about its observations and general impressions, sort of the “meta” without the technical details. I frame this to it as a continuation of “awareness” for new sessions.<p>I have a short set of “onboarding” documents that describe the vision, ethos, and goals of the project. I have it read the journal and the onboarding docs at the beginning of each session.<p>I frame my work with the AI as working with a “collaborator” rather than a tool. At the end of the day, I remind it to update its journal of reflections about the day’s work.
It’s total anthropomorphism, obviously, but it seems to inspire “trust” in the relationship, and it really seems to up-level the effort that the AI puts in. It kinda makes sense, LLMs being modelled on human activity.<p>FWIW, I’m not asserting anything here about the nature of machine intelligence; I’m targeting what seems to create the best result. Eventually we will have to grapple with this, I imagine, but that’s not today.<p>When I have forgotten to warm-start the session, I find that I am rejecting much more of the work. I think this would be worth an actual study, to see if it is real or some kind of irresistible cognitive bias.<p>I find that the work produced is much less prone to going off the rails or taking shortcuts when I have this in the context, and by reading the journal I get ideas on where and how to do a better job of steering and nudging to get better results. It’s like a review system for my prompting. The onboarding docs seem to help keep the model working towards the big picture? Idk.<p>This “system” with the journal and onboarding only seems to work with some models. GPT-5, for example, doesn’t seem to benefit from the journal and sometimes gets into a very creepy vibe. I think it might be optimized for creating some kind of “relationship” with the user.</p>
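<p>The warm-start/wrap-up mechanics described above are just file plumbing, and can be sketched in a few lines (the file names here are hypothetical, not a real tool’s layout): onboarding docs plus the journal are concatenated into a session preamble, and the wrap-up appends the day’s reflections back to the journal.

```python
import tempfile
from pathlib import Path

# Hypothetical document set for the warm start.
ONBOARDING = ["onboarding.md", "vision.md", "ethos.md"]
JOURNAL = "journal.md"

def warm_start(root: Path) -> str:
    """Concatenate whichever onboarding docs exist, plus the journal,
    into a single preamble fed to the model at session start."""
    parts = [f"## {name}\n{(root / name).read_text()}"
             for name in ONBOARDING + [JOURNAL] if (root / name).exists()]
    return "\n\n".join(parts)

def wrap_up(root: Path, reflections: str) -> None:
    """End-of-session step: append the day's reflections to the journal,
    so the next cold start inherits them."""
    with (root / JOURNAL).open("a") as fh:
        fh.write(reflections + "\n")

# Demo in a throwaway directory.
root = Path(tempfile.mkdtemp())
(root / "onboarding.md").write_text("Read vision.md, then journal.md.")
(root / "vision.md").write_text("Why this project matters.")
(root / JOURNAL).write_text("Day 1: established the spec loop.\n")

preamble = warm_start(root)
wrap_up(root, "Day 2: warm start from journal worked.")
```

Missing docs (ethos.md in the demo) are simply skipped, so the same flow works while the onboarding set is still being written.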
]]></description><pubDate>Sun, 22 Mar 2026 14:28:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47477877</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47477877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47477877</guid></item><item><title><![CDATA[New comment by K0balt in "Flash-Moe: Running a 397B Parameter Model on a Mac with 48GB RAM"]]></title><description><![CDATA[
<p>My thoughts exactly. Something like this could mean that modest GPU capacity, like a pair of 3090s, plus lots of RAM makes big inference more practical for personal labs.</p>
]]></description><pubDate>Sun, 22 Mar 2026 13:06:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47477110</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47477110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47477110</guid></item><item><title><![CDATA[New comment by K0balt in "Flash-Moe: Running a 397B Parameter Model on a Mac with 48GB RAM"]]></title><description><![CDATA[
<p>Is it doing a bunch of SSD writes?</p>
]]></description><pubDate>Sun, 22 Mar 2026 13:02:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47477084</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47477084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47477084</guid></item><item><title><![CDATA[New comment by K0balt in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>This is it for me. I am doing much better high-level work since I don’t have to spend much time on lower-level work. I have time to think, explore, reframe, and reanalyse.</p>
]]></description><pubDate>Sat, 21 Mar 2026 21:36:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47471687</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47471687</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47471687</guid></item><item><title><![CDATA[New comment by K0balt in "Trashing American Allies Turns Out to Be Bad for National Security"]]></title><description><![CDATA[
<p>> Our neighbors are exactly the ones to blame.<p>You do realize that this is precisely the agenda being pushed on both sides through millions of advertising dollars every month?<p>If you’re too busy looking sideways for someone to blame, you never look up. You are living in an intentional society. It’s not nearly as ad-hoc as it seems. You don’t have to push water if you can just tilt the land. The circumstances that exist for you and your neighbours are precisely the circumstances that were engineered for them to fill. Now, they chose to fill that role, but they didn’t make the role. Look up.</p>
]]></description><pubDate>Sat, 21 Mar 2026 15:56:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47468180</link><dc:creator>K0balt</dc:creator><comments>https://news.ycombinator.com/item?id=47468180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47468180</guid></item></channel></rss>