<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ramity</title><link>https://news.ycombinator.com/user?id=ramity</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 10:56:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ramity" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ramity in "Ask HN: Are there examples of 3D printing data onto physical surfaces?"]]></title><description><![CDATA[
<p>I run a business that makes 3D printed braille molds that are used to repeatably emboss paper. I haven't considered the molds being offline storage, but I suppose they are. I mostly operate with the assumption of a 10 year shelf life for the PETG molds, but ink-free, embossed paper has an excellent lifespan if stored correctly.<p>I guess you could consider it an "offline datastore as a service." It would be a pretty good offline store of keys with a way to request a paper copy. There are certainly issues of trust and physical security, but wrapping it with encryption would be easy. It would also benefit from your government's legal protections for mail. There might actually be a use case here.<p>A couple of fast facts:<p>- Current: 26 * 32 = 832 cells * 6 dot braille = 4992 bits/mold/page<p>- Possible: 28 * 34 = 952 cells * 6 dot braille = 5712 bits/mold/page<p>- Maybe some more headroom, but that's what is possible with current spacings</p>
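The capacity figures above reduce to simple arithmetic; a quick Python sketch (the function name is mine):

```python
# Capacity of one embossed braille mold/page: each braille cell is a
# grid of raised/flat dots, so each cell stores dots_per_cell bits.
def mold_capacity_bits(cols: int, rows: int, dots_per_cell: int = 6) -> int:
    return cols * rows * dots_per_cell

current = mold_capacity_bits(26, 32)   # 832 cells -> 4992 bits
possible = mold_capacity_bits(28, 34)  # 952 cells -> 5712 bits
print(current, possible)  # 4992 5712
```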
]]></description><pubDate>Sun, 15 Feb 2026 02:09:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47020417</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=47020417</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47020417</guid></item><item><title><![CDATA[New comment by ramity in "How to wrangle non-deterministic AI outputs into conventional software? (2025)"]]></title><description><![CDATA[
<p>Let me first start off by saying that I and many others have stepped into this pitfall. This is not an attack, but a good faith attempt to share painfully acquired knowledge. I'm actively using AI tooling, and this comment isn't a slight on the tooling but rather on how we're all seemingly putting the circle in the square hole and declaring that it fits.<p>Querying an LLM to output its confidence in its output is a misguided pattern, despite being commonly applied. LLMs are not good at classification tasks, as the author states. They can "do" it, yes. Perhaps better than random sampling can, but random sampling can "do" it as well. Don't get too tied to that example. The idea here is that if you are okay with something getting the answer wrong every so often, LLMs might be your solve, but this is a post about conforming non-deterministic AI to classical systems. Are you okay if your agent picks the red tool instead of the blue tool 1%, 10%, etc. of the time? If so, you're never not going to be wrangling, and that's the reality often left unspoken when integrating these tools.<p>While tangential to this article, I believe it's worth stating that when interacting with an LLM in any capacity, remember your own cognitive biases. You often want the response to work, and while generated responses may look good and fit your mental model, it requires increasingly obscene levels of critical evaluation to see through the fluff.<p>For some, there will be inevitable dissonance reading this, but consider that these experiments are local examples. Their lack of robustness will become apparent with large scale testing. The data spaces these models have been trained on are unfathomably large in both quantity and depth, but under/over sampling bias will be ever present (just to name one issue).<p>Consider the following thought experiment: You are an applicant for a job, submitting your resume with the knowledge that it will be fed into an LLM. Let's confine your goal to something very simple. 
Make it say something. Let's oversimplify for the sake of the example and say complete words are tokens. Consider "collocations": [bated] breath, [batten] down, [diametrically] opposed, [inclement] weather, [hermetically] sealed. Extend this to contexts: [oligarchy] government, [chromosome] biology, [paradigm] technology, [decimate] to kill. With this in mind, consider how each word of your resume "steers" the model's subsequent response, and consider how the data each model is trained on can subtly influence its response.<p>Now let's bring it home and tie the thought experiment into confidence scoring in responses. Let's say it's reasonable to assume that the results of low accuracy/low confidence models are less commonly found on the internet than those of higher performing ones. If that can be entertained, extend the argument to confidence responses. Maybe the term "JSON" or any other term used in the model input is associated with high confidences.<p>Alright, wrapping it up. The end point here is that the confidence value provided in the model output is not the likelihood that the answer in the response is correct, but rather the most likely value to follow the stream of tokens in the combined input and output. The real sampled confidence values exist closer to the code, but they are limited to individual tokens, not series of tokens.</p>
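To make the token-level point concrete, here's a toy sketch with made-up logprob values. A sequence-level likelihood can be assembled from the per-token log-probabilities sampled during generation, and that quantity is entirely different from any "confidence: 0.95" the model prints in its own response text:

```python
import math

# Hypothetical per-token log-probabilities sampled during generation.
# These are the "real" confidence values; each applies to one token only.
token_logprobs = [-0.05, -0.30, -0.02, -1.20]

# Joint probability of the generated sequence = product of per-token
# probabilities = exp(sum of logprobs).
sequence_prob = math.exp(sum(token_logprobs))
print(round(sequence_prob, 3))  # 0.208

# A "confidence" the model writes into its response (e.g.
# '{"answer": ..., "confidence": 0.95}') is just the most likely
# continuation of the prompt, not a measured probability of correctness.
```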
]]></description><pubDate>Fri, 16 Jan 2026 19:31:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46650997</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=46650997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46650997</guid></item><item><title><![CDATA[New comment by ramity in "VPN location claims don't match real traffic exits"]]></title><description><![CDATA[
<p>I see I was mistaken, but I'm tempted to continue poking holes. Trying a different angle, and it may be a stretch: could a caching layer within the VPN provider cause these sorts of "too fast" RTTs?<p>Let's say you're a global VPN provider and you want to reduce as much traffic as possible. A user accesses the entry point of your service to reach a website that's blocked in their country. For the benefit of this thought experiment, let's say the content is static/easily cacheable, or that because the user is testing multiple times, the dynamic content becomes cached. Could this play into the results presented in this article? Again, I know I'm moving the goalposts here, but I'm just trying to be critical of how the author arrived at their conclusion.</p>
]]></description><pubDate>Sat, 13 Dec 2025 22:27:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46258788</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=46258788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46258788</guid></item><item><title><![CDATA[New comment by ramity in "VPN location claims don't match real traffic exits"]]></title><description><![CDATA[
<p>Thanks for your informative reply. I see now I was approaching this incorrectly. I was considering drawing conclusions from a high RTT rather than an RTT so small it would be impossible for the packet to have covered the distance.</p>
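That "impossible to have covered the distance" check can be sketched as a simple light-speed bound (a loose lower bound, since light in fiber travels at roughly 2/3 c and real paths are never straight):

```python
# Lower bound on RTT between two points: signal speed in fiber is
# ~200,000 km/s (~2/3 c), and the packet must cover the distance twice.
C_FIBER_KM_PER_MS = 200.0  # km per millisecond

def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / C_FIBER_KM_PER_MS

# A claimed exit ~8000 km away cannot answer in under ~80 ms; any
# measured RTT below that falsifies the location claim.
print(min_rtt_ms(8000))  # 80.0
```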
]]></description><pubDate>Sat, 13 Dec 2025 22:15:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46258696</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=46258696</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46258696</guid></item><item><title><![CDATA[New comment by ramity in "VPN location claims don't match real traffic exits"]]></title><description><![CDATA[
<p>Contrasting take: RTT plus a service providing black box knowledge is not equivalent to knowledge of the backbone. Assuming traffic is always efficiently routed seems dubious at a global scale. The supporting infrastructure of telecom is likely shaped by the volume/size of traffic and not by shortest paths. I'll confess my evaluation here might be overlooking some details. I'm curious about others' thoughts on this.</p>
]]></description><pubDate>Sat, 13 Dec 2025 21:45:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46258420</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=46258420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46258420</guid></item><item><title><![CDATA[New comment by ramity in "iPhone Typos? It's Not Just You – The iOS Keyboard Is Broken [video]"]]></title><description><![CDATA[
<p>35m ago edit: Apple uses many predictive systems for typing. My sentiment in pointing out just slide to type might be misguided, as it does not exist in a vacuum. I'd love to see these tests redone with slide to type disabled. I'm leaving the original comment below for reference.<p>Slide to type. This "issue" is at most 6 years old for iOS users.<p>Turn off slide to type if you do not use it. Slide to type performs key-resizing logic, which is the direct cause of this issue. Please upvote this comment for visibility.<p>Please reply if you think I'm wrong. I see this get posted frequently enough that I'm actually losing it.<p>Please refer to <a href="https://youtu.be/hksVvXONrIo?si=XD7AKa8gTl85_rJ6&t=72" rel="nofollow">https://youtu.be/hksVvXONrIo?si=XD7AKa8gTl85_rJ6&t=72</a> (timestamp 1:12) to see that slide to type is enabled.</p>
]]></description><pubDate>Thu, 11 Dec 2025 16:13:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46233219</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=46233219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46233219</guid></item><item><title><![CDATA[New comment by ramity in "WiFi-3D-Fusion – Real-time 3D motion sensing with Wi-Fi"]]></title><description><![CDATA[
<p>I didn't see any reference to a sender or actively blasting RF from the same access point. I think the approach relies on other signal sources creating reflections to a passively monitoring access point and attempting to make sense of that.</p>
]]></description><pubDate>Tue, 26 Aug 2025 02:29:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45021638</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=45021638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45021638</guid></item><item><title><![CDATA[New comment by ramity in "WiFi-3D-Fusion – Real-time 3D motion sensing with Wi-Fi"]]></title><description><![CDATA[
<p>5GHz WiFi has a wavelength of ~6cm and 2.4GHz ~12.5cm. Anything achieving finer resolution is a result of interferometry or a non-WiFi signal. Mentioning this might not add much substance to the conversation, but it felt worth adding.</p>
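Those wavelengths follow directly from lambda = c / f:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_cm(freq_ghz: float) -> float:
    # lambda = c / f, converted from metres to centimetres
    return C / (freq_ghz * 1e9) * 100

print(round(wavelength_cm(5.0), 1))   # 6.0
print(round(wavelength_cm(2.4), 1))   # 12.5
```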
]]></description><pubDate>Tue, 26 Aug 2025 02:17:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45021557</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=45021557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45021557</guid></item><item><title><![CDATA[New comment by ramity in "WiFi-3D-Fusion – Real-time 3D motion sensing with Wi-Fi"]]></title><description><![CDATA[
<p>I'm interested but also incredibly dubious. Not because it seems impossible, but the opposite. On one hand, an open source repo like this, built for hackable extension, should be praised, but the "Why Built WiFi-3D-Fusion" section[0] gives me very, very bad vibes. Here are some excerpts I especially take issue with:<p>> "Why? Because there are places where cameras fail, dark rooms, burning buildings, collapsed tunnels, deep underground. And in those places, a system like this could mean the difference between life and death."<p>> "I refuse to accept 'impossible.'"<p>WiFi sensing is an established research domain that has long struggled with line of sight requirements, signal reflection, interference, etc. This repo has the guise of research, but it seems to omit the work of the field it resides in. It's one thing to detect motion or approximately track a connected device through space, but "burning buildings, collapsed tunnels, deep underground" are exactly the kind of non-standardized environments where WiFi sensing performs especially poorly.<p>I hate to judge so quickly based on a readme, but I'm not personally interested in digging deeper or spinning up an environment. Consider this before aligning with my sentiment.<p>[0] <a href="https://github.com/MaliosDark/wifi-3d-fusion/blob/main/README.md#-why-built-wifi-3d-fusion" rel="nofollow">https://github.com/MaliosDark/wifi-3d-fusion/blob/main/READM...</a></p>
]]></description><pubDate>Tue, 26 Aug 2025 02:10:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45021507</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=45021507</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45021507</guid></item><item><title><![CDATA[The Great Generalist Extinction]]></title><description><![CDATA[
<p>Article URL: <a href="https://wryco.com/posts/1754939732-the-great-generalist-extinction">https://wryco.com/posts/1754939732-the-great-generalist-extinction</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44869791">https://news.ycombinator.com/item?id=44869791</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 11 Aug 2025 21:40:54 +0000</pubDate><link>https://wryco.com/posts/1754939732-the-great-generalist-extinction</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44869791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44869791</guid></item><item><title><![CDATA[Auditing AI]]></title><description><![CDATA[
<p>Article URL: <a href="https://wryco.com/posts/1754462729-auditing-ai">https://wryco.com/posts/1754462729-auditing-ai</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44869788">https://news.ycombinator.com/item?id=44869788</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 11 Aug 2025 21:40:42 +0000</pubDate><link>https://wryco.com/posts/1754462729-auditing-ai</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44869788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44869788</guid></item><item><title><![CDATA[Making Software Harder]]></title><description><![CDATA[
<p>Article URL: <a href="https://wryco.com/posts/1754297319-making-software-harder">https://wryco.com/posts/1754297319-making-software-harder</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44792675">https://news.ycombinator.com/item?id=44792675</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 04 Aug 2025 23:46:56 +0000</pubDate><link>https://wryco.com/posts/1754297319-making-software-harder</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44792675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44792675</guid></item><item><title><![CDATA[New comment by ramity in "A Rust shaped hole"]]></title><description><![CDATA[
<p>I really want to love Rust, and I understand the niches it fills. My temporary allegiance to it comes down to performance, but I'm also drawn by the crate ecosystem and the support provided by cargo.<p>What's so damning to me is how debilitatingly unopinionated it is in situations like error handling. I've used it enough to at least appreciate its advantages, but being strongly steered towards including a crate (though not required) to help with error processing seems to mirror the inconvenience of having to include an exception type in another language. I don't think it would be the end of the world if it came with some creature comforts here and there.</p>
]]></description><pubDate>Thu, 17 Jul 2025 11:13:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44591962</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44591962</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44591962</guid></item><item><title><![CDATA[New comment by ramity in "Code and Trust: Vibrators to Pacemakers"]]></title><description><![CDATA[
<p>I'll provide a contrasting, pessimistic take.<p>> How do you write programs when a bug can kill their user?<p>You accept that you will have a hand in killing users, and you fight like hell to prove yourself wrong. Every code change, PR approval, process update, unit test, hell, even meetings all weigh heavier. You move slower, leaving no stone unturned. To touch on the pacemaker example, even buggy code that kills X% of users will keep Y% alive/improve their QoL. Does the good outweigh the bad? Even small amounts of complexity can bubble up and lead to unintended behavior. In a corrected vibrator example, what if the frequency becomes so large it overflows and leads to burning the user? Youch.<p>The best insight I have to offer is that time is often overlooked and taken for granted. I'm talking Y2K data types, time drift, time skew, special relativity, precision, and more. Some of the most interesting and disturbing bugs I've come across all occurred because of time. "This program works perfectly fine, but after 24 hours it starts infinitely logging." If time is an input, do not underestimate time.<p>> How do we get to a point to `trust` it?<p>You traverse the entire input space to validate the output space. This is not always possible. In those cases, audit compliance can take the form of traversing a subset of the input space deemed "typical/expected" and moving forward with the knowledge that edge cases can exist. Even with fully audited software, oddities like a cosmic bit flip can occur. What then? At some point, in this beautifully imperfect world, one must settle for good enough over perfection.<p>Astute readers might be furiously pounding their keyboards mentioning the halting problem: we can't even verifiably prove that a particular input will produce an output, let alone verify an entire space.<p>> I am convinced that open code, specs and (processes) must be requirement going forward.<p>I completely agree, but I don't believe this will outright prevent user deaths. Having open code, specs, etc. aids accountability, transparency, and external verification. I must express that I feel there are pressures against this, as there is monumental power in being the only party able to ascertain the facts.</p>
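The frequency-overflow scenario is easy to reproduce in fixed-width arithmetic. A sketch (Python ints don't overflow, so the 16-bit wraparound a C `uint16_t` would exhibit is simulated with a mask):

```python
# Simulating a 16-bit unsigned counter driving a motor's frequency.
MASK_16 = 0xFFFF

def bump_frequency(freq: int, step: int) -> int:
    """Increment with 16-bit wraparound, as a C uint16_t would."""
    return (freq + step) & MASK_16

freq = 65_000
freq = bump_frequency(freq, 1_000)  # 66_000 wraps: 66_000 & 0xFFFF = 464
print(freq)  # 464 -- a "small" value, but any duty-cycle math fed by it
             # may now misbehave in ways no one tested for
```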
]]></description><pubDate>Thu, 10 Jul 2025 06:09:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44517811</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44517811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44517811</guid></item><item><title><![CDATA[New comment by ramity in "Learnings from building AI agents"]]></title><description><![CDATA[
<p>elzbardico is pointing out that the author has the confidence value generated as part of the response output, rather than it being an actual measure of confidence in the output.</p>
]]></description><pubDate>Thu, 26 Jun 2025 16:44:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44389054</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44389054</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44389054</guid></item><item><title><![CDATA[New comment by ramity in "Learnings from building AI agents"]]></title><description><![CDATA[
<p>I too once fell into the trap of having an LLM generate a confidence value in a response. This is a very genuine concern to raise.</p>
]]></description><pubDate>Thu, 26 Jun 2025 16:34:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44388965</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44388965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44388965</guid></item><item><title><![CDATA[Ask HN: Is GPU nondeterminism bad for AI?]]></title><description><![CDATA[
<p>Argument:<p>- GPUs use parallelism<p>- Floating point math is not associative<p>- Rounding error accumulates differently<p>- GPUs generate noisy computations<p>- There is a known noise vs. accuracy tradeoff in data<p>- Noise requires overparameterization/a larger network to generalize<p>- Overparameterization prevents the network from fully generalizing to the problem space<p>Therefore, GPU nondeterminism seems bad for AI. Where did I go wrong?<p>Questions:<p>- Has this been quantified? As I understand it, the answer would be situational and tied to other details like network depth, width, architecture, learning rate, etc. At the end of the day, entropy means some sort of noise/accuracy tradeoff, but are we talking magnitudes like 10%, 1%, 0.1%?<p>- Because of the noise/accuracy tradeoff, it seems to hold that one could use a smaller network trained deterministically and achieve the same performance as a bigger network trained non-deterministically. Is this true, even if we're talking only a single neuron of difference?<p>- If something like the problem space of driving a car is too large to be fully represented in a dataset (consider the atoms of the universe as a hard drive), how can we be sure a dataset is a perfect sampling of the problem space?<p>- Wouldn't overparameterization guarantee the model learns the dataset and not the problem space? Is it incorrect to conceptualize this as using a polynomial of a higher degree to represent another?<p>- Even with perfect sampling, noisy computation seems incompatible when a small amount of noise is capable of causing an avalanche. If this noise is somehow quantified at 1%, couldn't you say the dataset's "impression" left in the network would be 1% larger than it should be, maybe spilling over in a sense? Eval data points "very close to" but not included in the training data would be more likely to incorrectly evaluate to the same "nearby" training datapoint. Maybe I'm reinventing edge cases and overfitting here, but I don't think overfitting just spontaneously starts happening towards the end of training.</p>
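The second premise (floating point math is not associative) takes two lines to demonstrate, and it's why a parallel reduction's result depends on summation order:

```python
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # 0.0 + 1.0 = 1.0
right = a + (b + c)  # b + c rounds back to -1e16, so the sum is 0.0
print(left, right)   # 1.0 0.0

# A GPU reduction over thousands of threads sums in a schedule-dependent
# order, so run-to-run results can differ in the low bits (or worse).
```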
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44196107">https://news.ycombinator.com/item?id=44196107</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 05 Jun 2025 21:47:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44196107</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44196107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44196107</guid></item><item><title><![CDATA[New comment by ramity in "If AI is so good at coding where are the open source contributions?"]]></title><description><![CDATA[
<p>I think it's fair to say AI generated code isn't visibly making a meaningful impact in open source. Absence of evidence is not evidence of absence, but that shouldn't be interpreted as a defense of orgs or of the fanciful predictions made by tech CEOs. In its current forms, AI feels comparable to piracy, where the real impact is fuzzy and companies claim a number is higher or lower depending on the weather.<p>Yes, open source projects would be the main place where these claims could be publicly verifiable, but established open source projects aren't just code--they're usually complex, organic, ever-shifting organizations of people. I'd argue that interacting with a large group of people who have cultivated their own working process and internal communication patterns is closer to AGI than to a coding assistant, so maybe the goalposts we're using for AI PRs are too grand. I think it's expected to hear claims from within walled gardens, where processes and teams can be upended at will, that AI is making an unverifiable splash, because those are precisely the environments where AI could be the most disruptive.<p>Additionally, I think we're willfully looking in the wrong places when trying to measure AI impact by looking for AI PRs. Programmers don't flag PRs when they use IntelliJ or confer with X flavor of LLM(tm), and expecting mature open source projects to have AI PRs seems as dubious as expecting them to use blockchain or any other technology that could be construed as disruptive. It just may not be compatible or reasonable within their current process. Calculated change is often incremental and boring, where real progress is only felt by looking away.<p>I made a really simple project that automatically forwarded browser console logs to a central server, programmatically pulled the file(s) from the trace, and had an LLM consume a templated prompt + error + file. It would make a PR with what it thought was the correct fix. Sometimes it was helpful. The problem was it needed to do more than code, because the utility of a one shot prompt-to-PR flow is low.</p>
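For concreteness, the templating step of that pipeline looked roughly like the sketch below (all names, paths, and the template itself are illustrative, not the actual project code):

```python
# Hypothetical reconstruction of the log-to-PR pipeline's prompt step:
# a forwarded console error plus the offending source file are templated
# into a single prompt for the LLM.
PROMPT_TEMPLATE = """You are fixing a frontend bug.
Console error:
{error}

Source file ({path}):
{source}

Reply with a unified diff that fixes the error."""

def build_prompt(error: str, path: str, source: str) -> str:
    return PROMPT_TEMPLATE.format(error=error, path=path, source=source)

prompt = build_prompt(
    error="TypeError: Cannot read properties of undefined (reading 'map')",
    path="src/list.js",
    source="items.map(render)",
)
print("TypeError" in prompt)  # True
```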
]]></description><pubDate>Fri, 16 May 2025 07:36:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44002718</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=44002718</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44002718</guid></item><item><title><![CDATA[New comment by ramity in "The great displacement is already well underway?"]]></title><description><![CDATA[
<p>I've been off socials and on forums for 8+ years now for the same reason. I share a similar sentiment to Bizzy's sibling reply. I say these things because lately I've been thinking a lot about dead internet theory and how strongly some believe it.<p>One of the most profound realizations I've had lately is that the perception of the medium of communication itself is a well that can be poisoned with artificial interactions. Major emphasis on perception. The mere presence of the artificial can immediately taint real interactions; you don't need a majority to poison the well.<p>How many spam calls does it take for you to presume spam? How many LinkedIn auto-reply AI comments does it take to presume all comments are AI? How many emails before you immediately presume phishing? How many rage baiting social posts do you need to see before you believe the entire site is composed of synthetic engagement? How many Tinder bots do you need to interact with before you feel the entire app is dead? How many auto-deny job application responses until you assume the next one is a ghost job posting? How many interactions with greedy people does it take to presume that it's human nature?</p>
]]></description><pubDate>Mon, 12 May 2025 21:58:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=43967858</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=43967858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43967858</guid></item><item><title><![CDATA[Romeo and Juliet retold in plastic: 3D-Printable Braille Molds]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.printables.com/model/1251777-romeo-and-juliet-by-william-shakespeare-grade-2-br">https://www.printables.com/model/1251777-romeo-and-juliet-by-william-shakespeare-grade-2-br</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43563432">https://news.ycombinator.com/item?id=43563432</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 03 Apr 2025 00:32:08 +0000</pubDate><link>https://www.printables.com/model/1251777-romeo-and-juliet-by-william-shakespeare-grade-2-br</link><dc:creator>ramity</dc:creator><comments>https://news.ycombinator.com/item?id=43563432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43563432</guid></item></channel></rss>