<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: stncls</title><link>https://news.ycombinator.com/user?id=stncls</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 16:09:09 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=stncls" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by stncls in "Huge Binaries"]]></title><description><![CDATA[
<p>> The simplest solution however is to use -mcmodel=large which changes all the relative CALL instructions to absolute JMP.<p>Makes sense, but in the assembly output just after, there is not a single JMP instruction. Instead, CALL <immediate> is replaced with putting the address in a 64-bit register, then CALL <register>, which makes even more sense. But why mention the JMP thing then? Is it a mistake or am I missing something? (I know some calls are replaced by JMP, but that's done regardless of -mcmodel=large)</p>
]]></description><pubDate>Mon, 29 Dec 2025 08:28:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46418592</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=46418592</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46418592</guid></item><item><title><![CDATA[New comment by stncls in "Ask HN: What's the Best Linux Laptop? (August 2025)"]]></title><description><![CDATA[
<p>I don't know about "best", but I'm very happy with my last few Lenovo ThinkPads (X1 Carbon, nano, some T-series). Before that, I had some Asus Zenbook, and everything worked as well. All had 4-15hr batteries, more than I expected in their respective eras. I've heard Dell XPS were good too.<p>There is no black magic. The only trick is checking the amazing Arch Linux wiki [1]. It will tell you everything you need to know, things like avoiding the recent Intel MIPI webcams (Linux support <i>is</i> coming, but count a couple of years for out-of-the-box).<p>Regarding Desktop Environments, it depends on your taste. I don't enjoy Gnome, but OSX refugees tend to like it. I've used XFCE, LXDE, LXQt, and recently KDE, and I have only good things to say about all. And tiling DE aficionados are spoiled for choice.<p>Plus, exchanging low-level code with Windows (WSL2) and OSX (largely POSIX-compatible) has never been easier. The only remaining issue is if you go down to assembly (AArch64-vs-x86_64), which only crops up if you depend on proprietary applications.<p>To summarize, the picture is quite good nowadays, has been for a while, and is only improving. Of course, the one problem is closed-source apps. If you rely on one, read up on its quirks. Otherwise, the Arch Linux wiki for hw support, and you're golden.<p>[1] <a href="https://wiki.archlinux.org" rel="nofollow">https://wiki.archlinux.org</a></p>
]]></description><pubDate>Sat, 02 Aug 2025 13:58:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44767685</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=44767685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44767685</guid></item><item><title><![CDATA[New comment by stncls in "Ask HN: Anyone interested in improving scheduling?"]]></title><description><![CDATA[
<p>A good chunk of the day-to-day work of "operations research" consulting shops is scheduling.<p>There is also dedicated software like Timefold, formerly RedHat OptaPlanner [0].<p>[0] <a href="https://timefold.ai/blog/optaplanner-fork" rel="nofollow">https://timefold.ai/blog/optaplanner-fork</a></p>
]]></description><pubDate>Sun, 06 Jul 2025 04:36:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44477878</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=44477878</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44477878</guid></item><item><title><![CDATA[New comment by stncls in "Linear Programming for Fun and Profit"]]></title><description><![CDATA[
<p>You're right, but it's very subtle and complicated.<p>In theory, the simplex method is not known to be polynomial-time, and it is likely that indeed it is not. Some variants of the simplex method have been proven to take exponential time in some worst cases (Klee-Minty cubes). What solvers implement could be said to be one such variant ("steepest-edge pricing"), but because solvers have tons of heuristics and engineering, and also because they work in floating-point arithmetic... it's difficult to tell for sure.<p>In practice, the main alternative is interior-point (a.k.a. barrier) methods which, contrary to the simplex method, are polynomial-time in theory. They are usually (but not always) faster, and their advantage tends to increase for larger instances. The problem is that they are converging numerical algorithms, and with floating-point arithmetic they never quite 100% converge. By contrast, the simplex method is a combinatorial algorithm, and the numerical errors it faces should not accumulate. As a result, good solvers perform "crossover" after interior-point methods, to get a numerically clean optimal solution. Crossover is a combinatorial algorithm, like the simplex method. Unlike the simplex method though, crossover is polynomial-time in theory (strongly so, even). However, here, theory and practice diverge a bit, and crossover implementations are essentially simplified simplex methods. As a result, in my opinion, calling interior-point + crossover polynomial-time would be a stretch.<p>Still, for large problems, we can expect interior-point + crossover to be faster than the simplex method, by a factor of 2x to 10x.<p>There are also first-order methods, which are getting much attention lately. However, in my experience, you should only use them if you are willing to tolerate huge constraint violations in the solution, and wildly suboptimal solutions. Their main use case is when other solvers need too much RAM to solve your instance.</p>
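For intuition on what the simplex method walks over: a bounded, feasible LP attains its optimum at a vertex of the feasible polyhedron. A toy sketch (not the simplex method itself, and nothing like what real solvers do) that enumerates the vertices of a made-up 2-variable LP by intersecting constraint pairs and keeping the best feasible point:

```python
from itertools import combinations

# Toy LP (made-up data): maximize 2x + y subject to
#   x <= 2,  y <= 3,  x + y <= 4,  x >= 0,  y >= 0
# written as rows a . (x, y) <= b:
A = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1)]
b = [2, 3, 4, 0, 0]
c = (2, 1)

def vertices(A, b):
    """Intersect every pair of constraint lines; keep the feasible points."""
    for i, j in combinations(range(len(A)), 2):
        (a1, a2), (a3, a4) = A[i], A[j]
        det = a1 * a4 - a2 * a3
        if abs(det) < 1e-12:
            continue  # parallel lines: no vertex here
        x = (b[i] * a4 - a2 * b[j]) / det  # Cramer's rule
        y = (a1 * b[j] - b[i] * a3) / det
        if all(ax * x + ay * y <= bk + 1e-9 for (ax, ay), bk in zip(A, b)):
            yield (x, y)

# The optimum is attained at one of the (finitely many) vertices.
best = max(vertices(A, b), key=lambda v: c[0] * v[0] + c[1] * v[1])
print(best)  # (2.0, 2.0)
```

The simplex method avoids this exponential enumeration by walking from vertex to neighboring vertex, improving the objective at each step; interior-point methods instead cut through the interior of the polyhedron.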
]]></description><pubDate>Fri, 09 May 2025 15:10:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43937670</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=43937670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43937670</guid></item><item><title><![CDATA[New comment by stncls in "Linear Programming for Fun and Profit"]]></title><description><![CDATA[
<p>If this is business critical for you, you may want to switch to a faster solver. Glop is very nice, but it would be reasonable to expect a commercial solver (Gurobi, Xpress, COPT) to be 60x faster [1]. By the same measure, the best open source solvers (CLP, HiGHS) are 2-3x faster than Glop.<p>Actually, the commercial solvers are so fast that I would not be surprised if they solved the IP problem as fast as Glop solves the LP. (Yes, the theory says it is impossible, but in practice it happens.) The cost of a commercial solver is 10k to 50k per license.<p>[1] ... this 60x number has very high variance depending on the type of problem, but it is not taken out of nowhere, it comes from the Mittelmann LP benchmarks <a href="https://plato.asu.edu/ftp/lpopt.html" rel="nofollow">https://plato.asu.edu/ftp/lpopt.html</a>        There are also benchmarks for other types of problems, including IP, see the whole list here: <a href="https://plato.asu.edu/bench.html" rel="nofollow">https://plato.asu.edu/bench.html</a></p>
]]></description><pubDate>Fri, 09 May 2025 14:48:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43937395</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=43937395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43937395</guid></item><item><title><![CDATA[New comment by stncls in "Why cryptography is not based on NP-complete problems"]]></title><description><![CDATA[
<p>> Instead, cryptography needs problems that are hard in the average case, like the RSA problem, the discrete logarithm for elliptic curves, and the shortest vector problem for lattices. We don’t technically know whether these are NP-complete<p>But we <i>do</i> know that the shortest vector problem is NP-hard (in Linf norm), and so is its decision version [1]. We don't have a reduction in the other direction, but we know that SVP is as-hard-as-NP-complete-or-harder.<p>This goes against the general argument of the article, but only weakly so, because I think lattice-based systems are less broadly deployed than elliptic curves or factoring (not a crypto person, though).<p>[1] H. Bennett, "The Complexity of the Shortest Vector Problem.", 2022 <a href="https://simons.berkeley.edu/sites/default/files/docs/21271/svp-simons.pdf" rel="nofollow">https://simons.berkeley.edu/sites/default/files/docs/21271/s...</a></p>
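For reference, the Linf-norm problem referred to above can be stated as follows (notation assumed here, not taken from the linked slides):

```latex
% SVP in the l_infinity norm, the variant known to be NP-hard:
% given a lattice basis B \in \mathbb{Z}^{n \times n}, find
\min_{x \in \mathbb{Z}^n \setminus \{0\}} \; \lVert B x \rVert_{\infty}
```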
]]></description><pubDate>Thu, 13 Feb 2025 10:00:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43034435</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=43034435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43034435</guid></item><item><title><![CDATA[New comment by stncls in "Elementary Functions and Not Following IEEE754 Floating-Point Standard (2020)"]]></title><description><![CDATA[
<p>Floating-point is hard, and standards seem like they cater to lawyers rather than devs. But a few things are slightly misleading in the post.<p>1. It correctly quotes the IEEE754-2008 standard:<p>> A conforming function shall return results correctly rounded for the applicable rounding direction for all operands in its domain<p>and even points out that the citation is from "Section 9. *Recommended* operations" (emphasis mine). But then it goes on to describe this as a "*requirement*" of the standard (it is not). This is not just a typo: the post actually implies that implementations not following this recommendation are wrong:<p>> [...] none of the major mathematical libraries that are used throughout computing are actually rounding correctly as demanded in any version of IEEE 754 after the original 1985 release.<p>or:<p>> [...] ranging from benign disregard for the standard to placing the burden of correctness on the user who should know that the functions are wrong: “It is following the specification people believe it’s following.”<p>As far as I know, IEEE754 mandates correct rounding for elementary operations and sqrt(), and only for those.<p>2. All the mentions of 1 ULP in the beginning are a red herring. As the article itself mentions later, the standard never cares about 1 ULP. Some people do care about 1 ULP, just because it is something that can be achieved at a reasonable cost for transcendentals, so why not do it. But not the standard.<p>3. The author seems to believe that 0.5 ULP would be better than 1 ULP for numerical accuracy reasons:<p>> I was resounding told that the absolute error in the numbers are too small to be a problem. Frankly, I did not believe this.<p>I would personally also tell that to the author. But there is a much more important reason why correct rounding would be a tremendous advantage: reproducibility. There is always only one correct rounding. 
As a consequence, with correct rounding, different implementations return bit-for-bit identical results. The author even mentions falling victim to FP non-reproducibility in another part of the article.<p>4. This last point is excusable because the article is from 2020, but "solving" the fp32 incorrect-rounding problem by using fp64 is naive (not guaranteed to always work, although it will with high probability) and inefficient. It also does not say what to do for fp64. We can do correct rounding <i>much</i> faster now [1, 2]. So much faster that it is getting really close to non-correctly-rounded, so some libm may one day decide to switch to that.<p>[1] <a href="https://github.com/rutgers-apl/rlibm">https://github.com/rutgers-apl/rlibm</a><p>[2] <a href="https://github.com/taschini/crlibm">https://github.com/taschini/crlibm</a></p>
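On point 3, the reproducibility claim is easy to check mechanically: two implementations agree bit-for-bit exactly when their results are 0 ULPs apart. A minimal sketch for positive binary64 values (the standard bit-reinterpretation trick, not specific to any libm):

```python
import math
import struct

def ulp_distance(a: float, b: float) -> int:
    """How many representable binary64 values apart a and b are (positive values only)."""
    bits = lambda x: struct.unpack("<q", struct.pack("<d", x))[0]
    return abs(bits(a) - bits(b))

# With correct rounding there is exactly one acceptable answer, so any two
# implementations agree bit-for-bit: distance 0.  A 1-ULP guarantee allows
# two adjacent answers, so results may differ in the last bit: distance 1.
print(ulp_distance(1.0, 1.0))                  # 0
print(ulp_distance(1.0, 1.0 + math.ulp(1.0)))  # 1
```

This is why a "within 1 ULP" libm can legally return different bits on different platforms, while a correctly-rounded one cannot.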
]]></description><pubDate>Tue, 11 Feb 2025 08:02:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43010201</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=43010201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43010201</guid></item><item><title><![CDATA[I built an AI company to save my open source project]]></title><description><![CDATA[
<p>Article URL: <a href="https://timefold.ai/blog/how-i-built-an-ai-company-to-save-my-open-source-project">https://timefold.ai/blog/how-i-built-an-ai-company-to-save-my-open-source-project</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43006807">https://news.ycombinator.com/item?id=43006807</a></p>
<p>Points: 14</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 10 Feb 2025 23:43:15 +0000</pubDate><link>https://timefold.ai/blog/how-i-built-an-ai-company-to-save-my-open-source-project</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=43006807</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43006807</guid></item><item><title><![CDATA[New comment by stncls in "FreeBSD for hi-fi audio: real-time processing, equalizer, MPD and FFmpeg"]]></title><description><![CDATA[
<p>I'm ready to believe that pipewire is imperfect (although I have personally experienced no crash, and did not have to configure anything), but the sentence from the original post is:<p>> Therefore, I naturally omit the use and configuration of additional audio layers in the form of a Jack server, PulseAudio or the unfortunate PipeWire from RedHat.<p>I cannot understand this sentiment. I have used all three for years each. If I had to qualify one as "unfortunate", it would certainly not be PipeWire (nor would it be Jack for that matter).<p>> performance is not that great compared to pulseaudio and jack ..<p>Jack makes different trade-offs by default, so for some definition of performance, yes, one could see it as superior to PipeWire. But in my experience PipeWire is better than PulseAudio on all axes (at least: latency, CPU usage, resampling quality).</p>
]]></description><pubDate>Thu, 06 Feb 2025 12:34:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42961769</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=42961769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42961769</guid></item><item><title><![CDATA[New comment by stncls in "Capablanca: Minimum Vertex Cover Solver"]]></title><description><![CDATA[
<p>> Complexity<p>> We present a polynomial-time algorithm achieving an approximation ratio below √2 for MVC, providing strong evidence that P = NP by efficiently solving a computationally hard problem with near-optimal solutions.<p>> This result contradicts the Unique Games Conjecture, suggesting that many optimization problems may admit better solutions, revolutionizing theoretical computer science.<p>Some context: The Unique Games Conjecture (UGC) is a very central unsolved problem in theoretical computer science (TCS); it has been open since 2002. It conjectures that, for a series of standard optimization problems including minimum vertex cover, the best known approximation factor (how close you can guarantee to get to optimality with a poly time algorithm) is actually the best <i>possible</i> approximation factor. To disprove the conjecture, one needs a better approximation algorithm for one of those problems. Such an algorithm could be used to solve (to optimality, and in poly time) another set of problems, which are widely believed to be NP-hard. This would <i>not</i> prove P=NP (to be clear: the above quote is not claiming that), but it is true it would revolutionize TCS.<p>The thing is: this is TCS and theoretical complexity. You cannot disprove UGC with just code. You need a mathematical proof. This is not an indictment of the code, which may be very good. But such an enormous claim would require pretty intense scrutiny before it would be accepted as true.</p>
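For concreteness, the best known poly-time approximation factor for minimum vertex cover is essentially 2 (improvable only by lower-order terms), achieved by the classical trick of taking both endpoints of the edges of a maximal matching; this factor of 2 is what UGC conjectures to be essentially optimal. A sketch:

```python
def cover_2approx(edges):
    """Vertex cover via greedy maximal matching: for each edge not yet
    covered, take BOTH endpoints.  At least one endpoint of each matched
    edge must be in any cover, so the result is at most 2x the minimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path a-b-c-d: the minimum cover is {b, c} (size 2); the sketch returns
# all 4 vertices, hitting the worst-case factor of 2 exactly.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
cover = cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
print(sorted(cover))  # ['a', 'b', 'c', 'd']
```

A poly-time algorithm with ratio below √2, as claimed above, would beat this factor and thus contradict UGC, which is exactly why the claim demands a proof rather than code.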
]]></description><pubDate>Fri, 31 Jan 2025 06:54:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=42885234</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=42885234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42885234</guid></item><item><title><![CDATA[Experience writing combinatorics paper in collaboration with LLMs (see Appendix)]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2410.00315">https://arxiv.org/abs/2410.00315</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41745754">https://news.ycombinator.com/item?id=41745754</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 04 Oct 2024 21:27:57 +0000</pubDate><link>https://arxiv.org/abs/2410.00315</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=41745754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41745754</guid></item><item><title><![CDATA[New comment by stncls in "CVE-2024-6409: OpenSSH: Possible remote code execution in privsep child"]]></title><description><![CDATA[
<p>No vulnerability name, no website, concise description, neutral tone, precise list of affected distros (RHEL + derivatives and some EOL Fedoras) and even mention of <i>unaffected</i> distros (current Fedoras), plain admission that no attempt was made to exploit. What a breath of fresh air!<p>(I am only joking of course. As a recovering academic, I understand that researchers need recognition, and I have no right to throw stones -- glass houses and all. Also, this one is really like regreSSHion's little sibling. Still, easily finding the information I needed made me happy.)</p>
]]></description><pubDate>Tue, 09 Jul 2024 15:28:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=40917239</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=40917239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40917239</guid></item><item><title><![CDATA[New comment by stncls in "CVE-2024-6409: OpenSSH: Possible remote code execution in privsep child"]]></title><description><![CDATA[
<p>Yes, only RHEL 9 (the current version of RHEL) and its upstreams/downstreams (CentOS Stream 9, Rocky Linux 9, Alma Linux 9,...).<p>Also affected: Fedora 37, 36 and possibly 35, which are all end-of-life (since December 2023 in the case of Fedora 37).<p><i>Not</i> affected: Fedora 38 (also EOL), 39 (maintained) and 40 (current).</p>
]]></description><pubDate>Tue, 09 Jul 2024 15:19:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=40917117</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=40917117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40917117</guid></item><item><title><![CDATA[New comment by stncls in "AI driven 3D bin packer written in F#"]]></title><description><![CDATA[
<p>So the AI part is because the customers state their problem in plain English? In your experience, do they prefer that rather than a GUI?<p>Incidentally, what method do you use to solve the problem once the LLM gives you a formulation?  Or is the LLM itself tasked with solving the problem directly?</p>
]]></description><pubDate>Fri, 28 Jun 2024 12:54:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=40820158</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=40820158</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40820158</guid></item><item><title><![CDATA[New comment by stncls in "In defence of swap: common misconceptions (2018)"]]></title><description><![CDATA[
<p>The article is from 2018 and has had interesting discussion here before [1].<p>My conclusion is: In production, in a datacenter, when code is stable and compute-per-dollar efficiency matters? Yeah, sure, I can believe that swap makes sense.<p>On a dev machine? Not on mine for sure. If you think swap is a net positive on a dev machine, then try pasting the following (wrong) code in a bash prompt on Linux:<p><pre><code>  echo '
  #include <stdlib.h>
  #include <stdio.h>
  #include <string.h>
  
  int main()
  {
    for (int i = 0; ; i++) {
      char *mem = malloc(1 << 30);
      if (mem == NULL)
        return 0;
      memset(mem, 42, 1 << 30);
      printf("%5d GiB\n", i);
    }
  }
  ' | cc -O3 -o crash_my_laptop -x c -
  ./crash_my_laptop
</code></pre>
We can discuss the results in a couple of hours, when you recover control of your machine.
(Note: with no swap and no z-ram, it crashes after 10 seconds; no side effects.)<p>[1]<p><a href="https://news.ycombinator.com/item?id=40582029">https://news.ycombinator.com/item?id=40582029</a><p><a href="https://news.ycombinator.com/item?id=39650114">https://news.ycombinator.com/item?id=39650114</a><p><a href="https://news.ycombinator.com/item?id=38263901">https://news.ycombinator.com/item?id=38263901</a><p><a href="https://news.ycombinator.com/item?id=31104126">https://news.ycombinator.com/item?id=31104126</a><p><a href="https://news.ycombinator.com/item?id=29159755">https://news.ycombinator.com/item?id=29159755</a><p><a href="https://news.ycombinator.com/item?id=23455051">https://news.ycombinator.com/item?id=23455051</a><p><a href="https://news.ycombinator.com/item?id=16145294">https://news.ycombinator.com/item?id=16145294</a><p><a href="https://news.ycombinator.com/item?id=16109058">https://news.ycombinator.com/item?id=16109058</a></p>
]]></description><pubDate>Wed, 05 Jun 2024 08:55:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=40582814</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=40582814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40582814</guid></item><item><title><![CDATA[Open Source Maintenance]]></title><description><![CDATA[
<p>Article URL: <a href="https://anteru.net/blog/2024/open-source-maintenance/">https://anteru.net/blog/2024/open-source-maintenance/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39941271">https://news.ycombinator.com/item?id=39941271</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 05 Apr 2024 11:55:13 +0000</pubDate><link>https://anteru.net/blog/2024/open-source-maintenance/</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=39941271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39941271</guid></item><item><title><![CDATA[New comment by stncls in "Video ad for World Hearing Day has clear GenAI artifacts [video]"]]></title><description><![CDATA[
<p>The music is off as well (second half of the piece, I can't quite put my finger on it... Brass section may be out of tune?), but that part sounds more like a poor rendition by humans than something a computer would do.</p>
]]></description><pubDate>Wed, 27 Mar 2024 16:15:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=39841147</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=39841147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39841147</guid></item><item><title><![CDATA[Video ad for World Hearing Day has clear GenAI artifacts [video]]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.youtube.com/watch?v=GCjY_UOKT-c">https://www.youtube.com/watch?v=GCjY_UOKT-c</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39841091">https://news.ycombinator.com/item?id=39841091</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 27 Mar 2024 16:11:09 +0000</pubDate><link>https://www.youtube.com/watch?v=GCjY_UOKT-c</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=39841091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39841091</guid></item><item><title><![CDATA[New comment by stncls in "6.2 GHz Intel Core I9-14900KS Review"]]></title><description><![CDATA[
<p>In many cases yes. Some single-threaded workloads are very sensitive to e.g. memory latency. They end up spending most of their time with the CPU waiting on a cache-missed memory load to arrive.<p>Typically, those would be sequential algorithms with large memory needs and very random (think: hash table) memory accesses.<p>Examples: SAT solvers, anything relying on sparse linear algebra</p>
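The effect is easy to reproduce with a dependent-load ("pointer chasing") loop, where each access depends on the result of the previous one, so the CPU cannot overlap or prefetch the misses. A Python sketch (interpreter overhead blunts the gap compared to C, but the access pattern is the same):

```python
import random
import time

N = 500_000

# Build a single random cycle over all indices: following it touches every
# element once with no spatial locality, so each step is a likely cache miss.
order = list(range(N))
random.shuffle(order)
random_next = [0] * N
for k in range(N):
    random_next[order[k]] = order[(k + 1) % N]

sequential_next = [(i + 1) % N for i in range(N)]  # perfect locality

def chase(next_idx, start=0):
    """Dependent-load chain: step i+1 cannot issue before step i returns."""
    i = start
    for _ in range(len(next_idx)):
        i = next_idx[i]
    return i  # after N steps around an N-cycle, we are back at `start`

for name, chain in [("sequential", sequential_next), ("random", random_next)]:
    t0 = time.perf_counter()
    end = chase(chain)
    print(f"{name}: {time.perf_counter() - t0:.3f}s (ended at index {end})")
```

On a typical machine the random chain is noticeably slower even from Python; in C the gap is often an order of magnitude, which is why memory latency, not clock speed, dominates these workloads.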
]]></description><pubDate>Mon, 18 Mar 2024 07:00:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=39741076</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=39741076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39741076</guid></item><item><title><![CDATA[New comment by stncls in "Ask HN: Do you also marvel at the complexity of everyday objects?"]]></title><description><![CDATA[
<p>Tap water. I can't stop marveling at the fact that we have (mostly) unlimited, clean, drinkable water on demand and virtually for free.<p>But also many other things, many of which others have mentioned here (cars, mass housing, garbage collection, electronics).<p>So much so that I feel frustrated that my job plays no part in making any of these fascinating things possible for society; and I have decided that my next career move will have to make me part of the supply chain of one such thing, even if I am just the tiniest of links.</p>
]]></description><pubDate>Mon, 18 Mar 2024 06:51:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=39741038</link><dc:creator>stncls</dc:creator><comments>https://news.ycombinator.com/item?id=39741038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39741038</guid></item></channel></rss>