<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: WCSTombs</title><link>https://news.ycombinator.com/user?id=WCSTombs</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 10:04:52 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=WCSTombs" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by WCSTombs in "Framework: 'personal computing as we know it is dead'"]]></title><description><![CDATA[
<p>This is the actual quote: "There is a very real scenario in which personal computing as we know it is dead." He went on to say this, as reported in another article [1]:<p>> Still, Framework said that it will not take this lying down. Its event announcement also doubled as its own manifesto, saying that "as long as there is a person in the world who still wants to own their means of computation, we will be here to build the hardware that enables it," and that it "will always be fighting for a future where you can own everything and be free."<p>[1] <a href="https://www.tomshardware.com/tech-industry/big-tech/framework-founder-says-that-personal-computing-as-we-know-it-is-dead-vows-to-keep-building-computers-that-you-can-own-at-the-deepest-level" rel="nofollow">https://www.tomshardware.com/tech-industry/big-tech/framewor...</a></p>
]]></description><pubDate>Sat, 11 Apr 2026 03:20:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47727004</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47727004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47727004</guid></item><item><title><![CDATA[New comment by WCSTombs in "Netflix Prices Went Up Again – I Bought a DVD Player Instead"]]></title><description><![CDATA[
<p>In my experience, there can be pretty high contention for certain items, so you need to be on the ball or use the "place hold" feature judiciously. Yeah, people are using the service.</p>
]]></description><pubDate>Thu, 09 Apr 2026 22:03:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47710840</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47710840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47710840</guid></item><item><title><![CDATA[New comment by WCSTombs in "Netflix Prices Went Up Again – I Bought a DVD Player Instead"]]></title><description><![CDATA[
<p>Public libraries can also be a great source for DVDs and Blu-Rays!</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:25:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709387</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47709387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709387</guid></item><item><title><![CDATA[New comment by WCSTombs in "Ask HN: Is this type of person rare?"]]></title><description><![CDATA[
<p>Engineers are fundamentally pragmatic people. We're problem solvers. Someone who <i>only</i> cares about ingenuity and craft would be a shitty engineer, because that perspective is entirely inward-facing and not directed at the problems at hand. I think this is fundamentally my problem with your question, and I think if you framed the question slightly differently with this in mind, it would make more sense.<p>To attempt to answer it, I think there are many engineers who care deeply about creativity, ingenuity, and craft, because those are key qualities (among others) needed to solve real-world problems. The question you hinted at is whether LLMs are compatible with that, and I think more people are asking those types of questions now.</p>
]]></description><pubDate>Wed, 01 Apr 2026 23:34:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608054</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47608054</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608054</guid></item><item><title><![CDATA[New comment by WCSTombs in "Do your own writing"]]></title><description><![CDATA[
<p>Absolutely, the whole point of the rubber duck is that it's inanimate. The act of talking to the rubber duck makes you first of all describe your problem in words, and secondly hear (or read) it back and reprocess it in a slightly different way. It's a completely free way to use more parts of your brain when you need to.<p>LLMs are a <i>non-free</i> way for you to make use of <i>less</i> of your brain. It seems to me that these are not the same thing.</p>
]]></description><pubDate>Tue, 31 Mar 2026 00:02:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47581176</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47581176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47581176</guid></item><item><title><![CDATA[New comment by WCSTombs in "Do your own writing"]]></title><description><![CDATA[
<p>I never learned a subject faster than when I was suddenly forced to teach it!</p>
]]></description><pubDate>Mon, 30 Mar 2026 23:51:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47581108</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47581108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47581108</guid></item><item><title><![CDATA[New comment by WCSTombs in "In math, rigor is vital, but are digitized proofs taking it too far?"]]></title><description><![CDATA[
<p>I think there are some theories that the universe is fundamentally discrete at some level below our current ability to measure, but to my knowledge none of them is widely accepted.</p>
]]></description><pubDate>Mon, 30 Mar 2026 23:46:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47581085</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47581085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47581085</guid></item><item><title><![CDATA[New comment by WCSTombs in "In math, rigor is vital, but are digitized proofs taking it too far?"]]></title><description><![CDATA[
<p>It's easy to forget, as we all use digital tools in our day-to-day lives, that the world is fundamentally analog, and there's no way to escape that. Everyone trying to tell you otherwise is just selling snake oil, <i>with one notable exception</i>, which is mathematical rigor in proofs. It's understood now that a rigorous proof in math is exactly one that, in principle, can be digitized and checked automatically. Those are simply the same concept, so introducing a computer there is really a perfect fit of tool and purpose. If we can't use computers to automate the checking of mathematical proofs, then why have computers at all? It's the only serious thing people do that a computer can be literally perfect at!<p>To be clear, there's much more to math than writing down and checking proofs. Some of the most important contributions to math have been simply figuring out the right questions to ask, and also figuring out the useful abstractions. Those are both firmly on the "analog" side of math, and they are every bit as important as writing the proofs. But to say that we have this huge body of rigorous argumentation in math, and then to finally do the work of checking it formally is "taking it too far," is a really bewildering take to me.<p>No, I don't think formalizing proofs in Lean or other proof systems should dominate the practice of math, and no, I don't think every mathematician should have to write formal proofs. Is that really where we're heading, though? I highly doubt it. The article worries about monoculture. It's a legitimate concern, but probably less of one in math than in many other places, since in my experience math people are pretty independent thinkers, and I don't see that changing any time soon.<p>Anyway, the conclusion from all this is that the improved ability for mathematicians to rely on automated tools to verify mathematical reasoning would be a great asset. In my opinion the outcomes of that eventuality would be overwhelmingly good.</p>
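<p>As a tiny illustration of what "digitized and checked automatically" means here (my own example, not from the article), this is a one-line theorem that Lean 4's kernel verifies mechanically:</p>

```lean
-- Commutativity of addition on the natural numbers.
-- `Nat.add_comm` is the core-library lemma; the Lean kernel checks
-- that this term really is a proof of the stated proposition.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```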
]]></description><pubDate>Mon, 30 Mar 2026 22:39:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580602</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47580602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580602</guid></item><item><title><![CDATA[New comment by WCSTombs in "The future of version control"]]></title><description><![CDATA[
<p>For the conflicts, note that in Git you can do<p><pre><code>    git config --global merge.conflictstyle diff3
</code></pre>
to get something like what is shown in the article.</p>
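<p>With that setting, a conflict hunk gains a middle section showing the merge base, so you can see what each side changed (a schematic example, not from any real repository):<p><pre><code>    <<<<<<< HEAD
    our version of the line
    ||||||| merged common ancestor
    the line as it was in the merge base
    =======
    their version of the line
    >>>>>>> other-branch
</code></pre></p>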
]]></description><pubDate>Sun, 22 Mar 2026 17:36:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47479978</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47479978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47479978</guid></item><item><title><![CDATA[New comment by WCSTombs in "Reddit New Post 4"]]></title><description><![CDATA[
<p>This is a shockingly low-effort attempt even for a crackpot math paper. They attempt a "reduction" from SAT to 2-SAT, but it's very easy to see that the "reduced" 2-SAT instance does not preserve the satisfiability of the original formula; in fact, it is always satisfiable as long as the original formula didn't already contain 2-literal clauses of its own.</p>
]]></description><pubDate>Thu, 19 Mar 2026 04:34:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47434980</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47434980</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47434980</guid></item><item><title><![CDATA[New comment by WCSTombs in "The math that explains why bell curves are everywhere"]]></title><description><![CDATA[
<p>It's not super hard to prove the central limit theorem, and you gave the flavor of one such proof, but it's still a bit much for the likely audience of this article, who can't be assumed to have the math background needed to appreciate the argument. And I think you're on the right track with the comment about stable distributions.</p>
]]></description><pubDate>Thu, 19 Mar 2026 03:29:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47434550</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47434550</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47434550</guid></item><item><title><![CDATA[New comment by WCSTombs in "The math that explains why bell curves are everywhere"]]></title><description><![CDATA[
<p>It's not a bad article, but I have to point something out:<p>> <i>Laplace distilled this structure into a simple formula, the one that would later be known as the central limit theorem. No matter how irregular a random process is, even if it’s impossible to model, the average of many outcomes has the distribution that it describes. “It’s really powerful, because it means we don’t need to actually care what is the distribution of the things that got averaged,” Witten said. “All that matters is that the average itself is going to follow a normal distribution.”</i><p>This is not really true, because the central limit theorem requires a huge assumption: that the random process has finite variance. I believe that distributions that don't satisfy that assumption, which we can call <i>heavy-tailed distributions</i>, are much more common in the real world than this discussion suggests. Pointing out that infinities don't exist in the real world is also missing the point, since a distribution that just has a huge but finite variance will require a correspondingly huge number of samples to start behaving like a normal distribution.<p>Apart from the universality, the normal distribution has another big practical advantage over others: it leads to tractable mathematical models. To go into slightly more detail, in mathematical modeling, often you define some mathematical model that approximates a real-world phenomenon, but which has some unknown parameters, and you want to determine those parameters in order to complete the model. To do that, you take measurements of the real phenomenon, and you find values for the parameters that best fit the measurements. Crucially, the measurements don't need to be exact, but the distribution of the measurement errors is important. If you assume the errors are independent and normally distributed, then you get a relatively nice optimization problem compared to most other things.
This is, in my opinion, about as responsible for the ubiquity of normal distributions in mathematical modeling as the universality from the central limit theorem.<p>However, as most people who solve such problems realize, sometimes we have to contend with these things called "outliers," which by another name are really samples from a heavy-tailed distribution. If you don't account for them somehow, then Bad Things(TM) are likely to happen. So either we try to detect and exclude them, or we replace the normal distribution with something that matches the real data a bit better.<p>Anyway, to connect this all back to the central limit theorem, it's probably fair to say that measurement errors tend to be the combined result of many tiny unrelated effects, but the existence of outliers is pretty strong evidence that some of those effects are heavy-tailed, and thus we can't rely on the central limit theorem giving us a normal distribution.</p>
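<p>A quick standard-library sketch of the finite-variance point (my own illustration, not from the article): sample means of a finite-variance distribution concentrate as the CLT promises, while sample means of a heavy-tailed one never concentrate at all.</p>

```python
import math
import random
import statistics

random.seed(0)

def cauchy():
    # Standard Cauchy draw via inverse transform: no mean, infinite
    # variance -- the classic heavy-tailed distribution.
    return math.tan(math.pi * (random.random() - 0.5))

def mean_of(draw, n):
    # Average of n independent draws from `draw`.
    return sum(draw() for _ in range(n)) / n

# 1000 sample means, each averaging n = 1000 draws.
uniform_means = [mean_of(random.random, 1000) for _ in range(1000)]
cauchy_means = [mean_of(cauchy, 1000) for _ in range(1000)]

# Uniform(0,1) has finite variance, so its sample means cluster
# tightly around 0.5. The mean of n Cauchy draws is itself standard
# Cauchy for every n, so its spread never shrinks.
print(statistics.stdev(uniform_means))  # small
print(statistics.stdev(cauchy_means))   # far larger, outlier-driven
```

<p>Averaging more draws per mean makes the first number shrink like 1/sqrt(n) but leaves the second one just as wild.</p>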
]]></description><pubDate>Thu, 19 Mar 2026 03:17:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47434469</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47434469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47434469</guid></item><item><title><![CDATA[New comment by WCSTombs in "Ask HN: Is it worth avoiding AI while making a game?"]]></title><description><![CDATA[
<p>> <i>The Steam label, maybe it means something now, but longer I think it fades.</i><p>It might fade, but it will take a while. You need a generation of gamers to grow up in a world where AI-generated content is normalized and then become old enough to start driving these trends. It could actually happen in as little as ten years or so, but it also might never become fully normalized, which I think is more likely.</p>
]]></description><pubDate>Wed, 25 Feb 2026 00:30:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47145648</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47145648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47145648</guid></item><item><title><![CDATA[New comment by WCSTombs in "Ask HN: Is it worth avoiding AI while making a game?"]]></title><description><![CDATA[
<p>Yeah, using generative AI to boost productivity (e.g., with coding assistants) and using it to literally generate artistic assets for the game are very different propositions. Steam's AI tag also very clearly distinguishes between the two.</p>
]]></description><pubDate>Wed, 25 Feb 2026 00:22:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47145573</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47145573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47145573</guid></item><item><title><![CDATA[New comment by WCSTombs in "Ask HN: Is it worth avoiding AI while making a game?"]]></title><description><![CDATA[
<p>I 100% recommend that you avoid AI-generated assets in your game. The stigma around AI-generated assets is very real, and not going away soon. Although people might enjoy using AI for their own purposes, by and large they don't want to be subjected to other people's use of it. Moreover, while I don't have the data to back this up, I would have to think that among the segment of the population that plays indie games, the stigma around generative AI is even greater.<p>In the second part of the question you asked if you should just learn all of the skills...buddy, does that question not answer itself? Of course you should learn all of the skills. Obviously that's much easier said than done, but TBH I think the quality bar to producing something viable is not super high, so as long as you're not a perfectionist, you can probably do it.<p>Since I could be labeled as an "AI hater" based on those comments, I want to be clear that I'm saying all this to keep you from falling into a trap and not to further my own agenda. The generative AI route is not a magic shortcut to success, although it is being aggressively marketed as such. The shortcut only seems to lead to success <i>if you ignore the fact that people don't want to be subjected to other people's AI content.</i></p>
]]></description><pubDate>Wed, 25 Feb 2026 00:20:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47145544</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47145544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47145544</guid></item><item><title><![CDATA[New comment by WCSTombs in "Total languages do not escape the halting problem – a trinary proof sketch"]]></title><description><![CDATA[
<p>Not only is this just a random article from the internet, as opposed to something peer-reviewed, but more importantly, nowhere does it even attempt to claim that the mere fact of a program terminating implies its suitability in a safety context.</p>
]]></description><pubDate>Sun, 22 Feb 2026 11:19:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47110114</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47110114</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47110114</guid></item><item><title><![CDATA[New comment by WCSTombs in "Total languages do not escape the halting problem – a trinary proof sketch"]]></title><description><![CDATA[
<p>First, who is saying "termination implies safety"? There needs to be a citation for that, so we can know what specific claim is supposedly being refuted here.<p>Second, Rice's theorem states that no nontrivial property on the set of partial recursive functions is decidable. However, there are subsets of the set of all recursive functions that do have decidable properties, and it's pretty trivial to cook some of them up. Since some of these sub-languages also consist only of total functions, there are "total languages" for which the analogous statement of Rice's theorem is false. To fix this we would need to choose a specific total language. There could be some interesting ones for which the analogous statement of Rice's theorem still holds, but I'm not an expert on that.</p>
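<p>To make the second point concrete, here's a toy "total language" of my own (purely illustrative): every program is a polynomial with integer coefficients, so every program halts on every input, yet a nontrivial semantic property ("computes the zero function") is decidable, which a naive transplant of Rice's theorem would forbid.</p>

```python
def run(program, x):
    # A "program" is a coefficient list [c0, c1, c2, ...] denoting the
    # total function x -> c0 + c1*x + c2*x**2 + ...; it always halts.
    return sum(c * x**i for i, c in enumerate(program))

def is_zero_function(program):
    # Decidable semantic property: an integer polynomial computes the
    # zero function iff every coefficient is zero.
    return all(c == 0 for c in program)

print(is_zero_function([0, 0, 0]))  # True
print(is_zero_function([1, -1]))    # False: x -> 1 - x
print(run([1, -1], 1))              # 0 at x = 1, but not everywhere
```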
]]></description><pubDate>Fri, 20 Feb 2026 23:26:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095499</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47095499</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095499</guid></item><item><title><![CDATA[New comment by WCSTombs in "Closing this as we are no longer pursuing Swift adoption"]]></title><description><![CDATA[
<p>That's interesting, what happened? They don't explain it there.<p>For the record, I don't have a dog in this fight. As long as it runs on Linux, I'm willing to test drive it when it's ready.</p>
]]></description><pubDate>Wed, 18 Feb 2026 23:26:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47067830</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47067830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47067830</guid></item><item><title><![CDATA[New comment by WCSTombs in "Polis: Open-source platform for large-scale civic deliberation"]]></title><description><![CDATA[
<p>The name must be taken from Greek.<p><a href="https://en.wikipedia.org/wiki/Polis" rel="nofollow">https://en.wikipedia.org/wiki/Polis</a></p>
]]></description><pubDate>Fri, 13 Feb 2026 08:58:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47000562</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=47000562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47000562</guid></item><item><title><![CDATA[New comment by WCSTombs in "Show HN: Purple Word – A new word puzzle game"]]></title><description><![CDATA[
<p>How are you getting the 170,000 number? I did a quick search and found this quote from merriam-webster.com [1]:<p>> Webster's Third New International Dictionary, Unabridged, together with its 1993 Addenda Section, includes some 470,000 entries. The Oxford English Dictionary, Second Edition, reports that it includes a similar number.<p>[1] <a href="https://www.merriam-webster.com/help/faq-how-many-english-words" rel="nofollow">https://www.merriam-webster.com/help/faq-how-many-english-wo...</a></p>
]]></description><pubDate>Fri, 06 Feb 2026 10:25:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46911161</link><dc:creator>WCSTombs</dc:creator><comments>https://news.ycombinator.com/item?id=46911161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46911161</guid></item></channel></rss>