<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Misdicorl</title><link>https://news.ycombinator.com/user?id=Misdicorl</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 11:38:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Misdicorl" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Misdicorl in "Using PostgreSQL as a Dead Letter Queue for Event-Driven Systems"]]></title><description><![CDATA[
<p>Your place of last resort with Kafka is simply to replay the message back to the same Kafka topic, since you know it's up. In a simple single-consumer setup, just put a retry count on the message and increment it to get monitoring/alerting/etc. Multi-consumer? Put an enqueue source tag on it and only process the messages tagged for you. This won't scale to infinity, but it scales really, really far for really, really cheap.</p>
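A minimal sketch of the retry-count / source-tag idea, with the Kafka client elided: messages are plain dicts, and the names (`MAX_RETRIES`, the `handler`/`republish`/`alert` callbacks) are illustrative, not any real API.

```python
MAX_RETRIES = 5  # illustrative threshold; pick per workload

def process(msg, my_tag, handler, republish, alert):
    """Handle one message; on failure, replay it to the same topic."""
    # Multi-consumer: skip messages tagged for a different consumer.
    if msg.get("source_tag") not in (None, my_tag):
        return "skipped"
    try:
        handler(msg["payload"])
        return "ok"
    except Exception:
        retries = msg.get("retry_count", 0) + 1
        if retries > MAX_RETRIES:
            alert(msg)  # hand off to monitoring/alerting instead of retrying forever
            return "alerted"
        republish({**msg, "retry_count": retries, "source_tag": my_tag})
        return "requeued"
```

The retry count doubles as a metric: alert on high counts and you get visibility into stuck messages without any extra infrastructure.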
]]></description><pubDate>Sun, 25 Jan 2026 19:03:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46757061</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=46757061</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46757061</guid></item><item><title><![CDATA[New comment by Misdicorl in "Return of wolves to Yellowstone has led to a surge in aspen trees"]]></title><description><![CDATA[
<p>One way to measure this that isn't a moral judgement is the ecological depth of an environment. If one part of the system is destroyed (e.g. a blight on plant A), how devastating to the rest of the system will that be?<p>One of the hallmarks of human-engineered environments is how shallow and fragile they are. Changes like the reintroduction of wolves are "good" because they give us deeper and more resilient environments.</p>
]]></description><pubDate>Sun, 27 Jul 2025 15:57:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44702235</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=44702235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44702235</guid></item><item><title><![CDATA[New comment by Misdicorl in "Even Tesla's Insurance Arm Is Getting Wrecked"]]></title><description><![CDATA[
<p>The story I've heard is that General Motors' downfall began when they stopped writing loans for other car manufacturers. They used to underwrite basically every car loan in the US.<p>I've never looked into how truthful it is, but it smacks of idiotic/arrogant executive tropes so well I almost don't want to discover it's false.</p>
]]></description><pubDate>Sat, 10 May 2025 19:38:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43948273</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43948273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43948273</guid></item><item><title><![CDATA[New comment by Misdicorl in "Performance optimization is hard because it's fundamentally a brute-force task"]]></title><description><![CDATA[
<p>I'll throw my hat in the ring for "disagree". Few kinds of work have such clear and unambiguous results as optimization work (it's now X% faster!). Few kinds of work have such incredible and detailed tools to guide your hand in finding where to invest your effort (look at <i>this</i> hot loop!).<p>The fact that <i>sometimes</i> optimization work is tricky or requires some pre-thinking, or is even <i>gasp</i> counter-intuitive, is such a hilarious way to say "this is hard". That's just table stakes for so-so-so much work.<p>Edit: Heck, even deciding whether to prioritize optimization work or feature work is usually a harder problem than the actual optimization work itself.</p>
]]></description><pubDate>Tue, 29 Apr 2025 17:19:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=43835444</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43835444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43835444</guid></item><item><title><![CDATA[New comment by Misdicorl in "ICE Deports 3 U.S. Citizen Children Held Incommunicado Prior to the Deportation"]]></title><description><![CDATA[
<p>Sometimes the vibes are wrong, and things go haywire. This is why zero tolerance policies have to be instituted in schools. That doesn't mean the general idea is wrong. Strict adherence to written law will always fail justice. The world is too nuanced and too fractal to handle every edge case well.</p>
]]></description><pubDate>Sat, 26 Apr 2025 18:43:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43806093</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43806093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43806093</guid></item><item><title><![CDATA[New comment by Misdicorl in "Watching o3 guess a photo's location is surreal, dystopian and entertaining"]]></title><description><![CDATA[
<p>Would be really interesting to see what it does with clearly <i>wrong</i> EXIF data</p>
]]></description><pubDate>Sat, 26 Apr 2025 18:31:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=43806010</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43806010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43806010</guid></item><item><title><![CDATA[New comment by Misdicorl in "What If We Could Rebuild Kafka from Scratch?"]]></title><description><![CDATA[
<p>Yeah, I'd say Kafka is not a great technology if your median and 99ths (or 999ths, if volume is large enough) are wildly different, which sounds like your situation. I use Kafka in contexts where 99ths going awry usually aren't key-dependent, so I don't have the issues you see.<p>I tend to prefer other queueing mechanisms in those cases, although I still work hard to make 99ths and medians align, as the spread can still cause issues (especially for monitoring).</p>
]]></description><pubDate>Fri, 25 Apr 2025 21:30:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43798720</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43798720</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43798720</guid></item><item><title><![CDATA[New comment by Misdicorl in "What If We Could Rebuild Kafka from Scratch?"]]></title><description><![CDATA[
<p>Follow-on: If you're using Kafka to publish messages to <i>multiple</i> consumers, this is even worse, as now you're infecting every consumer with data processing issues from every other consumer. Bad juju</p>
]]></description><pubDate>Fri, 25 Apr 2025 16:43:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=43795766</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43795766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43795766</guid></item><item><title><![CDATA[New comment by Misdicorl in "What If We Could Rebuild Kafka from Scratch?"]]></title><description><![CDATA[
<p>I suppose it depends on your message volume. To me, processing 100k messages and then getting a page however long later as the broker (or whatever) falls apart sounds much worse than head-of-line blocking and seeing the problem directly in my consumer. If I need to avoid head-of-line blocking, I can build whatever failsafe mechanisms I need for the problematic data and defer to some other queueing system (typically, just add an attempt counter and replay the message to the same Kafka topic, and then, if attempts > X, send it off to wherever).<p>I'd rather debug a worker problem than an infra scaling problem every day of the week and twice on Sundays.</p>
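The attempt-counter escape hatch described above, as a hedged sketch: `MAX_ATTEMPTS` is the "X" in the comment, and `replay`/`send_elsewhere` are stand-ins for whatever transports you actually use (the same Kafka topic, and some other queueing system).

```python
MAX_ATTEMPTS = 3  # "X" in the comment; pick per workload

def consume(msg, work, replay, send_elsewhere):
    """Try the work; on failure, bump the attempt counter and replay
    to the same topic, deferring to another queue past the limit."""
    try:
        work(msg["payload"])
    except Exception:
        attempts = msg.get("attempts", 0) + 1
        if attempts > MAX_ATTEMPTS:
            send_elsewhere(msg)  # e.g. a dead-letter / parking-lot queue
        else:
            replay({**msg, "attempts": attempts})
```

Note the trade-off this encodes: the happy path keeps strict ordering, and only persistently bad messages give up head-of-line position.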
]]></description><pubDate>Fri, 25 Apr 2025 16:34:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43795637</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43795637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43795637</guid></item><item><title><![CDATA[New comment by Misdicorl in "I wrote to the address in the GPLv2 license notice (2022)"]]></title><description><![CDATA[
<p>And now imagine how easy dealing with email spam would be if, unlike physical spam, its marginal fiscal cost were not 0. All the technology and tools available, and less than 1% of the viable spam surface area.</p>
]]></description><pubDate>Fri, 25 Apr 2025 00:03:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43788845</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43788845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43788845</guid></item><item><title><![CDATA[New comment by Misdicorl in "I wrote to the address in the GPLv2 license notice (2022)"]]></title><description><![CDATA[
<p>> the goal should be to ensure that anyone who wants to do a thing can, with as few third party requirements as possible.<p>This is a good starting point, but if you have no barriers then you get abuse problems which is why email is terrible. I remember being horrified in the 90s about attempts to charge 1 cent per email. Now I long for a world where that actually happened.</p>
]]></description><pubDate>Thu, 24 Apr 2025 14:52:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=43783539</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43783539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43783539</guid></item><item><title><![CDATA[New comment by Misdicorl in "JEP draft: Prepare to make final mean final"]]></title><description><![CDATA[
<p>It wouldn't, but it might preclude using (future) optimizations that forgo those de-optimization hooks?</p>
]]></description><pubDate>Mon, 31 Mar 2025 21:46:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=43540337</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43540337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43540337</guid></item><item><title><![CDATA[New comment by Misdicorl in "JEP draft: Prepare to make final mean final"]]></title><description><![CDATA[
<p>I suppose serializing the JVM state itself to avoid the cold start problem might take advantage of this?</p>
]]></description><pubDate>Mon, 31 Mar 2025 21:26:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=43540170</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=43540170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43540170</guid></item><item><title><![CDATA[New comment by Misdicorl in "Physics is unreasonably good at creating new math"]]></title><description><![CDATA[
<p>The past decade is a difficult framing to ask the question in. Notable breakthrough results are usually understood in hindsight, and a decade just isn't a lot of time for that context and understanding to develop. Science also doesn't necessarily develop in this way, with consistent progress every X timespan. Usually you get lots and lots of breakthroughs all at once as an important paradigm is shattered and a new one is installed. Then observations with tiny differences slowly pile up, and a very blurry/messy picture of the problems with the new paradigm takes shape. But none of those things feels like a breakthrough, especially to a layman.<p>That said: I'll submit the first detection of gravitational waves, from two black holes merging, in 2015 as meeting the bar of "notable breakthrough in the last decade".</p>
]]></description><pubDate>Wed, 04 Sep 2024 18:07:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=41448680</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=41448680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41448680</guid></item><item><title><![CDATA[New comment by Misdicorl in "The 4-chan Go programmer"]]></title><description><![CDATA[
<p>Ignoring inexperience/incompetence as a reason (which, admittedly, is a likely root cause), domain fuzziness is often a good explanation here. If you aren't extremely familiar with a domain and don't know the shape of the solution you need a priori, all those levels of indirection allow you to keep lots of work "online" while (replacing, refactoring, experimenting with) a particular layer. The intent should be to "find" the right shape with all the indirection in place and then rewrite with a single correct shape without all the indirection. Of course, the rewrite never actually happens =)</p>
]]></description><pubDate>Wed, 28 Aug 2024 20:17:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=41383806</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=41383806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41383806</guid></item><item><title><![CDATA[New comment by Misdicorl in "What is the longest known sequence that repeats in Pi? (homelab)"]]></title><description><![CDATA[
<p>You didn't understand the original claim, which is that because Pi has an infinite decimal representation, every subsequence of it has a repeat.</p>
]]></description><pubDate>Wed, 28 Aug 2024 19:12:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=41383109</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=41383109</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41383109</guid></item><item><title><![CDATA[New comment by Misdicorl in "What is the longest known sequence that repeats in Pi? (homelab)"]]></title><description><![CDATA[
<p>This isn't true, as you can build an infinite sequence that never repeats. An example sequence in binary (where the number of 0s between each 1 increases by 1 every time):<p>01001000100001...</p>
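The counterexample is easy to generate; a minimal sketch, where block k is k zeros followed by a 1 (the function name is just illustrative):

```python
def nonrepeating_prefix(n_blocks):
    # Block k = k zeros then a 1. The gaps between 1s grow without bound,
    # so the full infinite sequence can never become periodic.
    return "".join("0" * k + "1" for k in range(1, n_blocks + 1))

print(nonrepeating_prefix(4))  # 01001000100001
```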
]]></description><pubDate>Wed, 28 Aug 2024 18:47:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=41382831</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=41382831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41382831</guid></item><item><title><![CDATA[New comment by Misdicorl in "This is a teenager"]]></title><description><![CDATA[
<p>The goal can't be to solve every desperation case. But if the program wouldn't allow individuals living in dangerous and exploitative situations to confidently leave them (financially), I'd argue the program was a failure.</p>
]]></description><pubDate>Wed, 17 Apr 2024 01:35:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=40059470</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=40059470</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40059470</guid></item><item><title><![CDATA[New comment by Misdicorl in "This is a teenager"]]></title><description><![CDATA[
<p>True. It would be nice to decouple it from children and expand its scope of economic impact dramatically.</p>
]]></description><pubDate>Tue, 16 Apr 2024 21:18:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=40057351</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=40057351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40057351</guid></item><item><title><![CDATA[New comment by Misdicorl in "This is a teenager"]]></title><description><![CDATA[
<p>$1000/month is $12,000/year. That's far, far below poverty levels. It needs to be enough that people can <i>choose</i> to supplement in order to engage with luxury consumption. If people are <i>forced</i> to supplement just to survive, then we need to maintain the minimum wage and a whole host of other weird baggage.</p>
]]></description><pubDate>Tue, 16 Apr 2024 20:46:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=40056975</link><dc:creator>Misdicorl</dc:creator><comments>https://news.ycombinator.com/item?id=40056975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40056975</guid></item></channel></rss>