<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: olooney</title><link>https://news.ycombinator.com/user?id=olooney</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 18:05:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=olooney" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Why the Monty Hall Problem Drives People Crazy]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.oranlooney.com/post/monty-hall/">https://www.oranlooney.com/post/monty-hall/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48147539">https://news.ycombinator.com/item?id=48147539</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 15 May 2026 12:00:51 +0000</pubDate><link>https://www.oranlooney.com/post/monty-hall/</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=48147539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48147539</guid></item><item><title><![CDATA[New comment by olooney in "A Visual Guide to Gemma 4"]]></title><description><![CDATA[
<p>Incredibly detailed! The vision transformer stuff in particular is very useful to know. It's interesting that the token budgets are so much higher (up to 1120) than GPT, which uses 170 tokens per 512x512 tile. I wonder if that will lead to more granular spatial vision, something GPT struggles with.</p>
]]></description><pubDate>Fri, 03 Apr 2026 17:47:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47629730</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=47629730</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47629730</guid></item><item><title><![CDATA[New comment by olooney in "Chinese hackers use Anthropic's Claude"]]></title><description><![CDATA[
<p>This isn't surprising; the majority of programmers are using LLMs, and Claude is pretty good for coding. Penetration testing is also a pretty good fit for an agentic loop - you run a tool, read the output, and decide on your next move, rinse and repeat.<p>In VSCode + GitHub Copilot agent mode, it can propose a bash command to run; when you confirm, it runs it in a console and can see the output, so it can fix errors immediately if there are any. It tends to go off the rails pretty quickly if things start going badly wrong, but it can complete simple tasks with supervision.<p>Over two years ago, when this LLM stuff was pretty new, I saw a demo that put ChatGPT in a loop with Metasploit that could crack some of the easy HTB challenges automatically - I remember thinking it was the single most irresponsible use of AI I'd ever seen. While everybody else was trying to sandbox these things for safety, this project was just handing it command line access to the tools it would need to break confinement.<p>It seems there's actually a whole bunch of similar tools these days, marketed as "automated penetration testing," such as Cybersecurity AI[1]. I used to think the whole cyberpunk "hackers can get in anywhere if they just type hard enough" trope was stupid because with cryptography the defender always has a huge advantage, but now we're looking at a world where AI is automating attacks at scale, while the defenders are vibe coding slop they have no idea how to secure, so maybe Gibson was right all along.<p>[1]: <a href="https://github.com/aliasrobotics/cai" rel="nofollow">https://github.com/aliasrobotics/cai</a></p>
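<p>The run-a-tool, read-the-output, decide-the-next-move loop described above can be sketched in a few lines of Python. This is a minimal illustration only; <code>model</code>, <code>run_tool</code>, and <code>confirm</code> are hypothetical stand-ins, not any real Copilot or Claude API:</p>

```python
def agent_loop(task, model, run_tool, confirm, max_steps=10):
    """Generic agentic loop: the model proposes a command, a human
    confirms it, the command runs, and its output is fed back so the
    model can react (e.g. fix errors) on the next iteration."""
    history = [task]
    for _ in range(max_steps):
        action = model(history)          # e.g. a shell command to try next
        if action is None:               # model decides the task is done
            break
        if not confirm(action):          # human-in-the-loop gate
            history.append("command rejected by user")
            continue
        output = run_tool(action)        # run it and capture the output
        history.append(output)           # ...so the model sees the result
    return history
```

<p>The human-confirmation gate is the only thing separating "helpful agent mode" from the irresponsible demo described above.</p>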
]]></description><pubDate>Fri, 14 Nov 2025 13:32:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45926548</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45926548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45926548</guid></item><item><title><![CDATA[New comment by olooney in "It's insulting to read AI-generated blog posts"]]></title><description><![CDATA[
<p>I'm pretty sure my mistake was assuming people had read the article and knew the author veered wildly halfway through towards also advocating against using LLMs for proofreading and that you should "just let your mistakes stand." Obviously no one reads the article, just the headline, so they assumed I was disagreeing with that (which I was not). Other comments that expressed the same sentiment as mine but also quoted that part <i>did</i> manage to get upvoted.<p>This is an emotionally charged subject for many, so they're operating in Hurrah/Boo mode[1]. After all, how can we defend the value of careful human thought if we don't rush blindly to the defense of every low-effort blog post with a headline that signals agreement with our side?<p>[1]: <a href="https://en.wikipedia.org/wiki/Emotivism" rel="nofollow">https://en.wikipedia.org/wiki/Emotivism</a></p>
]]></description><pubDate>Mon, 27 Oct 2025 20:06:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45725658</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45725658</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45725658</guid></item><item><title><![CDATA[New comment by olooney in "It's insulting to read AI-generated blog posts"]]></title><description><![CDATA[
<p>Here is a piece I wrote recently on that very subject. Why don't you read that to see if I'm a human writer?<p><a href="https://www.oranlooney.com/post/em-dash/" rel="nofollow">https://www.oranlooney.com/post/em-dash/</a></p>
]]></description><pubDate>Mon, 27 Oct 2025 16:43:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45723148</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45723148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45723148</guid></item><item><title><![CDATA[New comment by olooney in "It's insulting to read AI-generated blog posts"]]></title><description><![CDATA[
<p>I don't see the objection to using LLMs to check for grammatical mistakes and spelling errors. That strikes me as a reactionary and dogmatic position, not a rational one.<p>Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.</p>
]]></description><pubDate>Mon, 27 Oct 2025 16:39:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45723080</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45723080</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45723080</guid></item><item><title><![CDATA[New comment by olooney in "The Missing Semester of Your CS Education (2020)"]]></title><description><![CDATA[
<p>I just don't want them to design a data model with a single `numeric(10,2)` column for "sale_price", or hard-code their PowerBI report to show the last five years of data using whatever the exchange rate was on the day they wrote the report. You're right - it could be covered in five minutes, but since we don't currently bother, every junior has to learn it the hard way...</p>
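<p>A minimal Python sketch of the two pitfalls above; the <code>MINOR_UNITS</code> table and the tuple layout are illustrative assumptions, not a real schema:</p>

```python
from decimal import Decimal

# Floats pick up rounding error on money amounts...
assert 0.10 + 0.20 != 0.30
# ...while fixed-point decimals do not.
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")

# A bare numeric(10,2) also bakes in an assumption about minor units:
# different currencies use different numbers of decimal places.
MINOR_UNITS = {"USD": 2, "JPY": 0, "BHD": 3}  # illustrative subset

# So a price needs its currency stored alongside the amount (and, for
# reporting, the date of the exchange rate used to convert it).
price = (Decimal("19.99"), "USD")
```

<p>The point being: an amount without its currency, or a report without its exchange-rate date, is a bug waiting for a junior to find the hard way.</p>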
]]></description><pubDate>Sat, 25 Oct 2025 12:49:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45703496</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45703496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45703496</guid></item><item><title><![CDATA[New comment by olooney in "What is intelligence? (2024)"]]></title><description><![CDATA[
<p>From a rhetorical perspective, it's an extended "Yes-set"[1] argument or persuasion sandwich[2]. You see it a lot with cult leaders, motivational speakers, or political pundits. The problem is that you have an unpopular idea that isn't very well supported. How do you smuggle it past your audience? You use a structure like this:<p>* Verifiable Fact<p>* Obvious Truth<p>* Widely Held Opinion<p>* Your Nonsense Here<p>* Tautological Platitude<p>This gets your audience nodding along in "Yes" mode and makes you seem credible, so they tend to give you the benefit of the doubt when they hit something they aren't so sure about. Then, before they have time to really process their objection, you move on to and finish with something they can't help but agree with.<p>The stuff on the history of computation and cybernetics is well researched with a flashy presentation, but it's not original nor, as you pointed out, does it form a single coherent thesis. Mixing in all the biology and movie stuff just dilutes it further. It's just a grab bag of interesting things added to build credibility. Which is a shame, because it's exactly the kind of stuff that's relevant to my interests[3][4].<p>> "Your manuscript is both good and original; but the part that is good is not original, and the part that is original is not good." 
- Samuel Johnson<p>The author clearly has an Opinion™ about AI, but instead of supporting it they're trying to smuggle it through in a sandwich, which I think is why you have that intuitive allergic reaction to it.<p>[1]: <a href="https://changingminds.org/disciplines/sales/closing/yes-set_close.htm" rel="nofollow">https://changingminds.org/disciplines/sales/closing/yes-set_...</a><p>[2]: <a href="https://en.wikipedia.org/wiki/Compliment_sandwich" rel="nofollow">https://en.wikipedia.org/wiki/Compliment_sandwich</a><p>[3]: <a href="https://www.oranlooney.com/post/history-of-computing/" rel="nofollow">https://www.oranlooney.com/post/history-of-computing/</a><p>[4]: <a href="https://news.ycombinator.com/item?id=45220656#45221336">https://news.ycombinator.com/item?id=45220656#45221336</a></p>
]]></description><pubDate>Sat, 25 Oct 2025 12:42:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45703452</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45703452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45703452</guid></item><item><title><![CDATA[New comment by olooney in "The Missing Semester of Your CS Education (2020)"]]></title><description><![CDATA[
<p>I've been building up a similar list of topics that nearly every programmer will at some point be forced to learn against their will and which are not adequately covered in undergrad:<p>* Text file encodings, in particular Unicode, UTF-8, Mojibake<p>* Time: Time Zones, leap day / seconds, ISO-8601<p>* Locales, i18n, and local date/number formats<p>* IEEE 754 floats: NaN and inf, underflow, overflow, why 0.1 + 0.2 != 0.3, ±0, log1p<p>* Currencies, comma/dot formats, fixed-point decimal representations, and exchange rates<p>* Version strings, dependencies, semantic versioning, backwards compatibility<p>There's another list for web/REST developers, and one for data scientists, but this is the core set.<p>What'd I miss?</p>
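<p>Several of the IEEE 754 items in the list can be demonstrated directly in a few lines of Python (a quick sketch, using only the standard library):</p>

```python
import math

# 0.1 and 0.2 have no exact binary representation, so rounding
# error makes the sum differ from the literal 0.3.
assert 0.1 + 0.2 != 0.3
assert math.isclose(0.1 + 0.2, 0.3)   # compare with a tolerance instead

# NaN is not equal to anything, including itself.
nan = float("nan")
assert nan != nan and math.isnan(nan)

# Multiplying past the largest double (~1.8e308) overflows to inf.
assert 1e308 * 10 == math.inf

# IEEE 754 has a signed zero: -0.0 == 0.0, but the sign survives.
assert -0.0 == 0.0
assert math.copysign(1.0, -0.0) == -1.0

# log1p avoids catastrophic loss of precision near zero:
# 1 + 1e-20 rounds to exactly 1.0, so the naive log collapses to 0.
assert math.log(1 + 1e-20) == 0.0
assert math.log1p(1e-20) > 0.0
```

<p>Every line of this is a question some junior will eventually ask in a panic, which is the argument for teaching it up front.</p>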
]]></description><pubDate>Sat, 25 Oct 2025 11:42:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45703073</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45703073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45703073</guid></item><item><title><![CDATA[A Modest Definition of Human Consciousness]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.oranlooney.com/post/em-dash/">https://www.oranlooney.com/post/em-dash/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45680936">https://news.ycombinator.com/item?id=45680936</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 23 Oct 2025 12:11:24 +0000</pubDate><link>https://www.oranlooney.com/post/em-dash/</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45680936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45680936</guid></item><item><title><![CDATA[New comment by olooney in "Art Appreciation 2525"]]></title><description><![CDATA[
<p>Soon galleries will have signs saying: "All works verified human-made. Please feel something."</p>
]]></description><pubDate>Wed, 22 Oct 2025 19:25:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45673944</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45673944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45673944</guid></item><item><title><![CDATA[New comment by olooney in "What do we do if SETI is successful?"]]></title><description><![CDATA[
<p>What's the original source for this Sagan quote?<p>> carl sagan called METI “deeply unwise and immature"<p>It's repeated <i>ad nauseam</i> online, but always verbatim, just those few words and never a full passage, and never with a citation. In other words, it has all the hallmarks of an apocryphal quote or misattribution.<p>The reason I'm suspicious is because Sagan contributed to the Arecibo message[1], which is <i>literally</i> sending such a radio signal, and the Voyager disc[2], which is similar. He even wrote an entire sci-fi novel[3] about it.<p>He describes radio contact in generally positive and hopeful terms in his book <i>Cosmos</i>. He of course acknowledges the dangers of encountering a more technologically advanced civilization, but he goes out of his way to contrast the frightening example of the Aztecs with other more peaceful first encounters such as the Tlingit. He also argues that any significantly more advanced species that had survived millions of years would necessarily have achieved zero population growth and would likely be peaceful. You don't have to take my word for it, you can read his own words in the Encyclopedia Galactica chapter of his book on the Internet Archive[4].<p>So, if the quote you cited were true, it would represent a late-in-life and somewhat surprising change of heart from cautious optimism to "dark forest" style paranoia. 
Personally, I believe it's simply one of the many falsely attributed quotes floating around the Internet.<p>[1]: <a href="https://en.wikipedia.org/wiki/Arecibo_message" rel="nofollow">https://en.wikipedia.org/wiki/Arecibo_message</a><p>[2]: <a href="https://en.wikipedia.org/wiki/Voyager_Golden_Record" rel="nofollow">https://en.wikipedia.org/wiki/Voyager_Golden_Record</a><p>[3]: <a href="https://en.wikipedia.org/wiki/Contact_(novel)" rel="nofollow">https://en.wikipedia.org/wiki/Contact_(novel)</a><p>[4]: <a href="https://archive.org/details/sagancosmos/page/n184/mode/1up" rel="nofollow">https://archive.org/details/sagancosmos/page/n184/mode/1up</a></p>
]]></description><pubDate>Wed, 22 Oct 2025 07:17:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45665804</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45665804</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45665804</guid></item><item><title><![CDATA[Algorithmic Underground]]></title><description><![CDATA[
<p>Article URL: <a href="https://jmsdnns.com/tech/algo-underground/">https://jmsdnns.com/tech/algo-underground/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45658302">https://news.ycombinator.com/item?id=45658302</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 21 Oct 2025 17:06:10 +0000</pubDate><link>https://jmsdnns.com/tech/algo-underground/</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45658302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45658302</guid></item><item><title><![CDATA[Art Appreciation 2525]]></title><description><![CDATA[
<p>A screen flashes up another image. The man in the examination chair stares intently at it. A minute goes by, then two. Finally he reaches a tentative hand towards one of two buttons in front of him, the one labelled, "Not AI." A moment's hesitation, and then he presses down. The room explodes with simulated confetti and the noise of horns blowing. A large badge appears on the screen, reading, "You have scored 58% accuracy in Art Appreciation! Congratulations, you are now a certified Art Critic!"<p>This is how humans now look at art. They look at a piece that catches their eye, and 100% of their attention is focused on the question, "is it real?" Questions of beauty or affect hardly enter their calculations. Their focus is on subtle shifts in perspective or scale, in the minute details of hands or feet.<p>God knows what they would make of a Rubens.<p>You might think the worst part is when they mistakenly dismiss a real artist's work, the labor of hours and culmination of frenzied dreams, after a single glance at it on the most spurious grounds. Or you might think it's the downstream effect of all these false positives, as artists intentionally avoid anything that might be interpreted as AI generated and thereby put themselves in a box inimical to creativity and free expression. While those things are bad, they aren't the worst part.<p>No, the worst part is when the viewer is done with their judgment, when they have decided that the piece is "real art." Because that is when they turn away, their job done, their curiosity satisfied. They have passed the test, they have solved the puzzle. Now they can return their attention to more interesting tasks, such as scrolling through an endless river of content. On to the next image. Oh wait—is <i>that</i> one AI? Engaged in an endless loop of simple binary discernment, there is no need to sit with the art, to let it work its subtle magic on our emotional affect, to question how we feel and why. 
Such things are simply not needed for the all-important business of classifying.<p>We can no longer see art.<p>We can no longer engage with it in any meaningful way.<p>50,000 years of humans expressing themselves through art is now over.<p>AI art, as worthless as it is, has somehow won the day; and worse, has somehow managed to drag all art into the gutter with it. The mere possibility that art <i>might</i> be AI generated is always there, the AI-or-Not classification process that we've trained our brains to perform is always running in the background, always that little voice of doubt in the back of our minds, always pulling us away slightly from what's in front of us.<p>Even you, reading this now, are wondering, "did ChatGPT write this? It's kind of a generic anti-AI screed, but the style is kinda different... Ah look! There's an em dash! Got 'em! Or maybe he's just one of those typography nerds..." But you'll never know for sure, and by the time you've made up your mind you'll have already forgotten the message. Oh well. On to the next post.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45646022">https://news.ycombinator.com/item?id=45646022</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 20 Oct 2025 16:47:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45646022</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45646022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45646022</guid></item><item><title><![CDATA[A Random Walk in ℤ⁵]]></title><description><![CDATA[
<p>Article URL: <a href="https://gist.github.com/olooney/d98f8e862a11974f36b3620f517df006">https://gist.github.com/olooney/d98f8e862a11974f36b3620f517df006</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45627525">https://news.ycombinator.com/item?id=45627525</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 18 Oct 2025 14:09:19 +0000</pubDate><link>https://gist.github.com/olooney/d98f8e862a11974f36b3620f517df006</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45627525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45627525</guid></item><item><title><![CDATA[New comment by olooney in "EQ: A video about all forms of equalizers"]]></title><description><![CDATA[
<p>Videos don't do well on Hacker News, but I encourage people to at least watch the first couple minutes of this one. The oscilloscope visual overlay is interesting and the editing is really good.<p>Also, given the topic (audio equalizers) there's no way it could have been a blog post.</p>
]]></description><pubDate>Sat, 18 Oct 2025 11:14:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45626435</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45626435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45626435</guid></item><item><title><![CDATA[New comment by olooney in "You are the scariest monster in the woods"]]></title><description><![CDATA[
<p>This is going to sound a little weird, but I think trust is part of the problem.<p>Canadians have a high-trust culture, but their stock market is historically full of scams[1], and some analysts think it's causally related. (It may just be because TSXV is a wild west, or because companies would IPO on NYSE or Nasdaq if they were legit, but it could be the trust thing. Fits my narrative, anyway.)<p>When I look at politics, crypto rug pulls, meme stocks with P/E ratios over 200, Aum[2] and similar cults, or many other modern problems I don't see negotiations breaking down because of a lack of trust; I see a bunch of people placing far <i>too much</i> trust in sketchy leaders and ideas backed by scant evidence. A little skepticism would go a long way.<p>That's why I emphasize <i>robust</i> coordination: more due diligence, more transparency, more fraud detection, more skepticism, more financial literacy, more education in general. There's a cost associated with all this, sure, but it still gets you into a situation where the interaction is a coordination game[3] and the Nash equilibrium is Pareto-efficient. Thus, we fall into the "pit of success" and naturally cooperate in our own best interests.<p>There's nothing wrong with empathy, altruism, or charity, but they are very far from universal. You need to base your society on a firm foundation of robust coordination, and then you can have those things afterwards, as a little treat.<p>[1]: <a href="https://en.wikipedia.org/wiki/Vancouver_Stock_Exchange" rel="nofollow">https://en.wikipedia.org/wiki/Vancouver_Stock_Exchange</a><p>[2]: <a href="https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack" rel="nofollow">https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack</a><p>[3]: <a href="https://en.wikipedia.org/wiki/Coordination_game" rel="nofollow">https://en.wikipedia.org/wiki/Coordination_game</a></p>
]]></description><pubDate>Wed, 15 Oct 2025 23:56:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45599862</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45599862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45599862</guid></item><item><title><![CDATA[New comment by olooney in "You are the scariest monster in the woods"]]></title><description><![CDATA[
<p>I think this is interesting and partially true: humans are scary. But it's important to remember the opposite is true as well: humans are the most <i>cooperative</i> species out there by a wide margin.<p>Eusocial insects and pack animals are in a distant 2nd and 3rd place: they generally don't cooperate much past their immediate kin group. Only humans create vast networks of trade and information sharing. Only humans establish complex systems to pool risk, or undertake public works for the common good.<p>In fact, a big part of the reason we <i>are</i> so scary is that ability to coordinate action. Ask any mammoth. Ask the independent city states conquered by Alexander the Great. Ask Napoleon as he faced the coalition force at Waterloo.<p>We are victims of our own success: the problems of the modern world are those of coordination mechanisms so effective and powerful that they become very attractive targets for bad actors and so are under siege, at constant risk of being captured and subverted. In a word, the problem of robust governance.<p>Despite the challenges, it <i>is</i> a solvable problem: every day, through due diligence, attestations, contract law, earnest money, and other such mechanisms people who do not trust each other in the least and have every incentive to screw over the other party are able to successfully negotiate win-win deals for life altering sums of money, whether that's buying a house or selling a business. Every century sees humans design larger, more effective, more robust mechanisms of cooperation.<p>It's slow: it's like debugging when someone is red teaming you, trying to find every weak point to exploit. But the long term trend is the emergence of increasingly robust systems. And it suggests a strategy for AI and AGI: find a way to cooperate with it. Take everything we've learned about coordinating with other people and apply the same techniques. 
That's what humans are good at.<p>This, I think, is a more useful framing than thinking of humans as "scary."</p>
]]></description><pubDate>Wed, 15 Oct 2025 17:30:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45595886</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45595886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45595886</guid></item><item><title><![CDATA[New comment by olooney in "Ask HN: What are you working on? (October 2025)"]]></title><description><![CDATA[
<p>Yes, I was thinking of `matmul()`, sorry about that! The visualization is everything I hoped:<p><a href="https://x.com/oranlooney/status/1977728062289555967" rel="nofollow">https://x.com/oranlooney/status/1977728062289555967</a></p>
]]></description><pubDate>Mon, 13 Oct 2025 13:34:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45568175</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45568175</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45568175</guid></item><item><title><![CDATA[New comment by olooney in "Ask HN: What are you working on? (October 2025)"]]></title><description><![CDATA[
<p>For me, as for a lot of people, lack of sleep is the big one... if I build up 4+ hours of sleep debt over a week, I'm at risk. So anything you can do to make that easier to log, like integration with a sleep tracker, would be good.<p>Also, a plug for Oliver Sacks's <i>Migraine</i> which taught me a lot about migraine with aura.</p>
]]></description><pubDate>Mon, 13 Oct 2025 12:30:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45567620</link><dc:creator>olooney</dc:creator><comments>https://news.ycombinator.com/item?id=45567620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45567620</guid></item></channel></rss>