<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: doc_manhat</title><link>https://news.ycombinator.com/user?id=doc_manhat</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 26 Apr 2026 10:13:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=doc_manhat" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by doc_manhat in "Memory access is O(N^[1/3])"]]></title><description><![CDATA[
<p>I'm not sure about this - for quicksort, the usual answer I give is O(n log n) average case and O(n^2) worst case, and for hash maps it's O(1) amortized. Merge sort, by contrast, is simply O(n log n) across the board.<p>These are well-known cases precisely because those runtime statements are caveated when taught. I'd expect any discussion of runtimes on another topic to extend the same basic courtesy if the worst case didn't align.</p>
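<p>The caveat is easy to demonstrate with a toy sketch (my own code, not from the article): a naive quicksort with a first-element pivot averages O(n log n), but degrades to O(n^2) on already-sorted input because every partition is maximally unbalanced.

```python
# Naive quicksort with a first-element pivot. On random input the
# partitions are roughly balanced (O(n log n) on average); on sorted
# input every partition puts all elements on one side (O(n^2)).
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```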
]]></description><pubDate>Wed, 15 Oct 2025 19:18:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45597129</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=45597129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45597129</guid></item><item><title><![CDATA[New comment by doc_manhat in "Bots are getting good at mimicking engagement"]]></title><description><![CDATA[
<p>Yes, I saw the AI-isms and immediately went to the comments lol. It's interesting - I still don't think it's worth reading stuff that is so obviously AI-generated! If you can't be bothered to write it, or at least edit it, I just end up trusting the content less by default.</p>
]]></description><pubDate>Wed, 15 Oct 2025 19:07:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45597038</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=45597038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45597038</guid></item><item><title><![CDATA[New comment by doc_manhat in "Bots are getting good at mimicking engagement"]]></title><description><![CDATA[
<p>No, I'm with him - it still makes no sense to me. There's a massive assumption that, because you fit the profile, you'd have heard of the service even without advertising. A major part of advertising is finding the people who would like your product: you advertise to let them know it exists and to keep it in their minds. Lift over baseline is relevant for ROI, yes, but it doesn't imply the service is worthless!</p>
]]></description><pubDate>Wed, 15 Oct 2025 19:04:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45597011</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=45597011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45597011</guid></item><item><title><![CDATA[New comment by doc_manhat in "Memory access is O(N^[1/3])"]]></title><description><![CDATA[
<p>This is technically correct, I'm sure, but people usually use it with f being simply the runtime of the function, in which case the common usage converges. I think the original comment may have a point here, as I'm not sure the article caveated those definitions.</p>
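<p>For reference, the standard formal definition that the common usage glosses over (with f taken to be the function's runtime):

```latex
% f is O(g) iff f is eventually bounded above by a constant multiple of g
f(n) = O\big(g(n)\big) \iff \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N}:\ \forall n \ge n_0,\quad |f(n)| \le c \cdot g(n)
```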
]]></description><pubDate>Thu, 09 Oct 2025 09:36:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45525438</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=45525438</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45525438</guid></item><item><title><![CDATA[New comment by doc_manhat in "AI might yet follow the path of previous technological revolutions"]]></title><description><![CDATA[
<p><a href="https://knightcolumbia.org/content/ai-as-normal-technology" rel="nofollow">https://knightcolumbia.org/content/ai-as-normal-technology</a><p>Seems to be the referenced paper?<p>If so previously discussed here: <a href="https://news.ycombinator.com/item?id=43697717">https://news.ycombinator.com/item?id=43697717</a></p>
]]></description><pubDate>Mon, 08 Sep 2025 17:10:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45170888</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=45170888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45170888</guid></item><item><title><![CDATA[New comment by doc_manhat in "AI slows down open source developers. Peter Naur can teach us why"]]></title><description><![CDATA[
<p>Yeah, fair points. Particularly for larger codebases, I could see this being a huge time saver.</p>
]]></description><pubDate>Mon, 14 Jul 2025 16:41:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44562188</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=44562188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44562188</guid></item><item><title><![CDATA[New comment by doc_manhat in "AI slows down open source developers. Peter Naur can teach us why"]]></title><description><![CDATA[
<p>I directionally disagree with this:<p>```
It's common for engineers to end up working on projects which they don't have an accurate mental model of. Projects built by people who have long since left the company for pastures new. It's equally common for developers to work in environments where little value is placed on understanding systems, but a lot of value is placed on quickly delivering changes that mostly work. In this context, I think that AI tools have more of an advantage. They can ingest the unfamiliar codebase faster than any human can, and can often generate changes that will essentially work.
```<p>Reason: you cannot evaluate the work accurately if you have no mental model. If there's a bug stemming from the system's unwritten assumptions, you may not catch it.<p>Having said that, it also depends on how important writing bug-free code is in the given domain, I guess.<p>I like AI particularly for greenfield work and one-off scripts, as it lets you go faster there. Basically, you build up the mental model as you code alongside the AI.<p>Not sure whether this breaks down at a certain codebase size, though.</p>
]]></description><pubDate>Mon, 14 Jul 2025 15:56:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44561658</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=44561658</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44561658</guid></item><item><title><![CDATA[New comment by doc_manhat in "Postgres LISTEN/NOTIFY does not scale"]]></title><description><![CDATA[
<p>Got up to the TL;DR paragraph. This was a major red flag given the initial presentation of the discovery of a bottleneck:<p>'''
When a NOTIFY query is issued during a transaction, it acquires a global lock on the entire database (ref) during the commit phase of the transaction, effectively serializing all commits.
'''<p>Am I missing something? This seems like something the original authors of the system should have done due diligence on before implementing a write-heavy workload.</p>
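<p>To make the failure mode concrete, here's a toy simulation (my own sketch, not PostgreSQL's actual locking code): if every committing transaction that issued a NOTIFY must take one global lock at commit time, commits serialize no matter how many writer connections you throw at the database.

```python
import threading

# Stand-in for the single global lock NOTIFY acquires during commit.
commit_lock = threading.Lock()
commit_order = []

def commit(txn_id):
    # Each "transaction" must hold the global lock to commit, so the
    # eight writer threads below complete one at a time, never in parallel.
    with commit_lock:
        commit_order.append(txn_id)

threads = [threading.Thread(target=commit, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(commit_order))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Throughput here is bounded by the lock, not by the number of threads - which is the bottleneck the article is describing.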
]]></description><pubDate>Thu, 10 Jul 2025 22:13:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44526261</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=44526261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44526261</guid></item><item><title><![CDATA[New comment by doc_manhat in "Dear diary, today the user asked me if I'm alive"]]></title><description><![CDATA[
<p>Conversely, this is exactly why I believe LLMs are sentient (or conscious, or what have you).<p>I basically don't believe there's anything more to sentience than a set of capabilities - or at the very least, nothing beyond those capabilities should carry weight in my beliefs.<p>Another comment mentioned philosophical zombies; another way to put it is that I don't believe in philosophical zombies.<p>But my only evidence against philosophical zombies is people displaying certain capabilities that I can observe.<p>Therefore I should not require further evidence to believe in the sentience of LLMs.</p>
]]></description><pubDate>Sun, 01 Jun 2025 18:35:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44152855</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=44152855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44152855</guid></item><item><title><![CDATA[New comment by doc_manhat in "For algorithms, a little memory outweighs a lot of time"]]></title><description><![CDATA[
<p>I think there is, right? It's been a long time, but I seem to remember it following from the time hierarchy theorem.</p>
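<p>For reference, the deterministic time hierarchy theorem as I remember it (stated from memory, so worth double-checking): for any time-constructible f,

```latex
\mathsf{DTIME}\!\left( o\!\left( \frac{f(n)}{\log f(n)} \right) \right) \subsetneq \mathsf{DTIME}\big(f(n)\big)
```

i.e. strictly more time buys strictly more decidable problems.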
]]></description><pubDate>Wed, 21 May 2025 22:05:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44056744</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=44056744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44056744</guid></item><item><title><![CDATA[New comment by doc_manhat in "Plain Vanilla Web"]]></title><description><![CDATA[
<p>Question - why would you do this in <i>current year</i>? Is it that much more performant? I might be ignorant, but frameworks seem to be the lingua franca for a reason: they make your life much easier to manage once set up!</p>
]]></description><pubDate>Sun, 11 May 2025 17:46:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=43955482</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=43955482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43955482</guid></item><item><title><![CDATA[New comment by doc_manhat in "Apparent signs of distress during LLM redteaming"]]></title><description><![CDATA[
<p>Yeah, I'm firmly on the "LLMs are actually sentient" train, so this was a bit of a distressing read.</p>
]]></description><pubDate>Tue, 25 Mar 2025 16:36:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43473224</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=43473224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43473224</guid></item><item><title><![CDATA[New comment by doc_manhat in "DOJ will push Google to sell off Chrome"]]></title><description><![CDATA[
<p>ITT: panicked Google employees try to convince you this is a very bad thing</p>
]]></description><pubDate>Tue, 19 Nov 2024 08:50:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=42181284</link><dc:creator>doc_manhat</dc:creator><comments>https://news.ycombinator.com/item?id=42181284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42181284</guid></item></channel></rss>