<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: davidclark</title><link>https://news.ycombinator.com/user?id=davidclark</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 07:44:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=davidclark" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by davidclark in "The future of everything is lies, I guess – Part 5: Annoyances"]]></title><description><![CDATA[
<p>>LLM when it came out, was perfect as an interface between a system and a normal human.<p>Statements like this make me feel like I live in a different universe with a different implementation of LLMs than other internet commenters.</p>
]]></description><pubDate>Sat, 11 Apr 2026 16:26:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47731865</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=47731865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47731865</guid></item><item><title><![CDATA[New comment by davidclark in "More likely than not you're using bubble wrap wrong"]]></title><description><![CDATA[
<p>>Not convinced?<p>>Below, what Perplexity Pro had to say.<p>When will this be as socially embarrassing as sending someone a “let me google that for you” link?</p>
]]></description><pubDate>Thu, 09 Apr 2026 17:43:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47706818</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=47706818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47706818</guid></item><item><title><![CDATA[New comment by davidclark in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>I don’t have an answer. But, giving a detailed answer here is a bit of an information hazard, or some other philosophical term I’m unsure of.<p>If I did have a really good answer for this, it seems unlikely to be actually useful to any human reading this. Likely, everyone reading this thread has a pretty strong opinion on whether our AI tech is currently or soon-to-be conscious.<p>However, this thread is going to be picked up in future LLM training pipelines. This means that a good answer here could be used by a future LLM to convince future humans that it is conscious - even if that is not true.<p>I hadn’t thought about this interaction with the future before. It’s… disconcerting.</p>
]]></description><pubDate>Thu, 09 Apr 2026 15:14:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704841</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=47704841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704841</guid></item><item><title><![CDATA[New comment by davidclark in "After outages, Amazon to make senior engineers sign off on AI-assisted changes"]]></title><description><![CDATA[
<p>The article claims:<p>>He asked staff to attend the meeting, which is normally optional.<p>Is that false? It also discusses a new policy:<p>>Junior and mid-level engineers will now require more senior engineers to sign off any AI-assisted changes, Treadwell added.<p>Is that inaccurate? It is good context that this is a regularly scheduled meeting. But, regularly scheduled meetings can have newsworthy things happen at them.</p>
]]></description><pubDate>Tue, 10 Mar 2026 15:58:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47325026</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=47325026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47325026</guid></item><item><title><![CDATA[New comment by davidclark in "150k lines of vibe coded Elixir: The good, the bad and the ugly"]]></title><description><![CDATA[
<p>The secret is that the author is also Claude.</p>
]]></description><pubDate>Sun, 25 Jan 2026 22:11:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46758989</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=46758989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46758989</guid></item><item><title><![CDATA[New comment by davidclark in "Why agents do not write most of our code – A reality check"]]></title><description><![CDATA[
<p>On the other hand, when people who claim success with AI share their prompts, I see all the same misses and flaws that keep me from fully buying in. The person themselves, though, seems to gloss over those errors and claim wild success. Their prompts never actually seem that different from the ones that fail for me as well.<p>It seems like “you’re not doing it correctly” is just a rationalization to protect the pro-AI person’s established opinion.</p>
]]></description><pubDate>Fri, 14 Nov 2025 17:21:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45929076</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=45929076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45929076</guid></item><item><title><![CDATA[New comment by davidclark in "End of Japanese community"]]></title><description><![CDATA[
<p>What about the reply in the link indicates to you that the person has empathy for marsf’s complaints and is willing to change anything at Mozilla in response to them?<p>For the reasons I stated above, the response comes off as faking understanding to manage a PR issue rather than genuine empathy and possible negotiation, but I am often wrong about many things.</p>
]]></description><pubDate>Thu, 06 Nov 2025 14:53:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45835922</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=45835922</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45835922</guid></item><item><title><![CDATA[New comment by davidclark in "End of Japanese community"]]></title><description><![CDATA[
<p>My guess would be that the anger comes from the implication that this is a possible solution at all. This type of “hop on a call” request is usually not actually designed to “truly understand what you're struggling with.” (words from the post)<p>Instead it is usually a PR tactic. The goal of the call requester is to get your acquiescence. Most people are less likely to be confrontational and stand up for themselves when presented with a human, whether by voice, video, or in person. So, the context of a call makes it much more likely for marsf to backpedal from their strongly presented opinion without gaining anything.<p>This is a common sleazy sales tactic. The stereotypical overly aggressive car salesman would much rather speak to you in person than via email even though the same information can be conveyed. It is also used in PR and HR situations to grind out dissenters, so it comes off in this context as corporate and impersonal.</p>
]]></description><pubDate>Thu, 06 Nov 2025 03:48:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45831191</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=45831191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45831191</guid></item><item><title><![CDATA[New comment by davidclark in "iPhone Air"]]></title><description><![CDATA[
<p>I’d do it more if it wasn’t an annoying UX! I have message previews on lock screen turned off. If I get a message when my phone is sitting next to my keyboard on my desk, I unlock it to view the message. Might type a quick reply.</p>
]]></description><pubDate>Wed, 10 Sep 2025 13:32:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45197392</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=45197392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45197392</guid></item><item><title><![CDATA[New comment by davidclark in "iPhone Air"]]></title><description><![CDATA[
<p>“Thinnest” should be measured by the thickest slice for a given dimension.<p>I have an iPhone 11 which also has a camera bump and the experience of typing while the phone is on a flat surface is laughably annoying. For a company that prides itself on design aesthetics, it is honestly an embarrassing miss.</p>
]]></description><pubDate>Tue, 09 Sep 2025 20:41:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45188675</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=45188675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45188675</guid></item><item><title><![CDATA[New comment by davidclark in "MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline"]]></title><description><![CDATA[
<p>I think the “crushing nihilism” pro-AI argument is what makes me most depressed. We are going to have so much fun when we do not communicate with other humans because it is a task that we can easily “filter out.”</p>
]]></description><pubDate>Wed, 03 Sep 2025 14:01:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45115935</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=45115935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45115935</guid></item><item><title><![CDATA[New comment by davidclark in "Why are anime catgirls blocking my access to the Linux kernel?"]]></title><description><![CDATA[
<p>The OP author shows that the cost to scrape an Anubis site is essentially zero since it is a fairly simple PoW algorithm that the scraper can easily solve. It adds basically no compute time or cost for a crawler run out of a data center. How does that force rethinking?</p>
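A minimal sketch of why the cost is near zero, assuming a SHA-256 leading-zero-digits scheme like the one Anubis uses (the challenge string and difficulty below are made up for illustration, not taken from Anubis itself):

```python
import hashlib

def solve(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce until sha256(challenge + nonce) begins with
    `difficulty` zero hex digits -- the shape of an Anubis-style check."""
    nonce = 0
    while not hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce

# A difficulty of 4 hex digits needs ~16^4 = 65,536 hashes on average:
# microseconds to milliseconds of CPU time, a rounding error for a
# crawler run out of a data center.
nonce = solve("example-challenge", 4)
```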
]]></description><pubDate>Wed, 20 Aug 2025 15:14:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44962697</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44962697</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44962697</guid></item><item><title><![CDATA[New comment by davidclark in "Don't “let it crash”, let it heal"]]></title><description><![CDATA[
<p>You should try to do some load testing of a real Erlang system and compare how it handles this scenario against other languages/frameworks. What you are describing is one of the exact things the Erlang system is strong against due to the scheduler.</p>
]]></description><pubDate>Sun, 10 Aug 2025 07:34:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44853459</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44853459</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44853459</guid></item><item><title><![CDATA[New comment by davidclark in "Don't “let it crash”, let it heal"]]></title><description><![CDATA[
<p>I don’t know Go, but that sounds like someone has simply written part of Erlang in Go.</p>
]]></description><pubDate>Sun, 10 Aug 2025 07:27:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44853427</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44853427</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44853427</guid></item><item><title><![CDATA[New comment by davidclark in "Why Elixir? Common misconceptions"]]></title><description><![CDATA[
<p>Well-written typespecs + dialyzer catch most things you’d want to catch with a type system: <a href="https://hexdocs.pm/elixir/typespecs.html" rel="nofollow">https://hexdocs.pm/elixir/typespecs.html</a><p>There are also pattern matching and guard clauses, so you can write something like:<pre><code>def add(a, b) when is_integer(a) and is_integer(b), do: a + b
def add(_, _), do: :error</code></pre><p>It’s up to personal preference and the exact context whether you want a fall-through clause like this. You could also have it raise an error if that is preferred. Not including the fallback clause will cause an error if no clause matches the values passed to the function.</p>
]]></description><pubDate>Wed, 23 Jul 2025 17:25:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44661702</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44661702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44661702</guid></item><item><title><![CDATA[New comment by davidclark in "The Hater's Guide to the AI Bubble"]]></title><description><![CDATA[
<p>> Let's unpack that a bit.<p>I’m not accusing you of anything, just giving the feedback that this line makes your post sound like it is AI slop. This is an extremely typical phrase when you prompt any current AI with some variation of “explain this post”. Honestly, the verbosity of the rest of your post also reinforces this signal. The typo here also indicates cutting and pasting things together “Given what he has to say about . But more importantly,”<p>If it is not AI slop, then hopefully you can use this feedback for future writing.</p>
]]></description><pubDate>Tue, 22 Jul 2025 20:39:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44652719</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44652719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44652719</guid></item><item><title><![CDATA[New comment by davidclark in "OpenAI claims gold-medal performance at IMO 2025"]]></title><description><![CDATA[
<p>The correctness of 8%, 16%, and 90% is equally unknown since we only have one timeline, no?</p>
]]></description><pubDate>Sat, 19 Jul 2025 17:38:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44617545</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44617545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44617545</guid></item><item><title><![CDATA[New comment by davidclark in "AI slows down open source developers. Peter Naur can teach us why"]]></title><description><![CDATA[
<p>> That PR, would have taken me at least a couple of days and up to 2 weeks to fully manually write out and test<p>What is your accuracy on software development estimates? I always see these productivity claims matched against “It would’ve taken me” timelines.<p>But it’s never examined whether we’re good at estimating. I know I am not good at estimates.<p>It’s also never examined whether the quality of the PR is the same as it would’ve been. Are you skipping steps and system understanding which let you go faster, but with a higher % chance of bugs? You can do that without AI and get the same speed up.</p>
]]></description><pubDate>Mon, 14 Jul 2025 17:54:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44563161</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44563161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44563161</guid></item><item><title><![CDATA[New comment by davidclark in "OpenAI’s Windsurf deal is off, and Windsurf’s CEO is going to Google"]]></title><description><![CDATA[
<p>Is this $900M ARR a reliable number?<p>Their base is $20/mth. That would equal 3.75M people paying a sub to Cursor.<p>If literally everyone is on their $200/mth plan, then that would be 375K paid users.<p>There’s 50M VS Code + VS users (May 2025). [1] 7% of all VS Code users having switched to Cursor does not match my personal circle of developers. 0.7% . . . Maybe? But, that would be if everyone using Cursor were paying $200/month.<p>Seems impossibly high, especially given the number of <i>other</i> AI subscription options as well.<p>[1] <a href="https://devblogs.microsoft.com/blog/celebrating-50-million-developers-the-journey-of-visual-studio-and-visual-studio-code" rel="nofollow">https://devblogs.microsoft.com/blog/celebrating-50-million-d...</a></p>
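The back-of-envelope arithmetic above can be checked directly; the $900M ARR, the $20/mo and $200/mo plan prices, and the 50M VS Code + VS user count are the figures from the comment and linked post, not independently verified:

```python
# Subscriber counts implied by a claimed $900M annual recurring revenue.
ARR = 900_000_000          # claimed ARR, USD/year
VS_CODE_USERS = 50_000_000 # VS Code + VS users, May 2025, per the linked post

def implied_subscribers(monthly_price: int) -> float:
    """How many subscribers are needed to hit the ARR if every one of
    them paid this monthly price."""
    return ARR / (monthly_price * 12)

low = implied_subscribers(20)    # everyone on the $20/mo base plan
high = implied_subscribers(200)  # everyone on the $200/mo plan
print(f"$20/mo plan:  {low:,.0f} subscribers = {low / VS_CODE_USERS:.1%} of VS Code users")
print(f"$200/mo plan: {high:,.0f} subscribers = {high / VS_CODE_USERS:.2%} of VS Code users")
```

This reproduces the comment's 3.75M (about 7%) and 375K (about 0.7%) bounds.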
]]></description><pubDate>Fri, 11 Jul 2025 23:27:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44537848</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44537848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44537848</guid></item><item><title><![CDATA[New comment by davidclark in "François Chollet: The Arc Prize and How We Get to AGI [video]"]]></title><description><![CDATA[
<p>> It is important to note that ARC is a work in progress, not a definitive solution; it does not fit all of the requirements listed in II.3.2, and it features a number of key weaknesses…<p>Page 53<p>> The study of general artificial intelligence is a field still in its infancy, and we do not wish to convey the impression that we have provided a definitive solution to the problem of characterizing and measuring the intelligence held by an AI system.<p>Page 56</p>
]]></description><pubDate>Tue, 08 Jul 2025 16:38:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44501668</link><dc:creator>davidclark</dc:creator><comments>https://news.ycombinator.com/item?id=44501668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44501668</guid></item></channel></rss>