<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: stephendause</title><link>https://news.ycombinator.com/user?id=stephendause</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 07:36:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=stephendause" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by stephendause in "Ask HN: How do you safely give LLMs SSH/DB access?"]]></title><description><![CDATA[
<p>I don't know; I've never done something like that. If no one else answers, you can always ask Claude itself (or another chatbot). This kind of thing seems tricky to get right, so be careful!</p>
]]></description><pubDate>Wed, 14 Jan 2026 19:49:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46621865</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=46621865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46621865</guid></item><item><title><![CDATA[New comment by stephendause in "Ask HN: How do you safely give LLMs SSH/DB access?"]]></title><description><![CDATA[
<p>There is an example of [dis]allowing certain bash commands here: <a href="https://code.claude.com/docs/en/settings" rel="nofollow">https://code.claude.com/docs/en/settings</a><p>As for queries, you might be able to achieve the same thing with command-line tools if it's a `sqlite` database (I am not sure about other SQL DBs). If you want even more control than settings.json allows, you can use the Claude Code SDK.</p>
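<p>For concreteness, a permissions block along the lines described in those docs might look like this (a sketch only; the exact rule strings and the idea of funneling DB access through the sqlite3 CLI are illustrative, so check the linked settings page for the current syntax):</p>

```json
{
  "permissions": {
    "allow": [
      "Bash(sqlite3:*)"
    ],
    "deny": [
      "Bash(rm:*)",
      "Bash(curl:*)",
      "Bash(ssh:*)"
    ]
  }
}
```

<p>The idea is that the agent can only reach the database through the one CLI you allow, while destructive or network-facing commands are denied outright.</p>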
]]></description><pubDate>Wed, 14 Jan 2026 19:11:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46621106</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=46621106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46621106</guid></item><item><title><![CDATA[Six Principles for More Rigorous Evaluation of Cognitive Capacities]]></title><description><![CDATA[
<p>Article URL: <a href="https://aiguide.substack.com/p/on-evaluating-cognitive-capabilities">https://aiguide.substack.com/p/on-evaluating-cognitive-capabilities</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46621014">https://news.ycombinator.com/item?id=46621014</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 14 Jan 2026 19:07:54 +0000</pubDate><link>https://aiguide.substack.com/p/on-evaluating-cognitive-capabilities</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=46621014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46621014</guid></item><item><title><![CDATA[New comment by stephendause in "The next two years of software engineering"]]></title><description><![CDATA[
<p>Your story sounds similar to mine. There are some parts of programming at which I know I will never excel. I also don't have time in my life to spend lots of hours outside of work developing my skills. I think it's important to realize that the median software engineer is probably not doing these things either. Maybe the top 10% are? Something like that would be my guess. It's okay to not be in the top 10%!</p>
]]></description><pubDate>Mon, 12 Jan 2026 15:55:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46590128</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=46590128</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46590128</guid></item><item><title><![CDATA[New comment by stephendause in "A website to destroy all websites"]]></title><description><![CDATA[
<p>Beauty is in the eye of the beholder. My personal taste for the presentation of a piece of writing is that less is more. I usually find artwork that accompanies a text to be distracting. I love reading work that can stand on its own, evoking images in the mind. I also dislike animations that seem to be made for a certain scroll speed.<p>Having said all of that, I certainly don't think it's bad, nor is it a commentary on the arguments being made. It's just not my cup of tea.</p>
]]></description><pubDate>Fri, 02 Jan 2026 02:17:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46460642</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=46460642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46460642</guid></item><item><title><![CDATA[New comment by stephendause in "How uv got so fast"]]></title><description><![CDATA[
<p>Why? To me, hosting previous versions of an article in a public git repo adds transparency. Or perhaps you are talking about GitHub specifically?</p>
]]></description><pubDate>Sat, 27 Dec 2025 20:03:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46404762</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=46404762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46404762</guid></item><item><title><![CDATA[New comment by stephendause in "Two kinds of vibe coding"]]></title><description><![CDATA[
<p>> - Is the work faster? It sounds like it’s not faster.<p>The author didn't discuss the speed of the work very much. It is certainly true that LLMs can write code faster than humans, and sometimes that works well. What would be nice is an analysis of the productivity gains from LLM-assisted coding in terms of how long it took to do an entire project, start to finish.</p>
]]></description><pubDate>Fri, 19 Dec 2025 12:31:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46325110</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=46325110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46325110</guid></item><item><title><![CDATA[New comment by stephendause in "It's insulting to read AI-generated blog posts"]]></title><description><![CDATA[
<p>This is total speculation, but my guess is that human reviewers of AI-written text (whether code or natural language) are more likely to judge text with emoji check marks, or dart-targets, or whatever, to be correct. (My understanding is that many of these models are fine-tuned using humans who manually review their outputs.) In other words, LLMs were inadvertently trained to seem correct, and a little message that says "Boom! Task complete! How else may I help?" subconsciously leads you to think it's correct.</p>
]]></description><pubDate>Mon, 27 Oct 2025 20:30:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45725917</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=45725917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45725917</guid></item><item><title><![CDATA[New comment by stephendause in "A definition of AGI"]]></title><description><![CDATA[
<p>This is a good insight, but do you know of better ways to measure machines' abilities to solve problems in the "messy real world"?</p>
]]></description><pubDate>Mon, 27 Oct 2025 15:11:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45721901</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=45721901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45721901</guid></item><item><title><![CDATA[New comment by stephendause in "A definition of AGI"]]></title><description><![CDATA[
<p>I think it's not only AGI's potential for self-improvement that is revolutionary. Even having an AGI that one could clone for a reasonable cost and have it work nonstop with its clones on any number of economically valuable problems would be revolutionary on its own.</p>
]]></description><pubDate>Mon, 27 Oct 2025 14:23:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=45721333</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=45721333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45721333</guid></item><item><title><![CDATA[New comment by stephendause in "SWE-Bench Pro"]]></title><description><![CDATA[
<p>This is a key question in my opinion. It's one of the things that make benchmarking the SWE capabilities of LLMs difficult. It's usually impossible to know whether the LLM has seen a problem before, and coming up with new, representative problem sets is time-consuming.</p>
]]></description><pubDate>Mon, 22 Sep 2025 17:12:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45336463</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=45336463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45336463</guid></item><item><title><![CDATA[New comment by stephendause in "US High school students' scores fall in reading and math"]]></title><description><![CDATA[
<p>Jonathan Haidt has a lot of good material on this. He is leading the charge in encouraging parents to delay giving their child a phone until high school and not allowing them to have social media accounts until age 16.<p><a href="https://www.goodmorningamerica.com/family/story/author-suggests-guidelines-parents-kids-phones-social-media-108509992" rel="nofollow">https://www.goodmorningamerica.com/family/story/author-sugge...</a></p>
]]></description><pubDate>Tue, 09 Sep 2025 15:54:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45183794</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=45183794</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45183794</guid></item><item><title><![CDATA[New comment by stephendause in "Google debuts device-bound session credentials against session hijacking"]]></title><description><![CDATA[
<p>I could be wrong, but I believe the author is referring to cookies being used for session authentication as opposed to general session management.</p>
]]></description><pubDate>Thu, 28 Aug 2025 14:40:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45052770</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=45052770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45052770</guid></item><item><title><![CDATA[New comment by stephendause in "Babies made using three people's DNA are born free of mitochondrial disease"]]></title><description><![CDATA[
<p>> There is almost no children available for adoption<p>This is not true, at least in the United States. For one thing, there are many children in foster care who want to be adopted. It is also possible, though difficult and expensive, to adopt infants whose mothers are placing them for adoption. I am not saying it's an easy option or that everyone should do it, but it is an option.</p>
]]></description><pubDate>Sat, 19 Jul 2025 16:32:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44616911</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=44616911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44616911</guid></item><item><title><![CDATA[Why Are Computers Still So Dumb?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.theatlantic.com/technology/archive/2025/07/why-are-computers-still-so-dumb/683524/">https://www.theatlantic.com/technology/archive/2025/07/why-are-computers-still-so-dumb/683524/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44582117">https://news.ycombinator.com/item?id=44582117</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 16 Jul 2025 13:29:01 +0000</pubDate><link>https://www.theatlantic.com/technology/archive/2025/07/why-are-computers-still-so-dumb/683524/</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=44582117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44582117</guid></item><item><title><![CDATA[New comment by stephendause in "What Trump's Big Beautiful Bill means for Wi-Fi 6E and 7 users: It's not pretty"]]></title><description><![CDATA[
<p>I am interested in whether anyone knows how much precedent there is for this sort of situation. It seems like a certain portion of the spectrum was intended to be used for one thing, and now part of it might be sold for another purpose, forcing the original users to adapt to the reversal. How often has this sort of thing happened?</p>
]]></description><pubDate>Wed, 09 Jul 2025 18:50:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44513555</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=44513555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44513555</guid></item><item><title><![CDATA[New comment by stephendause in "I'm dialing back my LLM usage"]]></title><description><![CDATA[
<p>One point I haven't seen made elsewhere yet is that LLMs can occasionally make you <i>less</i> productive. If they hallucinate a promising-seeming answer and send you down a path that you wouldn't have gone down otherwise, they can really waste your time. I think on net, they are helpful, especially if you check their sources (which might not always back up what they are saying!). But it's good to keep in mind that sometimes doing it yourself is actually faster.</p>
]]></description><pubDate>Wed, 02 Jul 2025 18:17:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44447072</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=44447072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44447072</guid></item><item><title><![CDATA[Cloudflare Introduces Default Blocking of A.I. Data Scrapers]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.nytimes.com/2025/07/01/technology/cloudflare-ai-data.html">https://www.nytimes.com/2025/07/01/technology/cloudflare-ai-data.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44443480">https://news.ycombinator.com/item?id=44443480</a></p>
<p>Points: 429</p>
<p># Comments: 331</p>
]]></description><pubDate>Wed, 02 Jul 2025 13:28:56 +0000</pubDate><link>https://www.nytimes.com/2025/07/01/technology/cloudflare-ai-data.html</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=44443480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44443480</guid></item><item><title><![CDATA[New comment by stephendause in "Programming on 34 Keys (2022)"]]></title><description><![CDATA[
<p>My coworker has a similar setup and loves it. Personally, it feels diametrically opposed to the way that I like to use my keyboard. I don't even like holding Shift to type `{`, `_`, etc. when programming. I wish I had dedicated keys for those and other common symbols. I don't mind moving my hands a few inches at all, but for some reason, it feels cumbersome to me to hold down a key to activate another layer. To each their own, of course.</p>
]]></description><pubDate>Sun, 25 May 2025 14:01:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44087906</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=44087906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44087906</guid></item><item><title><![CDATA[New comment by stephendause in "Tesla driver arrested for homicide after running over motorcyclist on Autopilot"]]></title><description><![CDATA[
<p>Yes. You can be judged to be intoxicated even if you are under the limit.</p>
]]></description><pubDate>Wed, 24 Apr 2024 22:31:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=40150656</link><dc:creator>stephendause</dc:creator><comments>https://news.ycombinator.com/item?id=40150656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40150656</guid></item></channel></rss>