<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: philipswood</title><link>https://news.ycombinator.com/user?id=philipswood</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 23:45:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=philipswood" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by philipswood in "Show HN: QuickMailBites – email client that reads your AWS S3 bucket"]]></title><description><![CDATA[
<p>404</p>
]]></description><pubDate>Mon, 06 Apr 2026 20:43:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47666793</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=47666793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47666793</guid></item><item><title><![CDATA[New comment by philipswood in "Ask HN: Identity preservation vs. information transfer in LLMs"]]></title><description><![CDATA[
<p>I'm not sure that "it is clear that Claude does not have those things".<p>I AM sure that it is hard to conclusively show that Claude has experience and consciousness. Even Claude isn't sure about that.<p>But while it is absolutely true that "it is a word calculator" - unless you hold the position that human consciousness isn't neural[1] - I don't see how this is any different from saying human beings are neural activation pattern calculators.<p>If you're sure that your consciousness isn't neural - then fine: Claude isn't made of the right stuff, so it couldn't possibly be conscious. But state that assumption up-front.<p>If one opens up a person and looks at their nervous system, the individual neurons look complicated, but not especially mysterious.<p>Given how shockingly little we understand the brain/mind, it is hard to be certain enough of how we work; and given how little we know about how LLMs work at any of the many layers above the raw architecture, either position can be reasonably held, but not convincingly argued or demonstrated.<p>Feel free to think Claude isn't conscious - I can't prove to you that it is. And the amount of theory we would still need to learn to be able to is vast.<p>But don't expect me to be _certain_ that it isn't and couldn't be - you simply can't show that convincingly either.<p>[1]
Penrose thinks we have a quantum nature - so, sure, then no classical computer can be conscious.
Some, like Rupert Sheldrake, think it's a field phenomenon - very woo, but then maybe Claude has a morphic field as well?
Lots of people are sure we have a supernatural soul/spirit. One then needs to take up Claude's status with the Creator.</p>
]]></description><pubDate>Tue, 10 Mar 2026 13:12:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47322828</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=47322828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47322828</guid></item><item><title><![CDATA[New comment by philipswood in "We do not think Anthropic should be designated as a supply chain risk"]]></title><description><![CDATA[
<p>Which part of what he said is wrong?<p>> A brain is a collection of cells that transmit electrical signals and sodium. ...<p>That it is a collection of cells? Or that they transmit electrical signals and sodium?<p>Or do you feel that he's leaving out something important about how it works (like generated electrical fields or neural quantum effects)?</p>
]]></description><pubDate>Sun, 01 Mar 2026 18:43:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47209454</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=47209454</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209454</guid></item><item><title><![CDATA[New comment by philipswood in "We do not think Anthropic should be designated as a supply chain risk"]]></title><description><![CDATA[
<p>Dune quote:<p>> It is said that the Duke Leto blinded himself to the perils of Arrakis, that he walked heedlessly into the pit.<p>> *Would it not be more likely to suggest he had lived so long in the presence of extreme danger he misjudged a change in its intensity?*<p>Be careful of letting your deep, keen insight into the fundamental limits of a thing blind you to its consequences...<p>Highly competent people have been dead wrong about what is possible (and why) before:<p>> The most famous, and perhaps the most instructive, failures of nerve have occurred in the fields of aero- and astronautics. At the beginning of the twentieth century, scientists were almost unanimous in declaring that heavier-than-air flight was impossible, and that anyone who attempted to build airplanes was a fool. The great American astronomer, Simon Newcomb, wrote a celebrated essay which concluded…<p>>>    “The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.”<p>>Oddly enough, Newcomb was sufficiently broad minded to admit that some wholly new discovery — he mentioned the neutralization of gravity — might make flight practical. One cannot, therefore, accuse him of lacking imagination; his error was in attempting to marshal the facts of aerodynamics when he did not understand that science. His failure of nerve lay in not realizing that the means of flight were already at hand.</p>
]]></description><pubDate>Sun, 01 Mar 2026 07:12:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47204420</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=47204420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47204420</guid></item><item><title><![CDATA[New comment by philipswood in "Building SQLite with a small swarm"]]></title><description><![CDATA[
<p>I think we can now begin to experimentally test Conway's law and its corollaries.<p>Agreed, a flat set of workers configured like this is probably not the best configuration.<p>Can you imagine what an all-human team configured like this would produce?</p>
]]></description><pubDate>Mon, 16 Feb 2026 09:16:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47032755</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=47032755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47032755</guid></item><item><title><![CDATA[New comment by philipswood in "Lena by qntm (2021)"]]></title><description><![CDATA[
<p>Are you sure that guy who wakes up tomorrow after you've gone to sleep is you?<p>Or the one who wakes up after 10,000 sleeps?<p>I'm sure he's going to be quite different...<p>Maybe that dude (the one who woke up after you went to sleep) is another you, but slightly different. And you, you're just gone.</p>
]]></description><pubDate>Fri, 13 Feb 2026 19:13:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47006532</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=47006532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47006532</guid></item><item><title><![CDATA[New comment by philipswood in "Lena by qntm (2021)"]]></title><description><![CDATA[
<p>It might be one of the only reasonable-seeming ways to not die.<p>I can see the appeal.</p>
]]></description><pubDate>Fri, 13 Feb 2026 10:15:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47001067</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=47001067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47001067</guid></item><item><title><![CDATA[New comment by philipswood in "Coding agents have replaced every framework I used"]]></title><description><![CDATA[
<p>To be clear: I'm not engaging with your main point about whether LLMs are usable in software engineering or not.<p>I'm specifically addressing your use of the concept of determinism.<p>An LLM is a set of matrix multiplies and function applications. The only potentially non-deterministic step is selecting the next token from the final output, and that can be done deterministically.<p>By your strict use of the definition they absolutely can be deterministic.<p>But that is not actually interesting for the point at hand. The real point has to do with reproducibility, understandability and tolerances.<p>3blue1brown has a really nice set of videos showing how the LLM machinery fits together.</p>
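<p>A minimal sketch of that token-selection point (my own illustration, with made-up logits; numpy assumed): greedy argmax decoding is a pure function of the logits, while sampling is the one place randomness enters.<p><pre><code>import numpy as np

def greedy_step(logits):
    # Argmax is a pure function of the logits: same input,
    # same token, every run. Fully deterministic.
    return int(np.argmax(logits))

def sampled_step(logits, rng):
    # Temperature-1 sampling: the only stochastic step in the pipeline.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = np.array([1.2, 3.4, 0.5])
assert greedy_step(logits) == greedy_step(logits)  # same token every run
print(sampled_step(logits, rng))                   # varies with the seed
</code></pre>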
]]></description><pubDate>Sun, 08 Feb 2026 08:41:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46932527</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46932527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46932527</guid></item><item><title><![CDATA[New comment by philipswood in "Coding agents have replaced every framework I used"]]></title><description><![CDATA[
<p>Build systems may be deterministic in the narrow sense you use, but significant extra effort is required to make them reproducible.<p>Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.<p>Edit: added second paragraph</p>
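<p>A toy illustration of that distinction (mine, not from the thread; the timestamp input is a hypothetical stand-in for any environmental input):<p><pre><code>import hashlib

def build(src, timestamp):
    # Perfectly deterministic as a function of its inputs...
    artifact = f"{src}|built-at:{timestamp}"
    return hashlib.sha256(artifact.encode()).hexdigest()

# ...yet two honest runs of "the same build" differ, because the
# wall clock leaked in as an input:
print(build("main.c", 1700000000) == build("main.c", 1700000042))  # False

# Reproducible builds pin such inputs (cf. SOURCE_DATE_EPOCH):
print(build("main.c", 0) == build("main.c", 0))                    # True
</code></pre>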
]]></description><pubDate>Sat, 07 Feb 2026 17:57:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46925928</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46925928</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46925928</guid></item><item><title><![CDATA[New comment by philipswood in "1 kilobyte is precisely 1000 bytes?"]]></title><description><![CDATA[
<p>Yes, tomatoes ARE actually a fruit.<p>But really!?<p>I'll keep calling it in nice round powers of two, thank you very much.</p>
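<p>For the record, the two conventions side by side (a trivial sketch of my own):<p><pre><code>KB = 1000   # SI kilobyte, per the standard
KiB = 1024  # IEC kibibyte - the nice round power of two (2**10)

size = 4 * KiB  # e.g. a typical 4096-byte memory page
print(size, "bytes =", size / KB, "kB =", size // KiB, "KiB")
# 4096 bytes = 4.096 kB = 4 KiB
</code></pre>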
]]></description><pubDate>Wed, 04 Feb 2026 03:58:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46881285</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46881285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46881285</guid></item><item><title><![CDATA[New comment by philipswood in "AI will not solve world hunger"]]></title><description><![CDATA[
<p>Strangely enough, even mankind producing more food than it eats isn't enough to stop world hunger either.<p>(A fact that he does mention)</p>
]]></description><pubDate>Thu, 29 Jan 2026 11:12:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46808597</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46808597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46808597</guid></item><item><title><![CDATA[New comment by philipswood in "Everyone is wrong about AI and Software Engineering"]]></title><description><![CDATA[
<p>> Consider what happens when you build software professionally. You talk to stakeholders who do not know what they want and cannot articulate their requirements precisely. You decompose vague problem statements into testable specifications. You make tradeoffs between latency and consistency, between flexibility and simplicity, between building and buying. You model domains deeply enough to know which edge cases will actually occur and which are theoretical. You design verification strategies that cover the behaviour space. You maintain systems over years as requirements shift.<p>I'm not sure why he thinks current LLM technologies (with better training) won't be able to do more and more of this as time passes.</p>
]]></description><pubDate>Sun, 25 Jan 2026 18:08:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46756455</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46756455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46756455</guid></item><item><title><![CDATA[New comment by philipswood in "Seeing Geologic Time: Exponential Browser Testing"]]></title><description><![CDATA[
<p>Yup, I also couldn't figure it out after scrolling and skimming two or three pages.<p>Some understandable short sentence or paragraph early on needs to answer the main question the title raises.</p>
]]></description><pubDate>Tue, 13 Jan 2026 04:45:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46597406</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46597406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46597406</guid></item><item><title><![CDATA[New comment by philipswood in "I used Lego to design a farm for people who are blind – like me"]]></title><description><![CDATA[
<p>In a blind culture there probably are no guns at all - so your hypothetical sighted-person-amongst-the-blind would need to be able to make his own.<p>Then again, just throwing rocks might be pretty effective.</p>
]]></description><pubDate>Fri, 09 Jan 2026 05:23:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46550341</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46550341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46550341</guid></item><item><title><![CDATA[New comment by philipswood in "Two AI Agents Walk into a Room"]]></title><description><![CDATA[
<p>Um...<p>> This experiment was inspired by @swyx’s tweet about Ted Chiang’s short story “Understand” (1991). The story imagines a superintelligent AI’s inner experience—its reasoning, self-awareness, and evolution. After reading it and following the Hacker News discussion, ...<p>Umm...
I <3 love <3 Understand by Ted Chiang,
But the story is about superintelligent *humans*.<p>Like Tatja Grimm's World or the movie Limitless.<p>PS:
Referenced tweet for the interested:
<a href="https://x.com/swyx/status/2006976415451996358" rel="nofollow">https://x.com/swyx/status/2006976415451996358</a><p>Ted C</p>
]]></description><pubDate>Sun, 04 Jan 2026 11:37:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46487032</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46487032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46487032</guid></item><item><title><![CDATA[New comment by philipswood in "[dead]"]]></title><description><![CDATA[
<p>I think he's trying to write fiction, not non-fiction!
:p</p>
]]></description><pubDate>Sat, 03 Jan 2026 21:33:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46481829</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46481829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46481829</guid></item><item><title><![CDATA[New comment by philipswood in "LLMs will never be alive or intelligent"]]></title><description><![CDATA[
<p>I'm glad the author spent some time thinking about this, clarifying his thoughts and writing them down, but I don't think he's written anything much worth reading yet.<p>He's mostly in very-confident-but-not-even-wrong territory here.<p>One comment on his note:<p>> As an example, let’s say an LLM is correct 95% of the time (0.95) in predicting the “right” tokens to drive tools that power an “agent” to accomplish what you’ve asked of it. Each step the agent has to take therefore has a probability of being 95% correct. For a task that takes 2 steps, that’s a probability of 0.95^2 = 0.9025 (90.25%) that the agent will get the task right. For a task that takes 30 steps, we get 0.95^30 = 0.2146 (21.46%). Even if the LLMs were right 99% of the time, a 30-step task would only have a probability of about 74% of having been done correctly.<p>The main point - that errors can accumulate over sequential steps of a task and that this needs to be handled - is valid and pertinent, but the model used to "calculate" this is quite wrong: steps don't fail independently of one another.<p>Given that actions can depend on the outcomes of previous steps, and given that we only care about final outcomes and not intermediate failing steps, errors can be corrected. Thus even steps that "fail" can still lead to success.<p>(This is not a Bernoulli process.)<p>I think he's referencing some nice material, and he's starting in a good direction by defining agency as goal-directed behaviour, but otherwise his confidence far outstrips the firmness of his conceptual foundations or the clarity of his deductions.</p>
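<p>A toy simulation of why the Bernoulli math breaks down (my own sketch - the per-step retry is a hypothetical stand-in for whatever error-correction the agent actually does):<p><pre><code>import random

random.seed(0)
P_STEP = 0.95     # the article's per-step success rate
STEPS = 30
RETRIES = 2       # hypothetical: agent notices a failure and retries
TRIALS = 100_000

def task_succeeds():
    for _ in range(STEPS):
        # A step counts as done if any of its attempts succeeds.
        if not any(random.random() < P_STEP for _ in range(1 + RETRIES)):
            return False
    return True

naive = P_STEP ** STEPS
corrected = sum(task_succeeds() for _ in range(TRIALS)) / TRIALS
print(f"independent-failure model: {naive:.4f}")      # ~0.2146
print(f"with per-step correction:  {corrected:.4f}")  # ~0.996
</code></pre><p>Once a failed step can be caught and retried, the 30-step task stops looking hopeless.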
]]></description><pubDate>Sat, 03 Jan 2026 21:20:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46481710</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46481710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46481710</guid></item><item><title><![CDATA[New comment by philipswood in "Gitmal – a static pages generator for Git repos"]]></title><description><![CDATA[
<p>Cool!<p>I've wanted something like this for a while, to use with architectural PlantUML diagrams rendered to SVG with hyperlinks to their implementations.<p>I'll give it a spin.</p>
]]></description><pubDate>Tue, 02 Dec 2025 11:16:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46120093</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46120093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46120093</guid></item><item><title><![CDATA[New comment by philipswood in "AWS Amplify Is a Joke"]]></title><description><![CDATA[
<p>LOL.
You're not spoiling the punchline!</p>
]]></description><pubDate>Sat, 29 Nov 2025 17:01:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46089021</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46089021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46089021</guid></item><item><title><![CDATA[New comment by philipswood in "LLM assisted book reader by Karpathy"]]></title><description><![CDATA[
<p>Agreed, but a quiz on the superficial details of what you've read, plus a detailed discussion grounded in the text, can really help to cement the book in your mind.<p>I've experimented with sessions that start with prompts like:<p>> Hi, Please act as a tutor. I will act as a student. I’m working through the following text of X by Y - trying to engage with it more deeply. I'm specifically trying to let active recall clarify and consolidate my long term memory of it. I also want to make sure the ideas are connected in my memory with my existing concepts and concept maps. So please ask me questions from the text given. Present me with just the questions and then allow me to give answers. Discuss my answer - justify your responses from the text.<p>and found it generally helpful.</p>
]]></description><pubDate>Mon, 24 Nov 2025 14:57:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46034831</link><dc:creator>philipswood</dc:creator><comments>https://news.ycombinator.com/item?id=46034831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46034831</guid></item></channel></rss>