<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: samuellevy</title><link>https://news.ycombinator.com/user?id=samuellevy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 21:23:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=samuellevy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by samuellevy in "Show HN: Open Source TailwindCSS UI Components"]]></title><description><![CDATA[
<p>Funny that you mention "accessible"... Because most of these components are anything but.<p>Modern HTML and CSS are awesome tools on their own, and can do so much without relying on massive JavaScript bundles, but you still end up with component libraries that are &lt;div&gt;&lt;div&gt;&lt;div&gt;&lt;div&gt; all the way down.</p>
]]></description><pubDate>Wed, 17 Apr 2024 13:12:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=40064107</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=40064107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40064107</guid></item><item><title><![CDATA[New comment by samuellevy in "Don’t Use VPN Services (2015)"]]></title><description><![CDATA[
<p>That's covered by the very first line of the main body of the article:<p>> Because a VPN in this sense is just a glorified proxy.</p>
]]></description><pubDate>Wed, 16 Aug 2023 09:31:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=37144655</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=37144655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37144655</guid></item><item><title><![CDATA[New comment by samuellevy in "Smart Contract Security Field Guide"]]></title><description><![CDATA[
<p>More than just needing an oracle - the keys and the house are both physical items. There's not really any practical way for a contract on the blockchain to validate that a particular physical item is in fact the item that it purports to be.<p>Are these ACTUALLY the keys to this house? Are they the only set? The original set? Were the locks changed, and this set in the contract is no longer valid?<p>Then putting aside all of that... How do you ENFORCE a "smart contract"? Probably through... Existing contract law. Because that's what it's there for. Smart contracts are just more convoluted paper, and we can do that already with DocuSign or any number of other digital contract options - all of which provide, so far as I can tell, precisely the same level of verification that a smart contract does. The only "advantage" of a smart contract over those platforms is that the history of the "document" is more or less baked into the chain, instead of trusting that the third party platform hasn't modified it... Which they will never have any motivation to do...<p>People have been initialing pages to mark them as read/accepted for more years than I've been alive. In the event of a contract dispute, smart contract or not, it's going to be up to a third party (mediator, judge, etc.) to decide on resolution anyway... At which point even the exact wording of the contract may well be discarded as being unenforceable because _contracts are not above the law_.</p>
]]></description><pubDate>Wed, 26 Jul 2023 23:21:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=36886549</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36886549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36886549</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Problems for the next decade?"]]></title><description><![CDATA[
<p>How big a problem are "snake bite victims"? I live in Australia, one of those countries that people class as "dangerous" with regards to snakes, and... there have only been about 40 deaths in the past 20 years.<p>It's just a really strange thing to flag as a "big problem to solve", and such a bizarrely expensive solution to the problem, too. I think a much better solution would be improving the development of, and access to, antivenin.</p>
]]></description><pubDate>Sun, 09 Jul 2023 06:26:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=36652118</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36652118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36652118</guid></item><item><title><![CDATA[New comment by samuellevy in "As a therapist, I know what’s breaking couples up"]]></title><description><![CDATA[
<p>I <i>did</i> read the article, and it only vaguely describes the "problem" with smartphones in one paragraph; the rest of the article talks about the effects of the pandemic. The problem it attributes to smartphones is that people get distracted, then expect instantaneous communication, while relationships require time and attention...<p>But the thing they're blaming smartphones for is nothing new. Communication has always been difficult, and before people had their heads buried in smartphones, they had them buried in TV, or newspapers/magazines/books, or they simply went to bars/pubs.<p>The whole article is a vague "hot take": an opinion backed by zero research or evidence beyond "I'm a couples therapist, trust me, I know."</p>
]]></description><pubDate>Sun, 09 Jul 2023 05:49:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=36651932</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36651932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36651932</guid></item><item><title><![CDATA[New comment by samuellevy in "Why I Hate Frameworks (2005)"]]></title><description><![CDATA[
<p>They have strict specifications about the types of nails and wood that they can hammer, but they're not documented anywhere. If you send them the wrong type of nails or wood, they'll put them both into an industrial shredder and send you back the dust, because technically the nails have now been integrated with the wood.<p>Their free plan will let you hammer in 5 nails per month into a single piece of wood, but you can't use a different piece of wood each month. For $30/month you get 50 nails, and up to 5 pieces of wood, or for $60/month you can get 120 nails and unlimited pieces of wood, and two-factor (they'll call you before they hammer in the nails, and ask where you actually want the nails hammered). If you want to have unlimited nails, you have to contact them for enterprise pricing.<p>They will also sell the measurements of your wood and the nail positions to other carpenters.</p>
]]></description><pubDate>Fri, 07 Jul 2023 22:19:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=36639026</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36639026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36639026</guid></item><item><title><![CDATA[New comment by samuellevy in "Why I Hate Frameworks (2005)"]]></title><description><![CDATA[
<p>Ok.<p>I don't like React.</p>
]]></description><pubDate>Fri, 07 Jul 2023 22:06:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=36638901</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36638901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36638901</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Could you share your personal blog here?"]]></title><description><![CDATA[
<p><a href="https://blog.samuellevy.com/" rel="nofollow noreferrer">https://blog.samuellevy.com/</a> - I haven't posted in a few years, and I really need to upgrade it/clean up everything. It's not remotely mobile friendly.<p>I've had a few relatively popular posts over the years:<p><a href="https://blog.samuellevy.com/post/41-php-is-the-right-tool-for-the-job-for-all-the-wrong-reasons.html" rel="nofollow noreferrer">https://blog.samuellevy.com/post/41-php-is-the-right-tool-fo...</a>
A kind of response to a certain post about PHP that still makes the rounds...<p><a href="https://blog.samuellevy.com/post/46-do-i-look-like-i-give-a-shit-public-license.html" rel="nofollow noreferrer">https://blog.samuellevy.com/post/46-do-i-look-like-i-give-a-...</a>
"Do I Look Like I Give A Shit Public Licence", an alternative to the WTFPL.</p>
]]></description><pubDate>Thu, 06 Jul 2023 09:02:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=36613180</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36613180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36613180</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Are people in tech inside an AI echo chamber?"]]></title><description><![CDATA[
<p>Ehh... my dog is alive, thinks, and "speaks" in a manner - not a cute term for barking, but he communicates (with relatively high effectiveness) his wants and desires. Maybe not using human words, but he certainly has his own sort of crude language, as does my cat.<p>The problem is that LLMs aren't alive, and they _don't think_. The speaking is arguable.</p>
]]></description><pubDate>Mon, 03 Jul 2023 11:44:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=36571578</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36571578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36571578</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Are people in tech inside an AI echo chamber?"]]></title><description><![CDATA[
<p>That's kind of the point, but also kind of not.<p>GPT isn't making true or false outputs. It's just making outputs. The truthiness or falseness of any output is irrelevant because it has no concept of true or false. We're assigning those values to the outputs ourselves, but like... it doesn't know the difference.<p>It's like blaming a die for a high or a low roll - it's just doing rolls. It has no knowledge of a good or a bad roll. GPT is like a Rube Goldberg machine for rolling dice that's _more likely_ to roll the number that you want, but really it's just rolling dice.</p>
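The die analogy can be made concrete in a few lines of Python (a toy sketch, not how GPT is implemented; the faces and weights here are invented): a loaded die has no notion of a "good" roll, it just samples from its weights.

```python
import random

# A toy "loaded die": the weights bias the outcome toward 6,
# but the machinery has no concept of a good or bad roll.
faces = [1, 2, 3, 4, 5, 6]
weights = [1, 1, 1, 1, 1, 10]  # hypothetical bias toward 6

def roll():
    # Sample one face according to the weights -- nothing more.
    return random.choices(faces, weights=weights, k=1)[0]

rolls = [roll() for _ in range(10_000)]
# The bias shows up only in the frequencies, not in any "knowledge":
print(rolls.count(6) / len(rolls))  # roughly 10/15 of rolls
```

The point of the sketch: "wanting" a six lives entirely in the person who chose the weights, never in the sampling loop itself.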
]]></description><pubDate>Mon, 03 Jul 2023 11:17:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=36571365</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36571365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36571365</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Are people in tech inside an AI echo chamber?"]]></title><description><![CDATA[
<p>Nah, my issue with both terms is that they imply that when the answer is "correct" that's because the LLM "knows" the correct answer, and when it's wrong it's just a brain fart.<p>It doesn't matter if the output is correct or not, the process for producing it is identical, and the model has the exact same amount of knowledge about what it's saying... which is to say "none".<p>This isn't a case of "it's intelligent, but it gets muddled up sometimes". It's more that it's _always_ muddled up, but accidentally correct a lot of the time.</p>
]]></description><pubDate>Mon, 03 Jul 2023 11:07:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=36571292</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36571292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36571292</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Are people in tech inside an AI echo chamber?"]]></title><description><![CDATA[
<p>Yep, the "Chinese room" is the classic thought experiment, but I feel like it fails to get the point across because the characters still represent language, so you could conceivably "learn" the language. I prefer the idea of symbols that aren't inherently language, as it really nails in the idea that no matter how long you spend, it's not something you can ever learn to "speak" fluently.</p>
]]></description><pubDate>Mon, 03 Jul 2023 10:57:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=36571218</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36571218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36571218</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Are people in tech inside an AI echo chamber?"]]></title><description><![CDATA[
<p>I don't really know anything about AlphaGo. There are more types of "AI" than LLMs, but that's not really the point. You don't need AI for people to lose their jobs... but nobody is losing their jobs to AlphaGo, and in the grand scheme of things it's unlikely that people are going to lose their jobs to GPT, either.</p>
]]></description><pubDate>Mon, 03 Jul 2023 10:21:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=36570995</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36570995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36570995</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Are people in tech inside an AI echo chamber?"]]></title><description><![CDATA[
<p>So humans have a level of knowledge, understanding, and reasoning ability that LLMs simply don't have. I'm writing a response to you right now, and I "know" a certain amount of information about the world. That knowledge has limits, and I can expand it, I can forget it, all sorts of things...<p>"Hallucination" is a term that works well for actual intelligence - when you "know" something that isn't true, and has no path of reasoning, you might have hallucinated the base "knowledge".<p>But that doesn't really work for LLMs, because there's no knowledge at all. All they're doing is picking the next most likely token based on the probabilities. If you interrogate something that the training data covers thoroughly, you'll get something that is "correct", and that's to be expected because there's a lot of probabilities pointing to the "next token" being the right one... but as you get to the edge of the training data, the "next token" is less likely to be correct.<p>As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares. None of them have meaning to you, they're just colours and shapes in random-seeming sequences, but there's a frequency to them. "Red circle, blue square, green triangle" is a much more common sequence than "red circle, blue square, black triangle", so if someone hands you a piece of paper with "red circle, blue square", you can reasonably guess that what they want back is a green triangle.<p>Expand the model a bit more, and you notice that "rc bs gt" is pretty common, but if there's a yellow square a few symbols before with anything in between, then the triangle is usually black. Thus the response to the sequence "red circle, blue square" is usually "green triangle", but "black circle, yellow square, grey circle, red circle, blue square" is modified by the yellow square, and the response is "black triangle"...
but you still don't know what any of these things _mean_.<p>When you get to a sequence that isn't covered directly by the training data, you just follow the process with the information that you _do_ have. You get "red triangle, blue square" and while you've not encountered that sequence before, "green" _usually_ comes after "red, blue", and "circle" is _usually_ grouped with "triangle, square", so a reasonable response is "green circle"... but we don't know, we're just guessing based on what we've seen.<p>That's the thing... the process is exactly the same whether the sequence has been seen before or not. You're not _hallucinating_ the green circle, you're just picking based on probabilities. LLMs are doing effectively this, but at massive scale with an unthinkably large dataset as training data. Because there's so much data of _humans talking to other humans_, ChatGPT has a lot of probabilities that make human-sounding responses...<p>It's not an easy concept to get across, but there's a fundamental difference between "knowing a thing and being able to discuss it" and "picking the next token based on the probabilities gleaned from inspecting terabytes of text, without understanding what any single token means"</p>
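The shapes-and-colours thought experiment above is essentially a bigram model, and can be sketched in a few lines of Python (a toy sketch with an invented corpus; real LLMs condition on far more context than one preceding symbol):

```python
import random
from collections import Counter, defaultdict

# Toy training "corpus" of symbol sequences (invented for illustration).
corpus = [
    ["red-circle", "blue-square", "green-triangle"],
    ["red-circle", "blue-square", "green-triangle"],
    ["yellow-square", "red-circle", "blue-square", "black-triangle"],
]

# Count how often each symbol follows each other symbol (a bigram table).
follows = defaultdict(Counter)
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        follows[prev][nxt] += 1

def next_symbol(prev):
    """Pick the next symbol purely from observed frequencies.

    The model has no idea what any symbol *means* -- it only
    knows which symbols tended to follow which.
    """
    counts = follows[prev]
    symbols = list(counts)
    return random.choices(symbols, weights=list(counts.values()))[0]

print(next_symbol("blue-square"))  # usually "green-triangle" (2 of 3 observed continuations)
```

The process is identical whether the continuation happens to be "correct" or not - which is exactly the point being made about LLMs, just at an unthinkably larger scale.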
]]></description><pubDate>Mon, 03 Jul 2023 10:06:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=36570893</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36570893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36570893</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Are people in tech inside an AI echo chamber?"]]></title><description><![CDATA[
<p>There's definitely a few echo chambers around AI, but it's certainly not something that "just techies" are onto.<p>ChatGPT made some waves at the end of last year. My in-laws wanted to talk to (at) me about it at Christmas. There's plenty of awareness outside of tech circles, but most of the discussion (both in and out of the tech world) seems to miss what LLMs actually _are_.<p>The reason ChatGPT was impressive to me wasn't the "realism" of the responses... It was how quickly it could classify and chain inputs/outputs. It's super impressive tech, but like... It's not AI. As accurate as it may ever seem, it's simply not aware of what it's saying. "Hallucinations" is a fun term, but it's not hallucinating information, it's just guessing at the next token to write, because that's all it ever does.<p>If it was "intelligent" it would be able to recognise a limitation in its knowledge and _not_ hallucinate information. But it can't. Because it doesn't know anything. Correct answers are just as hallucinatory as incorrect answers, because it's the exact same mechanism that produces them - there's just better probabilities.</p>
]]></description><pubDate>Mon, 03 Jul 2023 04:03:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=36568633</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36568633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36568633</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Are people in tech inside an AI echo chamber?"]]></title><description><![CDATA[
<p>You haven't seen them already? The "AI Lawyer", all of the people trying to sell LLMs as search engines, and just generally hundreds of projects that are outright dangerous uses of LLMs but seem like they might be feasible.</p>
]]></description><pubDate>Mon, 03 Jul 2023 03:47:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=36568526</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36568526</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36568526</guid></item><item><title><![CDATA[New comment by samuellevy in "How to Do Great Work"]]></title><description><![CDATA[
<p>There's an interview with Bo Burnham where he puts it clearly...<p>> Don't listen to people who just got very lucky. Taylor Swift telling you to "follow your dreams" is like a lottery winner saying "liquidise your assets, buy Powerball tickets. It works!"<p>And that's the thing. Skill and talent are important, but there's a certain amount of success that's only achievable through luck, or through starting from _so far ahead_ that it's just genuinely out of reach for us mere mortals.<p>Is the experience of those people irrelevant? No, but it's also not actually applicable to most other people.</p>
]]></description><pubDate>Sun, 02 Jul 2023 11:48:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=36560544</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=36560544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36560544</guid></item><item><title><![CDATA[New comment by samuellevy in "Go with PHP"]]></title><description><![CDATA[
<p>0.10 seconds</p>
]]></description><pubDate>Thu, 11 May 2023 19:42:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=35907404</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=35907404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35907404</guid></item><item><title><![CDATA[New comment by samuellevy in "Ask HN: Inherited the worst code and tech team I have ever seen. How to fix it?"]]></title><description><![CDATA[
<p>Yeah, there's a process. It's something that I've done a bunch of times for a bunch of clients.<p>There's so much low-hanging fruit there that's so easy to fix _right now_. No version control? Good news! `git init` is free! PHPCS/PHP-CS-Fixer can normalise a lot, and is generally pretty safe (especially once you have git). Yeah, it's overwhelming, but OP said that the software is already making millions - you don't wanna fuck with that.<p>I've done it, I've written about it, I've given conference talks about it. The real bonus for OP is that the team is small, so there are only a few people to fight over it. It's pretty easy to show how things will be better, but remember that the team is going to resist deleting code not because they're unaware that it's bad, but because they're afraid of jeopardising whatever stability they've found.</p>
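The first steps above can be sketched as a handful of shell commands (a sketch only - the tool is the PHP-CS-Fixer named in the comment, and the `src/` path is a placeholder for wherever the code actually lives):

```shell
# Put the existing code under version control before touching anything.
git init
git add -A
git commit -m "Baseline: code exactly as inherited"

# Add the code-style fixer as a dev dependency (PHP project assumed).
composer require --dev friendsofphp/php-cs-fixer

# Preview the changes first; --dry-run makes no modifications.
./vendor/bin/php-cs-fixer fix --dry-run --diff src/

# Once the diff looks safe, apply it as its own commit.
./vendor/bin/php-cs-fixer fix src/
git commit -am "Normalise code style with PHP-CS-Fixer"
```

The order matters: the baseline commit means every later step - including the fixer run itself - is trivially revertible.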
]]></description><pubDate>Sun, 18 Sep 2022 10:24:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=32886210</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=32886210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32886210</guid></item><item><title><![CDATA[New comment by samuellevy in "I still love PHP and JavaScript"]]></title><description><![CDATA[
<p>> silently take a null instead of a string<p>Oh, so you _haven't_ used any of the recent versions of PHP, then. You're just talking shit with no actual recent experience. Gotcha. Well, thanks for your input.</p>
]]></description><pubDate>Wed, 03 Aug 2022 12:43:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=32330931</link><dc:creator>samuellevy</dc:creator><comments>https://news.ycombinator.com/item?id=32330931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32330931</guid></item></channel></rss>