<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ryan29</title><link>https://news.ycombinator.com/user?id=ryan29</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 17:51:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ryan29" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ryan29 in "AI slop is killing online communities"]]></title><description><![CDATA[
<p>I'm not a crypto person, but I was intrigued by Chia.  They generate their coins based on allocating disk space.  So if you have a bit of free space, you can fill it with plots and play the lotto.<p>The intriguing part is that I think it works against scaling.  The incremental cost for me to use the 500GB of free space on my disk is $0, but someone scaling a bot farm has to buy all their space.<p>Real people tend to have a lot more idle capacity than optimized, scaled businesses, so any kind of proof of idle capacity seems like it would disadvantage bot farms.<p>I’ve also thought that proof of collateral spending would be a good system. For example, you buy groceries and the store gives you a token saying you spent $X of real world money.  Those tokens help show you're not a bot.  Keeping that system honest and equitable would be extremely difficult though.<p>Maybe schools could give kids tokens for attendance.  It sounds kind of dumb, but who knows.</p>
]]></description><pubDate>Thu, 07 May 2026 20:11:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=48054243</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=48054243</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48054243</guid></item><item><title><![CDATA[New comment by ryan29 in "Anchor Engine – deterministic semantic memory for LLMs, <1GB RAM runs on a phone"]]></title><description><![CDATA[
<p>I think this is going on my list of things I want to try.  I have some feedback, but need to qualify it with a warning that I've barely used any AI beyond simple chatbots.  This is going to be the opposite of the feedback that silentsvn gave you, meaning I have no idea what I'm talking about :-)<p>TL;DR: You need a "how to use it" section that explains how to get information in and out of the context.  That's assuming I'm not completely misunderstanding the purpose.<p>I started using Claude Code about a week ago, but my goal is to get something running locally that can help me get things accomplished.  I'm skeptical of the claims that AI can do the work for us, but I'm interested in the idea that we can offload a bunch of cognitive load onto it, freeing up brain space for the actual problems we're trying to solve.  Some kind of memory system is the starting point IMO.<p>So here's my feedback.  I skimmed the repo.  You explain what it does and how it does it, but I still have no idea what it does or how it does it.  I think your explanations are too technical for people to understand <i>why</i> they'd want something like this and the example makes it look like a simple search engine.  I think you need more of an explain-it-like-I'm-five approach.  I might know enough to be the 5-year-old in the conversation, so I'll explain a few issues I've been having and maybe you can tell me if / how your tool helps.<p>Most of this is in the context of using Claude Code.<p>I noticed the amnesia problem immediately, but expected it.  I figured I'd need to take a couple of days to configure the system to remember things and adhere to my preferences, but now I realize that was wildly optimistic.  Regardless, I started making a very naive system that uses markdown files with the goal of getting a better understanding of managing memory and context together.  It tries to limit the current context, but it's naive.  It walks a hierarchy and dumps things into the context.
It's just for me to learn.  I'll be happy if it helps me understand enough to pick a good tool that already exists.<p>The first big problem I hit was that I want what you describe as compounds, mainly chat exports, especially as I'm starting out and just want to "dump" information somewhere.  I want <i>all</i> my chat history as I'm learning something.  I had a big <i>ah-ha</i> moment when  I asked Claude to write our conversation to a markdown file and it told me it couldn't, but offered to output a summary.  I'm losing information in real time as I chat.  I don't know if it's valuable or not because I don't know enough to know what I don't know.<p>I've been getting the most value from chatting with the AI to learn and plan things.  That involves a lot of ideas, right or wrong, and I want to be able to save and retrieve those chats verbatim so I can get back to the <i>exact</i> same context in the future.  I don't know if that's a good or bad idea, but I figure that, if I can retrieve the original context, I can always have the AI summarize it or have it help me create something more well structured once I understand the topic a bit better.  I also think there's probably some value in having a future model re-evaluate that old context.  For example, in the future I can start it with the current refined context (how I implemented things) and have it walk through all that old context to see if there are any novel ideas that might help to solve existing issues.<p>I'm assuming your spec documents are followed by the AI when working on the project.  Is that right?  If so, I wonder if you're underselling that by not giving an ELI5 example of how that works.  For me, that's a hard problem to solve.  I want a semantic search for rules the model needs to apply but I don't <i>really</i> want it to be semantic because they're rules that must be applied.  
I need to be able to ask "why isn't the tool following my docker compose spec" and need a deterministic way to answer that.  I think your project does that.<p>Maybe I'm simply lacking knowledge and should be able to understand why I need this kind of tool and, more importantly, how it maps to context management (assuming that's what it does).<p>I'll give you an analogy, or at least one that applies to me.  Your "how it works" section is like going to driver training and having the instructor start explaining how the car's engine and transmission are built.  People like me need it dumbed down; "Push the gas and turn the wheel.  It's faster than your bicycle."<p>Maybe I'm not the target audience yet, but maybe I am.  I'm already convinced that AI with good memory management is useful.  I'm also unwilling to build that memory using a commercial system like Claude or ChatGPT.  It's vendor lock-in on the level of getting a lobotomy if you lose access to that system and I don't think people are doing a good job of assessing that risk.<p>I'm going to finish building my own crappy memory system and then yours is going to be the first real system I try.  Thanks for sharing it.</p>
]]></description><pubDate>Thu, 12 Mar 2026 23:14:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47358562</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=47358562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47358562</guid></item><item><title><![CDATA[New comment by ryan29 in "Spotify: Our best developers haven't written a single line of code since Dec"]]></title><description><![CDATA[
<p>I wonder how it works and how much heavy lifting "supervising" is doing.  Whenever I try to use AI, the outcome is about the same.<p>It's good at non-critical things like logging or brute force debugging where I can roll back after I figure out what's going on.  If it's something I know well, I can coax a reasonable solution out of it.  If it's something I don't know, it's easy to get it hallucinating.<p>It <i>really</i> goes off the rails once the context gets some incorrect information and, for things that I don't understand thoroughly, I always find myself poisoning the context by asking questions about how things work.  Tools like the /ask mode in Aider help and I suspect it's a matter of learning how to use the tooling, so I keep trying.<p>I'd like to know if AI is writing code their best developers couldn't write on their own or if it's only writing code they could write on their own because that has a huge impact on efficiency gains, right?  If it can accelerate my work, that's great, but there's still a limit to the throughput which isn't what the AI companies are selling.<p>I do believe there are gains in efficiency, especially if we can have huge contexts the AI can recall and explain to us, but I'm extremely skeptical of who's going to own that context and how badly they're going to exploit it.  There are significant risks.<p>If someone can do the work of 10 people with access to the lifetime context of everyone that's worked on a project / system, what happens if that context / AI memory gets taken away?  In my opinion, there needs to be a significant conversation about context ownership before blindly adopting all these AI systems.<p>In the context of Spotify in this article, who owns the productivity increase?  Is it Spotify, Anthropic, or the developers?  Who has the most leverage to capture the gains from increasing productivity?</p>
]]></description><pubDate>Thu, 12 Feb 2026 21:44:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46995692</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=46995692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46995692</guid></item><item><title><![CDATA[New comment by ryan29 in "Show HN: Recall: Give Claude memory with Redis-backed persistent context"]]></title><description><![CDATA[
<p>Who should own the context?<p>Imagine having 20 years of context / memories and relying on them.  Wouldn't you want to own that?  I can't imagine pay-per-query for my real memories and I think that allowing that for AI assisted memory is a mistake.  A person's lifetime context will be irreplaceable if high quality interfaces / tools let us find and load context from any conversation / session we've ever had with an LLM.<p>On the flip side of that, something like a software project should own the context of every conversation / session used during development, right?  Ideally, both parties get a copy of the context.  I get a copy for my personal "lifetime context" and the project or business gets a copy for the project.  However, I can't imagine businesses agreeing to that.<p>If LLMs become a useful tool for assisting memory recall there's going to be fighting over who owns the context / memories and I worry that normal people will lose out to businesses.  Imagine changing jobs and they wipe a bunch of your memory before you leave.<p>We may even see LLM context ownership rules in employment agreements.  It'll be the future version of a non-compete.</p>
]]></description><pubDate>Wed, 08 Oct 2025 16:47:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45518138</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=45518138</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45518138</guid></item><item><title><![CDATA[New comment by ryan29 in "Linkwarden: FOSS self-hostable bookmarking with AI-tagging and page archival"]]></title><description><![CDATA[
<p>I'd be interested to hear your thoughts on having a PWA vs regular mobile apps since it looks like you started with a PWA, but are moving to regular apps.  Is that just a demand / eyeballs thing or were there technical reasons?</p>
]]></description><pubDate>Thu, 01 May 2025 21:10:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43863411</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=43863411</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43863411</guid></item><item><title><![CDATA[New comment by ryan29 in "Linkwarden: FOSS self-hostable bookmarking with AI-tagging and page archival"]]></title><description><![CDATA[
<p>I've used <a href="https://historio.us" rel="nofollow">https://historio.us</a> since 2011 and still pay for it to keep access to all the pages I've archived over the years.  The price has been kept low enough that I can't bring myself to cancel it even though I've been using self-hosted <a href="https://archivebox.io/" rel="nofollow">https://archivebox.io/</a> for the last few years.<p>I always include an archived link whenever I reference something in documentation.  That's my main use at the moment.<p>However, I also feel like I've gotten a lot of really good value when trying to learn a new development topic.  Whenever I find something that looks like it <i>might</i> be useful, I archive it and, because everything is searchable, I end up with a searchable index of really high quality content once I actually know what I'm doing.<p>I find it hard to rediscover content via web search these days and there's so much churn that having a personal archive of useful content is going to increase in value, at least in my opinion.</p>
]]></description><pubDate>Thu, 01 May 2025 20:58:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43863274</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=43863274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43863274</guid></item><item><title><![CDATA[New comment by ryan29 in "Zoom outage caused by accidental 'shutting down' of the zoom.us domain"]]></title><description><![CDATA[
<p>The exact pricing isn’t disclosed.  All they do is tell you the price will be “higher”.  Anyone registering a premium domain is getting higher than uniform renewal pricing, so whatever they’re doing right now is considered adequate and that’s just generic ToS in the registration agreement AFAIK.<p>It sounds like you think I’m being deceptive.  Do you know about any registry premium domains where someone has a contractually guaranteed price?<p>Also, based on my own anecdotal experience, ICANN doesn’t interpret 2.10c broadly and they allow the registries to push the boundaries as much as they want.</p>
]]></description><pubDate>Thu, 17 Apr 2025 23:13:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=43723221</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=43723221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43723221</guid></item><item><title><![CDATA[New comment by ryan29 in "Zoom outage caused by accidental 'shutting down' of the zoom.us domain"]]></title><description><![CDATA[
<p>> .com itself is under jurisdiction of USA and operated by Verisign<p>Barely.  The NTIA gave up all their leverage over .com in 2018.  The only thing the US can do at this point is let the cooperative agreement auto-renew to limit price increases.<p>I wouldn't be surprised if the US withdrew from the agreement altogether at this point.  Then .com would fall under the joint control of ICANN and Verisign.</p>
]]></description><pubDate>Thu, 17 Apr 2025 16:38:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43719234</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=43719234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43719234</guid></item><item><title><![CDATA[New comment by ryan29 in "Zoom outage caused by accidental 'shutting down' of the zoom.us domain"]]></title><description><![CDATA[
<p>It's going to be interesting to see what they do.  One of the core arguments when claiming the domain industry enjoys a competitive market is that switching costs are bearable and that switching TLDs is an option if registries increase prices too much.<p>So ICANN has a non-trivial choice to make.  Either they maintain the position that switching costs are bearable and let .io disappear, or they admit that TLD switching is impossible and save .io, which will make it hard to argue that the threat of registrants switching TLDs keeps the industry competitive.</p>
]]></description><pubDate>Thu, 17 Apr 2025 16:26:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=43719091</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=43719091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43719091</guid></item><item><title><![CDATA[New comment by ryan29 in "Zoom outage caused by accidental 'shutting down' of the zoom.us domain"]]></title><description><![CDATA[
<p>> They can't target popular domains for discriminatory pricing.<p>That's not completely accurate.  Section 2.10c of the base registry agreement says the following in relation to the uniform pricing obligations:<p>> The foregoing requirements of this Section 2.10(c) shall not apply for (i) purposes of determining Renewal Pricing if the registrar has provided Registry Operator with documentation that demonstrates that the applicable registrant expressly agreed in its registration agreement with registrar to higher Renewal Pricing at the time of the initial registration<p>Most registrars have blanket statements in their registration agreement that say premium domains may be subject to higher renewal pricing.  For registry premium domains, there are no contractual limits on pricing or price discrimination.  AFAIK, the registries can price premium domains however they want.</p>
]]></description><pubDate>Thu, 17 Apr 2025 16:14:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43718900</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=43718900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43718900</guid></item><item><title><![CDATA[New comment by ryan29 in "ATProto and the ownership of identity"]]></title><description><![CDATA[
<p>I think that once you have domains as an identity, you can solve a lot of problems with the idea of 'just add money'.  If $1000 gets me a gold check mark, it changes the economics of impersonation.  Is it worth it to spend $1000 to get a gold check mark on 'goog1e.com' if a brand monitoring system is going to get that moderated out of existence in a couple of hours?<p>That's also why domain verification systems need to have continuous re-validation with more frequent re-validation for new identities.  For example, if '@goog1e.com' is a new identity, it should be re-validated after 1h, 4h, 8h, 16h (up to a maximum).  Additionally, you could let other validated users with aged accounts trigger a re-validation (with shared rate limits for a target domain).<p>The great thing about domains is that those of us that are good faith participants can build a ton of value on them and that value can be used as a signal for trustworthiness.  The hard part is conveying that value to regular users in a way that's simple to understand.<p>We could also have systems that use some type of collateral attestation.  For example, if I donate $1000 to the EFF, maybe I could attribute that donation to my domain 'example.com' and the EFF could attest to the fact that I've spent $1000 in the name of 'example.com'.<p>You probably have to gate that through some type of authority, but I can imagine a system where domain registrars could do that.  I would love to buy reputation from my registrar by donating money to charity.</p>
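<p>As an aside from the editor: the escalating re-validation schedule described above (1h, then 4h, doubling up to a maximum) can be sketched in a few lines of Python.  The function name and the cap parameter are invented for illustration; only the 1h/4h/8h/16h values come from the comment.

```python
def revalidation_delays(cap_hours=24):
    # Escalating re-check intervals (in hours) for a newly verified
    # handle: 1h first, then 4h, doubling until an assumed cap.
    delays = [1]
    d = 4
    while d < cap_hours:
        delays.append(d)
        d *= 2
    delays.append(cap_hours)
    return delays

# e.g. with a 24h cap: 1h, 4h, 8h, 16h, then steady-state at 24h
print(revalidation_delays(24))
```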
]]></description><pubDate>Sat, 18 Jan 2025 20:17:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=42751092</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42751092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42751092</guid></item><item><title><![CDATA[New comment by ryan29 in "ATProto and the ownership of identity"]]></title><description><![CDATA[
<p>The platform owners have spent two decades de-emphasizing domains, so it's not too surprising that most people struggle to understand how they work.  I think that can change with education and awareness if domains as identity start to catch on.  It just takes time.<p>For now, I think wider adoption of things like DomainConnect [1] would make a difference.  It works really well to set up an MS365 account with DNS hosted at Cloudflare, but it would need a workflow that supports sending requests to your DNS admin rather than assuming everyone is a DNS admin.<p>> A lot of people do not want to look at and understand domain names, instead they want to see a name and a check mark. They want a central authority to tell them who is trustworthy and who is not.<p>I think 'trustworthy' is a key word there and would add that I think a lot of regular people conflate identity verification with moderation.  It's important to keep those separate because as soon as an identity system becomes a moderation system, it's worthless.<p>That's what makes domains so great for identity, especially with the way the AT protocol works.  It helps to create a clear separation between identity verification and moderation.  Moderation is much harder than identity verification, so having a clear line between the two should make it easier to develop technical systems that perform identity verification.<p>For pure identity verification, I think BIMI [2] is sitting on a solution they don't even realize they have.  They're too tunnel visioned on email verification, but the system they've built with VMC (verified mark certificates) works as a decentralized system of logo verification.  For example, I can tell you this logo [3] is trademarked and owned by 'cnn.com' and I can do it via technical means starting with the domain name:<p><pre><code>    dig default._bimi.cnn.com TXT
</code></pre>
Seeing a 3rd party URL in the TXT value makes me think the implementation is weak since that would be better as a CNAME pointing to a TXT record managed by a 3rd party, but I've never looked into the details enough to know if it'll follow CNAMEs (like ACME or DKIM do).<p>Also, the VMCs are only good for high value brands because CNN is paying DigiCert $1600 / year for the certificate, but, since it's just PKI, it allows anyone to put up that logo with a verified badge on the @cnn.com identity.  A more accurate badge would be the registered trademark symbol [4].<p>Even though that only works for high value brands that own a logomark, it works extremely well and would be a great start to a system that's easier for the average person to understand because logos are a simpler concept than something abstract like domains and no one is spending the time and effort needed to get a fake VMC (if it's even possible).<p>The Bluesky implementation for domain verification has a long way to go though.  It's very naive at the moment and doesn't even do a proper job of dealing with changes in domain ownership.  In fact, almost everyone doing domain validation is doing it wrong because very few implementations do re-validation from what I've seen.<p>1. <a href="https://www.domainconnect.org/" rel="nofollow">https://www.domainconnect.org/</a><p>2. <a href="https://bimigroup.org/" rel="nofollow">https://bimigroup.org/</a><p>3. <a href="https://amplify.valimail.com/bimi/time-warner/I0vDrJpkRnB-cable_news_network_inc2025.svg" rel="nofollow">https://amplify.valimail.com/bimi/time-warner/I0vDrJpkRnB-ca...</a><p>4. <a href="https://en.wikipedia.org/wiki/Registered_trademark_symbol" rel="nofollow">https://en.wikipedia.org/wiki/Registered_trademark_symbol</a></p>
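<p>Editor's note: the TXT record returned by that dig query is a simple tag=value string (`v=BIMI1; l=<logo SVG URL>; a=<evidence/VMC URL>`).  A minimal parser sketch in Python; the sample record below is made up, not CNN's real record.

```python
def parse_bimi(txt):
    # Split "v=BIMI1; l=...; a=..." into a tag -> value dict.
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record for illustration only
record = "v=BIMI1; l=https://example.com/logo.svg; a=https://example.com/vmc.pem"
info = parse_bimi(record)
print(info["l"])  # the advertised logo URL
```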
]]></description><pubDate>Sat, 18 Jan 2025 19:50:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=42750903</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42750903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42750903</guid></item><item><title><![CDATA[New comment by ryan29 in "PyPI Blog: Project Quarantine"]]></title><description><![CDATA[
<p>> since there's no authoritative, authenticated unique name system across indices<p>Domains provide a globally unique namespace and ownership can be verified automatically with domain validation.  Bluesky did an ok job of it, but they didn't do anything to account for domain ownership changes and re-validation is non-existent, which is disappointing to see from the first big adopter since the oversight will eventually invite criticism.<p>I've wanted domain validated namespaces for 5+ years.  Here's a comment I made about using domain validated namespaces in package managers a couple of years ago [1]:<p>---<p>I think one possible solution to that would be to assume namespaces can have their ownership changed and build something that works with that assumption.<p>Think along the lines of having 'pypi.org/example.com' be a redirect to an immutable organization; 'pypi.org/abcd1234'. If a new domain owner wants to take over the namespace they won't have access to the existing account and re-validating to take ownership would force them to use a different immutable organization; 'pypi.org/ef567890'.<p>If you have a package locking system (like NPM), it would lock to the immutable organization and any updates that resolve to a new organization could throw a warning and require explicit approval. If you think of it like an organization lock:<p><pre><code>    v1:

        pypi.org/example.com --> pypi.org/abcd1234

    v2:

        pypi.org/example.com --> pypi.org/ef123456
</code></pre>
If you go from v1 to v2 you know there was an ownership change or, at the very least, an event that you need to investigate.<p>Losing control of a domain would be recoverable because existing artifacts wouldn't be impacted and you could use the immutable organization to publish the change since that's technically the source of truth for the artifacts. Put another way, the immutable organization has a pointer back to the current domain validated namespace:<p><pre><code>    v1:

        pypi.org/abcd1234 --> example.com

    v2:

        pypi.org/abcd1234 --> example.net
</code></pre>
If you go from v1 to v2 you know the owner of the artifacts you want has moved from the domain example.com to example.net. The package manager could give a warning about this and let an artifact consumer approve it, but it's less risky than the change above because the owner of 'abcd1234' hasn't changed and you're already trusting them.<p>---<p>1. <a href="https://news.ycombinator.com/item?id=32754029">https://news.ycombinator.com/item?id=32754029</a></p>
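<p>Editor's note: the locking scheme described above is easy to make concrete.  A toy sketch in Python, with all names and data structures invented for illustration: the lockfile pins each domain namespace to its immutable organization ID, and any change on re-resolution is surfaced for explicit approval.

```python
def check_namespace(lockfile, resolved):
    """Compare a locked domain -> immutable-org mapping against a fresh
    resolution.  Returns (domain, locked_org, current_org) tuples for
    every namespace that now points at a different organization and
    therefore needs explicit approval before updating."""
    changes = []
    for domain, locked_org in lockfile.items():
        current = resolved.get(domain)
        if current != locked_org:
            changes.append((domain, locked_org, current))
    return changes

lock = {"example.com": "abcd1234"}
# Ownership change: example.com now resolves to a different immutable org,
# mirroring the v1 -> v2 transition in the comment above.
print(check_namespace(lock, {"example.com": "ef123456"}))
```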
]]></description><pubDate>Mon, 06 Jan 2025 19:06:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=42614131</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42614131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42614131</guid></item><item><title><![CDATA[New comment by ryan29 in "Pricing software adds billions to rental costs, White House says"]]></title><description><![CDATA[
<p>You can see the same thing starting to happen in the domain industry.  Registries are buying pricing data rather than setting their own prices, so high-value keywords end up having the same price across TLDs that should be competing with each other.</p>
]]></description><pubDate>Fri, 20 Dec 2024 02:17:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=42467674</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42467674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42467674</guid></item><item><title><![CDATA[New comment by ryan29 in "ChatGPT Pro"]]></title><description><![CDATA[
<p>Wow.  I can honestly say I'm surprised it makes that suggestion.  That's great!<p>I don't understand how it gets there though.  How does it "know" that's the right thing to suggest when the majority of the online documentation gets it wrong?<p>I know how I do it.  I read the Docker docs, I see that I don't think publishing that port is needed, I spin up a test, and I verify my theory.  AFAIK, ChatGPT isn't testing to verify assumptions like that, so I wonder how it determines correct from incorrect.</p>
]]></description><pubDate>Thu, 05 Dec 2024 22:00:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=42333302</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42333302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42333302</guid></item><item><title><![CDATA[New comment by ryan29 in "ChatGPT Pro"]]></title><description><![CDATA[
<p>> Wouldn't you say the same thing for most of the people? Most of the people suck at verifying truth and reasoning. Even "intelligent" people make mistakes based on their biases.<p>I think there's a huge difference because individuals can be reasoned with, convinced they're wrong, and have the ability to verify they're wrong and change their position.  If I can convince one person they're wrong about something, they convince others.  It has an exponential effect and it's a good way of eliminating common errors.<p>I don't understand how LLMs will do that.  If everyone stops learning and starts relying on LLMs to tell them how to do everything, who will discover the mistakes?<p>Here's a specific example.  I'll pick on LinuxServer since they're big [1], but almost every 'docker-compose.yml' stack you see online will have a database service defined like this:<p><pre><code>    services:
      app:
        # ...
        environment:
          - 'DB_HOST=mariadb:3306'
        # ...
      mariadb:
        image: linuxserver/mariadb
        container_name: mariadb
        environment:
          - PUID=1000
          - PGID=1000
          - MYSQL_ROOT_PASSWORD=ROOT_ACCESS_PASSWORD
          - TZ=Europe/London
        volumes:
          - /home/user/appdata/mariadb:/config
        ports:
          - 3306:3306
        restart: unless-stopped
</code></pre>
Assuming the database is dedicated to that app, and it typically is, publishing port 3306 for the database isn't necessary and is a bad practice because it unnecessarily exposes it to your entire local network.  You don't need to publish it because it's already accessible to other containers in the same stack.<p>Another Docker-related example would be a Dockerfile using 'apt[-get]' without the '--error-on=any' switch.  Pay attention to Docker build files and you'll realize almost no one uses that switch.  Failing to do so allows silent failures of the 'update' command and it's possible to build containers with stale package versions if you have a transient error that affects the 'update' command, but succeeds on a subsequent 'install' command.<p>There are tons of misunderstandings like that which end up being so common that no one realizes they're doing things wrong.  For people, I can do something as simple as posting on HN and others can see my suggestion, verify it's correct, and repeat the solution.  Eventually, the misconception is corrected and those paying attention know to ignore the mistakes in all of the old internet posts that will never be updated.<p>How do you convince ChatGPT the above is correct and that it's a million posts on the internet that are wrong?<p>1. <a href="https://docs.linuxserver.io/general/docker-compose/#multiple-service-usage" rel="nofollow">https://docs.linuxserver.io/general/docker-compose/#multiple...</a></p>
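<p>Editor's note: for concreteness, here is the safer variant of the stack above, trimmed to the relevant bits.  It simply drops the `ports:` mapping; containers on the same compose network reach the database by service name, so nothing else needs to change.  This is a sketch of the fix the comment describes, not a complete working stack.

```yaml
services:
  app:
    # ...
    environment:
      - 'DB_HOST=mariadb:3306'   # the service name resolves on the compose network
    # ...
  mariadb:
    image: linuxserver/mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=ROOT_ACCESS_PASSWORD
    volumes:
      - /home/user/appdata/mariadb:/config
    # no "ports:" section: the database stays reachable from other
    # containers in this stack, but not from the whole local network
    restart: unless-stopped
```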
]]></description><pubDate>Thu, 05 Dec 2024 20:31:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=42332425</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42332425</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42332425</guid></item><item><title><![CDATA[New comment by ryan29 in "Phishers Love New TLDs Like .shop, .top and .xyz"]]></title><description><![CDATA[
<p>I think first year premium pricing makes a lot of sense.  I'm not sure what the average time to sell is for a domain investor, but say it's 10 years for an easy example.<p>If you go from a standard registration price of $12 / year to a first year premium of $132, you double the 10 year carrying cost of a domain.  That, naively, means domain investors can only speculate on half as many domains.<p>By having a first year premium price and then dropping domains back into the 'standard' tier, you also leave registrants with a semblance of price protections via section 2.10c of the registry agreement.  As-is, premium domains have <i>zero</i> guarantees when it comes to premium renewal pricing.<p>There's a lot of room between squeezing domain investors and asking registrants to pay $100-1000+ <i>per year</i> for premium domains.</p>
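<p>Editor's note: the carrying-cost arithmetic above checks out; a quick sanity check with the prices from the comment (the 10-year holding time is the comment's own assumption):

```python
standard = 12        # $/year, standard renewal price
first_year = 132     # $, first-year premium registration
years = 10           # assumed average time-to-sell for a speculator

carry_standard = standard * years                    # 10 years at $12
carry_premium = first_year + standard * (years - 1)  # $132 up front, then $12/yr
print(carry_standard, carry_premium, carry_premium / carry_standard)
```
As claimed, the first-year premium doubles the 10-year carrying cost ($240 vs $120).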
]]></description><pubDate>Tue, 03 Dec 2024 21:12:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42311501</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42311501</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42311501</guid></item><item><title><![CDATA[New comment by ryan29 in "Phishers Love New TLDs Like .shop, .top and .xyz"]]></title><description><![CDATA[
<p>Domains are the ultimate identity system for building a more trustworthy internet without handing over control to some kind of verified ID scheme or being forced into publishing your personal details to gain credibility.<p>You can build reputation and trust using a handle, even if it's not associated with your real world identity.  For example, I know that if 'ryao' replies to a question about ZFS, the response can be considered trustworthy.  I don't know who that is or even what country they live in, but I know they're a contributor that isn't speculating or guessing when they reply and that's all that matters to me.<p>Domains can be used as verifiable, globally unique handles, which simplifies things for the average user because it's easier to help users avoid impersonation and confusion when you can point them to something simple and verifiable.  For example, look at Bluesky [1].<p>I've been wanting domain-based namespaces and handles for a solid 5 years because it just makes sense.  Here's the oldest mention of it I have on HN (asking why package managers don't use domain-verified namespacing) [2]:<p>> It seems like a waste to me when I'm required to register a new identity for every package manager when I already have a globally unique, extremely valuable (to me), highly brandable identity that costs $8 / year to maintain.<p>You can tell it's old because .com domains only cost $8 back then.  IMHO, domain-based handles are <i>the</i> #1 reason to use Bluesky over X/Twitter.  People used to spend $10-15k buying "noteworthiness" via fake articles, etc. to get verified on Twitter.  I can't find any links because search results are saturated with talk of X wanting $1000 <i>per month</i> for organization validation (aka a gold check mark).  
Domain validation is just as good as that kind of organization validation, at least for well known individuals and organizations.<p>Given that, I think there would be a bigger market for domains if domain-validated identities caught on.  It could even spawn specialty gTLDs that do extra identity or notability checks (if that's allowed), or maybe attestations would become a big thing if there were an easy way to do them against a domain-verified handle.<p>1. <a href="https://bsky.social/about/blog/3-6-2023-domain-names-as-handles-in-bluesky" rel="nofollow">https://bsky.social/about/blog/3-6-2023-domain-names-as-hand...</a><p>2. <a href="https://news.ycombinator.com/item?id=24674882">https://news.ycombinator.com/item?id=24674882</a></p>
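<p>To make the Bluesky example concrete: its handle verification hangs off DNS, via a TXT record at _atproto.example.com whose value is "did=" followed by the account's DID (with an HTTPS well-known fallback).  A minimal sketch of parsing such a record value, with no network involved and a made-up DID:

```python
# Sketch of the DNS side of Bluesky's domain-handle verification: a TXT
# record at _atproto.<handle domain> carries "did=<account DID>". This
# helper only parses the record value a resolver would have returned;
# the DID below is invented for illustration.
def parse_atproto_txt(txt_value: str):
    """Return the DID from an _atproto TXT record value, else None."""
    prefix = "did="
    if txt_value.startswith(prefix) and txt_value[len(prefix):].startswith("did:"):
        return txt_value[len(prefix):]
    return None

print(parse_atproto_txt("did=did:plc:abc123example"))  # did:plc:abc123example
print(parse_atproto_txt("v=spf1 -all"))                # None
```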
]]></description><pubDate>Tue, 03 Dec 2024 20:55:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42311285</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42311285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42311285</guid></item><item><title><![CDATA[New comment by ryan29 in "Phishers Love New TLDs Like .shop, .top and .xyz"]]></title><description><![CDATA[
<p>> When I see a billboard or print ad with e.g. `example.travel`, I read that as a social media handle and not a website address like `example.com` would convey.<p>This is where I think the new gTLD registries could do better.  Using your domain as a handle on Bluesky is a perfect example of something they could push to grow the industry, but they seem to think the status quo with a sprinkle of price discrimination is the winning formula.<p>Most of the new gTLDs work great as domain-verified social media handles, but no one is going to use them for that if all the good keywords are classified as premium with $100+ annual renewal fees.  However, if you make them too cheap and they get popularized, domain investors will register everything good and try to flip them.<p>I think first-year premium pricing strikes a good balance that doesn't limit novel, non-revenue-generating use cases too much.  Charging $100-200 for the first year causes a very large increase in the amount of capital domain flippers need to invest to acquire a large portfolio of good names.<p>If Bluesky catches on, I think we could hit a point where non-technical people are suddenly shocked when they see someone "using their social media handle for a website."  Getting back to having people understand there's more than just Facebook and Twitter would be a step in the right direction IMO, so it would be nice to see Bluesky continue to gain popularity.</p>
]]></description><pubDate>Tue, 03 Dec 2024 19:56:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=42310569</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42310569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42310569</guid></item><item><title><![CDATA[New comment by ryan29 in "Phishers Love New TLDs Like .shop, .top and .xyz"]]></title><description><![CDATA[
<p>It's the <i>registries</i> not the <i>registrars</i> that classify some domains as premium.  I think they're a risky product because you don't even get the limited price protections provided by section 2.10c of the registry agreement, but there seems to be a market for them [1].<p>1. <a href="https://domainnamewire.com/2024/08/28/radix-sets-record-for-premium-domain-revenue/" rel="nofollow">https://domainnamewire.com/2024/08/28/radix-sets-record-for-...</a></p>
]]></description><pubDate>Tue, 03 Dec 2024 19:36:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=42310280</link><dc:creator>ryan29</dc:creator><comments>https://news.ycombinator.com/item?id=42310280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42310280</guid></item></channel></rss>