<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: eggbrain</title><link>https://news.ycombinator.com/user?id=eggbrain</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 22:14:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=eggbrain" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by eggbrain in "Toward automated verification of unreviewed AI-generated code"]]></title><description><![CDATA[
<p>There are two opposite answers here, and I feel like I could argue either one:<p>1) Humans were never held accountable, really<p>Outside of a few regulated industries, the worst that happens to an engineer who pushes negligent code is that they get fired. But after that happens, what actually changes? The organizational structure of the company that allowed the employee to push bad code still exists.<p>2) Humans will still be held accountable<p>If a human (managing a fleet of AI agents, let's say) ends up deploying bad code to production, they won't be able to point to the AI agent and say "it was them that did it!" -- it will still be the human at the end of the line that is held responsible.</p>
]]></description><pubDate>Thu, 19 Mar 2026 14:22:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47440019</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=47440019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47440019</guid></item><item><title><![CDATA[New comment by eggbrain in "Toward automated verification of unreviewed AI-generated code"]]></title><description><![CDATA[
<p>Your comment seems to imply AI is currently at a junior developer's level -- 12 months ago I would have agreed (as I mentioned in my parent comment, both near the end and about the "latter" team I was a part of), but it's gotten quite good over the past few months.<p>When even Linus Torvalds compliments AI code (ref: <a href="https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fat50gxdjltcg1.png" rel="nofollow">https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fa...</a>) I think we can say he wouldn't have said that about any junior engineer.<p>That's not to say it won't ship bugs, but so does any engineer (junior or senior). It's up to you what level of tooling you surround the AI with (automated testing / linting / etc.), but at the very least it doesn't hurt to have that set up anyway (automated tests have helped prevent senior devs from shipping bad code too).</p>
]]></description><pubDate>Tue, 17 Mar 2026 21:51:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47418829</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=47418829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47418829</guid></item><item><title><![CDATA[New comment by eggbrain in "Toward automated verification of unreviewed AI-generated code"]]></title><description><![CDATA[
<p>I find people over-rotate on whether we should be reviewing AI-produced code. "What if bad code gets into production!" some programmers gasp, as if they themselves have never pushed bad code, or had coworkers do the same.<p>I've worked at places where I trusted everyone on my team to the extent that most PRs got only a quick glance before getting an "LGTM". On the flip side, I've also worked on teams where every person was a different kind of liability with the code they pushed, and for those teams I implemented every linting / pre-commit / testing tool possible, all of which needed to pass inspection (including human review) before any code arrived on production.<p>A year ago, AI was like that latter team I mentioned -- something I had to check, double check, and correct until I was happy with what it produced. Over the past 6 months, it's gotten closer to (but still fairly far from) the former team I mentioned -- I have to correct it about 10% of the time, whereas for most things it gets it right.<p>The fact that AI produces a much _larger_ volume of code than the average engineer is perhaps slightly concerning, but I don't see it much differently than code at large companies. Does every Facebook engineer review every junior engineer's pull request to make sure bad code doesn't slip in?<p>That isn't to say I'm for letting AI go wild with code -- but I think if at worst we consider AI to be a junior engineer we need to rein in with static analysis tools / linters / testers, etc., we will probably be able to mitigate a lot of the downside.</p>
]]></description><pubDate>Tue, 17 Mar 2026 21:33:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47418625</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=47418625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47418625</guid></item><item><title><![CDATA[New comment by eggbrain in "Most-read tech publications have lost over half their Google traffic since 2024"]]></title><description><![CDATA[
<p>Many of today's news websites (tech or otherwise) cashed in their goodwill / reputation / page rank to sell ads.<p>The first shoe dropped when news websites realized they weren't generating content fast enough. Hard, in-depth journalism takes time, but when people want to know something that happened _today_, they don't want to wait a week for all the facts to come out, and so the major websites started losing traffic to websites that churned out articles fast.<p>The additional benefit of churning out articles was that you could match against more and more long-tail keywords, which led to more traffic and more ability to sell ads. To keep up, many websites dropped quality for speed, and consumers noticed.<p>The second shoe to drop was affiliate marketing -- articles on CNET / Wirecutter etc. were already ranking and rating products, so they figured "[...] why shouldn't we get a cut if someone ends up buying a product we recommend?" The challenge then became that consumers couldn't tell the difference between a product that was recommended because it was good, and one recommended because it gave the biggest "kickback" to the website via the affiliate link. Thus, people who gave "honest" opinions on products (e.g. people asking on Reddit, at least for a while, as the article suggests) became the new source of truth.<p>The result is that these days, if you read a lot of articles on the major tech websites, they feel like they've been optimized for speed (e.g. churning out an article fast), SEO, and not much else. Many people have talked about how recipe websites are now short story generators more than food instructions, but it's been common for a while now that I'll go to a tech website to read about something I specifically Googled, only for it to feel like it was written _specifically_ to capture traffic for a keyword, rather than actually solve the issue or question I came to the website with.<p>The cherry on top is that AI has none of these problems (so far) -- yes, there's some movement on trying to do SEO for AI, and of course ads will eventually come to AI like they have to everything else, but currently, you can get the answers you want, described to you exactly how you'd like to hear them -- who wouldn't want that?</p>
]]></description><pubDate>Tue, 03 Mar 2026 14:53:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47233288</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=47233288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47233288</guid></item><item><title><![CDATA[New comment by eggbrain in "Hacker News.love – 22 projects Hacker News didn't love"]]></title><description><![CDATA[
<p>I'm curious -- are there any stories of projects that launched on Hacker News, Hacker News loved it, and it ended up _also_ being a big success?<p>E.g. we have stories like Dropbox where HN seemed to be dismissive only to be proven wrong, and there are numerous launches where HN was dismissive and they were proven right, but I'd be more curious when the HN crowd got it right in a positive way.</p>
]]></description><pubDate>Mon, 23 Feb 2026 19:34:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47127574</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=47127574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47127574</guid></item><item><title><![CDATA[New comment by eggbrain in "The next chapter of the Microsoft–OpenAI partnership"]]></title><description><![CDATA[
<p>If we assume token providers are becoming more and more of a commodity service these days, it seems telling that OpenAI specifically decided to carve out consumer hardware.<p>Perhaps their big bet is that their partnership with Jony Ive will create the first post-phone hardware device that consumers attach themselves to, and then they'll build an ecosystem around that?</p>
]]></description><pubDate>Tue, 28 Oct 2025 13:21:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45732540</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=45732540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45732540</guid></item><item><title><![CDATA[New comment by eggbrain in "Doing Rails Wrong"]]></title><description><![CDATA[
<p>I’m glad someone called this out. “Let’s just use vanilla rails” — sure, except basically every version of rails for the past 5 years has decided to completely change how it does JS.<p>So many gems are also still built on sprockets — even when you want to use the “rails” way, you are stuck with a hodgepodge of JS anyways.<p>It’s a mess — maybe one day we’ll get it fixed, but don’t pretend it’s not partially rails’ fault as well.</p>
]]></description><pubDate>Wed, 08 Oct 2025 01:07:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45510893</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=45510893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45510893</guid></item><item><title><![CDATA[New comment by eggbrain in "30 minutes with a stranger"]]></title><description><![CDATA[
<p>Another potential self-selection bias -- if people know they are signing up to have a conversation with a stranger, perhaps they are already predisposed to be more "pleasant" in conversations, vs a potential curmudgeon who doesn't ever want to speak to anyone, even for money.</p>
]]></description><pubDate>Thu, 04 Sep 2025 14:52:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45127964</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=45127964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45127964</guid></item><item><title><![CDATA[New comment by eggbrain in "30 minutes with a stranger"]]></title><description><![CDATA[
<p>There's also the magnitude of a negative interaction to consider.<p>If I have 99 great interactions with someone, but one REALLY bad interaction (they insult me deeply, or say something irredeemable), that can sour the whole relationship.<p>It would be interesting to research commonalities amongst bad interactions -- are there patterns that emerge from certain personality types, politics, etc.? What about a few "sour" people who will take any interaction and make it bad regardless of matchup -- if we removed them from the interaction pool, do the stats suddenly improve?<p>In my mind this would have big implications for social media sites -- not that all bad interactions need to be quelled, but if you are trying to keep conversations civil, attempt to implement X strategy or Y strategy.</p>
]]></description><pubDate>Thu, 04 Sep 2025 14:11:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45127491</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=45127491</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45127491</guid></item><item><title><![CDATA[Ask HN: Short term housing for founders / entrepreneurs in the Bay Area / SF?]]></title><description><![CDATA[
<p>I've been searching for the past month for short term housing in the San Francisco / Bay Area, especially where I can live with fellow founders / hackers / engineers, but everything I find feels less than ideal:<p>- There's Furnished Finder, but the site was set up initially for travelling nurses (and it shows), so there's no real way to search for places with other engineers / founders, or any other features that matter (e.g. fast internet).<p>- There are a few hacker houses out there, but some take equity in your startup, while others are invite-only, and still others are just so huge (e.g. 30+ people) that it feels more like a hotel than a group of people with shared interests.<p>- There's DirectorySF, but the number of places listed there is pretty low, so you have to get lucky for a place to overlap timing / requirements / price-wise.<p>- I know Bookface / Y Combinator has some resources for housing for founders who have gotten into YC, but that won't apply to me.<p>On top of the search aspect, it's a bit frustrating as even if I wanted to rent an apartment traditionally, places will require you to submit proof of income, which, as a founder focused on building a startup, you may not have (even if you have good resources saved up).<p>I think the closest "fit" has been DirectorySF in terms of what I'm looking for, but I'm just surprised there aren't better websites out there to help entrepreneurs / founders / engineers moving to the SF area, unless I've been missing something.<p>Any thoughts / ideas? Would love to understand how to better find housing for like-minded startup people.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45110303">https://news.ycombinator.com/item?id=45110303</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 02 Sep 2025 23:10:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45110303</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=45110303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45110303</guid></item><item><title><![CDATA[New comment by eggbrain in "OpenFreeMap survived 100k requests per second"]]></title><description><![CDATA[
<p>Limiting by referrer seems strange — if you know a normal user makes 10-20 requests (let’s assume per minute), can’t you just rate limit requests to 100 requests per minute per IP (5x the average load) and still block the majority of these cases?<p>Or, if it’s just a few bad actors, block based on JA4/JA3 fingerprint?</p>
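<p>A minimal sketch of the sliding-window cap described above (the 100/minute figure, the in-memory store, and the function name are illustrative assumptions, not anything OpenFreeMap actually runs):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100  # ~5x an assumed 10-20 requests/min from a normal user

# per-IP timestamps of recent requests (in-memory for the sketch;
# a real deployment would use a shared store such as Redis)
_requests = defaultdict(deque)

def allow(ip, now=None):
    """Return True if this request from `ip` is under the per-minute cap."""
    now = time.monotonic() if now is None else now
    q = _requests[ip]
    # drop timestamps that have aged out of the sliding window
    while q and now - q[0] >= WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False
    q.append(now)
    return True
```

<p>A tile handler would call `allow(client_ip)` and return HTTP 429 when it comes back False; IPs that never exceed the cap are untouched, so only the abusive long tail gets blocked.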
]]></description><pubDate>Sat, 09 Aug 2025 14:55:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44846988</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44846988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44846988</guid></item><item><title><![CDATA[New comment by eggbrain in "Social media platforms: what's wrong, and what's next"]]></title><description><![CDATA[
<p>Thanks Zak, appreciate your thoughts.<p>I hope to have some of the followup posts soon, although you are right that my idea is based around a centralized platform with ID Verification.<p>RE: How to solve for enshittification, I'd mention two things:<p>1. I think a good product can _stay_ good over time with strong centralized leadership, aka a "benevolent dictator". Think Steve Jobs at Apple, DHH at 37 Signals, etc. Once that power structure changes, however (new leader, etc.), that can quickly fall apart, so it's definitely not a bulletproof solution.<p>2. If incentives from the start are built into your platform to make the "user" the biggest customer on your platform, those incentives will make sure that you keep those users happy. If you have to choose between customers who give you $0/month and advertisers who will give you $1000/month, you'll eventually choose the advertisers to the detriment of the users.</p>
]]></description><pubDate>Tue, 27 May 2025 17:11:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44108821</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44108821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44108821</guid></item><item><title><![CDATA[New comment by eggbrain in "Social media platforms: what's wrong, and what's next"]]></title><description><![CDATA[
<p>> Care to give an argument to substantiate that?<p>Because eventually bad actors take any decentralized platform / standard and ruin it for the rest of us, leading us to trust the few good players that remain (see: email). Sure, technically you can spin up your own mail server -- but because of the copious amounts of spam from people who have done that in the past, you'll jump through so many hoops that eventually you'll throw in the towel and probably use GSuite or a known major provider.<p>As Ben Thompson says:<p>> [...] centralization is a second order effect of decentralization: when all constraints on content are removed, more power than ever accrues to the entity that is the preferred choice for navigating that content.</p>
]]></description><pubDate>Thu, 22 May 2025 18:01:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44064819</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44064819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44064819</guid></item><item><title><![CDATA[New comment by eggbrain in "Social media platforms: what's wrong, and what's next"]]></title><description><![CDATA[
<p>> [...] Its 100% about getting people to stay on your platform as long as possible and engage with your content. Usually that means creating content that gets people to negatively engage with your content. So much so, its now referred to as "rage bait" where Only Fans women purposely post content that gets men to engage with their posts in order to make more money. Political posts are made to inflame either side and get more shares and upvotes.<p>I touch upon this in <a href="https://www.scottgoci.com/social-media-platforms-whats-wrong-part-3/" rel="nofollow">https://www.scottgoci.com/social-media-platforms-whats-wrong...</a> and <a href="https://www.scottgoci.com/social-media-platforms-whats-wrong-part-4/" rel="nofollow">https://www.scottgoci.com/social-media-platforms-whats-wrong...</a> -- but as you mention, this is a result of engagement being a core metric of social media platforms, and users attempting to game the platform's algorithm for their own purposes.<p>An easy way to solve for this is customization -- if no two users have the same "algorithm" powering their feed, it becomes hard for anyone to do this, because perhaps one user's algorithm filters out anything tagged with politics, or with a low Flesch–Kincaid score, or non-text posts, etc.</p>
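<p>A toy sketch of that kind of per-user filtering (the post fields, tag names, and the numeric "readability" stand-in for a real Flesch–Kincaid score are all hypothetical):

```python
# Toy per-user feed filter: each user composes their own predicate,
# so no two feeds respond to the same gaming strategy.
# Post fields and score values here are made up for illustration.

def build_filter(blocked_tags=(), min_readability=None, text_only=False):
    """Return a predicate deciding whether a post enters this user's feed."""
    def keep(post):
        if any(tag in post.get("tags", ()) for tag in blocked_tags):
            return False
        if min_readability is not None and post.get("readability", 0) < min_readability:
            return False
        if text_only and post.get("kind") != "text":
            return False
        return True
    return keep

posts = [
    {"tags": ["politics"], "readability": 80, "kind": "text"},
    {"tags": ["cooking"], "readability": 70, "kind": "text"},
    {"tags": ["cooking"], "readability": 90, "kind": "video"},
]

# This user's feed drops politics, low-readability posts, and non-text posts.
my_feed = list(filter(build_filter(blocked_tags=["politics"],
                                   min_readability=50,
                                   text_only=True), posts))
```

<p>Because every user supplies different settings, a post crafted to game one person's feed doesn't automatically reach anyone else's.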
]]></description><pubDate>Thu, 22 May 2025 17:34:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44064452</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44064452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44064452</guid></item><item><title><![CDATA[New comment by eggbrain in "Social media platforms: what's wrong, and what's next"]]></title><description><![CDATA[
<p>I agree both with you and the OP comment here.<p>Ideally, I think if we could, we'd only get content from the creators or people we care about -- but that content runs out eventually, and yet our minds still want to be stimulated.<p>To bring it back to your food analogy, if I had a personal chef that made delicious food whenever I wanted it, I'd probably not indulge in fast food very often -- but if I needed a quick bite to eat, I'd probably still jump into a McDonalds.</p>
]]></description><pubDate>Thu, 22 May 2025 16:49:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44063909</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44063909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44063909</guid></item><item><title><![CDATA[New comment by eggbrain in "Social media platforms: what's wrong, and what's next"]]></title><description><![CDATA[
<p>> I guess we have to assume that Mastodon is not "large" by the author's definition.<p>It's a fair point that Mastodon was left out, and yet it does tackle some of the problems I mention -- perhaps worth a followup post. That being said, I feel like federated social media platforms are not going to be the answer in the end -- and although their adoption has grown in recent years, I think they're always going to lag behind others.<p>> Practically speaking, the existence of a large social media platform requires investors seeking unlimited growth, and that's the predictable recipe for enshittification [...] What's the author's escape route to avoid this trap?<p>I think Reddit, to some extent, can be considered a success story here -- it grew fairly slowly compared to other social media platforms, but now feels like it has quite a lot of staying power (although as it approached its IPO it did indeed start to enshittify).<p>That being said, I think a lot of the problems I mention can be solved just by giving the customer (e.g. the user on the social media platform) more choice. Imagine you had a platform that asked you how you wanted to pay to use it: with your data, with advertisements, or with a membership of $XXX/month, amongst other options.</p>
]]></description><pubDate>Thu, 22 May 2025 16:15:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44063512</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44063512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44063512</guid></item><item><title><![CDATA[New comment by eggbrain in "Social media platforms: what's wrong, and what's next"]]></title><description><![CDATA[
<p>An interesting thought (to your point) is what people hope social media can actually become -- in the ideal scenario, is it just us being able to engage with people online in better ways? Is it us being able to consume content from others in the ways we want?</p>
]]></description><pubDate>Thu, 22 May 2025 15:33:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44063079</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44063079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44063079</guid></item><item><title><![CDATA[New comment by eggbrain in "Social media platforms: what's wrong, and what's next"]]></title><description><![CDATA[
<p>I can see a platform that gives you just the data you need to stay informed working, although what data you wish to receive and how quickly seems like a potential stumbling point.<p>Perhaps you don't want to receive news about celebrities, _unless_ it involves someone you care deeply about, for example Michael Jackson. It would require quite a bit of tailoring for a platform to be able to curate for that.</p>
]]></description><pubDate>Thu, 22 May 2025 15:26:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44063020</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44063020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44063020</guid></item><item><title><![CDATA[Social media platforms: what's wrong, and what's next]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.scottgoci.com/social-media-platforms-whats-wrong-and-whats-next/">https://www.scottgoci.com/social-media-platforms-whats-wrong-and-whats-next/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44062774">https://news.ycombinator.com/item?id=44062774</a></p>
<p>Points: 45</p>
<p># Comments: 58</p>
]]></description><pubDate>Thu, 22 May 2025 15:01:09 +0000</pubDate><link>https://www.scottgoci.com/social-media-platforms-whats-wrong-and-whats-next/</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=44062774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44062774</guid></item><item><title><![CDATA[New comment by eggbrain in "Cloud of Disillusion: The Broken Promise of PaaS"]]></title><description><![CDATA[
<p>The author touches upon Heroku in the sense that they say they've used it, but I think it is still, to date, the best PaaS out there, and it makes some (if not many) of the author's arguments void.<p>There are so many apps I've deployed on Heroku with just `git push heroku master`, and I've worked on teams where we've scaled on Heroku on both the db and app side with very little devops work, if any.<p>What the author is completely right about, however, is Fly.io -- it's unfortunately a platform that has _just enough PaaS_ to seem easy, but ends up being frustratingly difficult and comes with a lot of rough edges, even for simple apps.<p>Provisioning a postgres db on Fly.io is a great example - just use `fly postgres create` and go through the steps! Uh oh, the provisioned db template is defaulting to `SQL_ASCII` and I need `UTF8` encoding -- what's the best way to do that? Good luck -- the Fly.io docs don't talk about that at all, and if you aren't on a high enough "tier" of plan you get 0 customer support, just a community forum with people asking questions and often getting no responses.</p>
]]></description><pubDate>Wed, 04 Sep 2024 16:46:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=41447815</link><dc:creator>eggbrain</dc:creator><comments>https://news.ycombinator.com/item?id=41447815</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41447815</guid></item></channel></rss>