<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: cletus</title><link>https://news.ycombinator.com/user?id=cletus</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 05 Apr 2026 22:08:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=cletus" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by cletus in "My son pleasured himself on Gemini Live. Entire family's Google accounts banned"]]></title><description><![CDATA[
<p>I worked at Google a long time ago now, during the whole Google+ fiasco. One thing that was super controversial internally was the so-called Real Names policy. For those who don't know or remember, it was Vic Gundotra's idea that people should use their real identities. He kept using this weird example that he didn't want it filled with people named "Dog Turd". I don't know why.<p>So there was this mysterious black box that decided if your name was "real" or not. At first this didn't support pseudonyms or any kind of anonymity, and that's actually really important for any social network. Think of someone seeking help coming to terms with their sexual orientation, gender identity, addiction, eating disorder or whatever. Or simply going against their family's religious wishes. I later worked at Facebook and one thing I'll give them credit for is Groups. In FB Groups you had an identity that couldn't actually be tied by anyone else to your profile or to your identity in any other group. That was a good product decision.<p>Anyway, if your name somehow failed the magic real names filter, your account got banned. Your entire Google account was banned and basically there was no recourse other than knowing someone who worked at the company or making a big enough fuss on Twitter.<p>Many people, myself included, criticized and protested this decision. You should at least segment Google products. There's absolutely no reason to ban your Gmail account because an automated system decided your Google+ account name wasn't "real". But that feedback was ignored, and this was well before the public launch. And the public backlash proved this position correct (IMHO).<p>But the net effect was that I decided I couldn't use any other Google product. Let's say a system is launched to find offensive photos and there's a false positive on one of my images in Google Photos. Maybe it's just a hash collision with a known image. And then what? I lose my entire Gmail? Are you kidding me?<p>It's wild to me that this is still an issue ~15 years later. I think my stance actually isn't strict enough anymore. You probably shouldn't use Gmail at all. I should really find a paid email provider hosted entirely in Europe, preferably Switzerland or some other country with strong pro-user regulation.<p>So I have no idea if this Gemini story is true or not. I say that because 95% of the things on Reddit are completely made up. But it is plausible. I wouldn't be surprised if it's true. It means that if I still used Gmail, I wouldn't use Gemini at all.</p>
]]></description><pubDate>Wed, 01 Apr 2026 04:10:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47596711</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=47596711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47596711</guid></item><item><title><![CDATA[New comment by cletus in "Apple randomly closes bug reports unless you "verify" the bug remains unfixed"]]></title><description><![CDATA[
<p>Story time. I used to work for Facebook (and Google) and lots of games were played around bugs.<p>At some point the leadership introduced an SLA for high then medium priority bugs. Why? Because bugs would sit in queues for years. The result? Bugs would often get downgraded in priority at or close to the SLA. People even wrote automated rules to alert them if the bugs they'd filed got downgraded.<p>Another trick was to throw it back to the user, usually after months, ostensibly to request information, to ask "is this still a problem?" or just adding "could not reproduce". Often you'd get no response. Sometimes the person was no longer on the team or with the company. Or they just lost interest or didn't notice. Great, it's off your plate.<p>If you waited long enough, you could say it was "no longer relevant" because that version of the app or API had been deprecated. It's also a good reason to bounce it back with "is this still relevant?"<p>Probably the most Machiavellian trick I saw was to merge your bug with another, vaguely similar one that you didn't own. Why? Because this was hard to unwind and not always obvious.<p>Anyone who runs a call center or customer line knows this: you want to throw it back at the customer because a certain percentage will give up. It's a bit like health insurance companies automatically sending a denial for a prior authorization: to make people give up.<p>I once submitted some clear bugs to a supermarket's app and I got a response asking me to call some 800 number and make a report. My bug report contained complete steps to reproduce the issue. I knew what was going on. Somebody simply wanted to mark the issue as "resolved". I'm never going to do that.<p>I don't think you can trust engineering teams (or, worse, individuals) to "own" bugs. They're not going to want to do them. They need to be owned by a QA team or a program team that will collate similar bugs and verify something is actually fixed.<p>Google had their own versions of things. IIRC bugs had both a priority and a severity for some reason (they were the same 99% of the time) between 0 and 4. So a standard bug was p2/s2. p0/s0 was the most severe and meant a serious user-facing outage. People would often change a p2/s2 to p3/s3, which basically meant "I'm never going to do this and I will never look at it again".<p>I've basically given up on filing bug reports because I'm aware of all these games and getting someone to actually pay attention is incredibly difficult. So much of this comes down to stupid organizational-level metrics about bug resolution SLAs and policies.</p>
]]></description><pubDate>Wed, 25 Mar 2026 20:55:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47523107</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=47523107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47523107</guid></item><item><title><![CDATA[New comment by cletus in "Try to take my position: The best promotion advice I ever got"]]></title><description><![CDATA[
<p>So there's some survivor bias here but it's generally not bad advice. You should be focusing on outcomes like improving SLAs, top line metrics and so on. You should be solving user and business problems. That's all good advice. But still, this article presumes a lot.<p>In my experience, managers will naturally partition their reports into three buckets: their stars, their problems and their worker bees. The worker bees tend to be ignored. They're doing fine. They're getting on with whatever they've been told to do or possibly what they've found to do. They're not going to create any problems. The problems are the underperformers. These are people who create problems and/or are at risk of getting a subpar performance rating.<p>Now there are lots of reasons that someone can be a problem. I tend to believe that any problem just hasn't found the right fit yet and, until proven otherwise, problems are a failure of management. That tends to be a minority view in practice. It's more common to simply throw people in the deep end to sink or swim because that takes much less overhead. You will see this in teams that have a lot of churn, but only in part of the team. In particularly toxic environments, savvy managers will game the system by having a sacrificial anode position: they hire someone to take the bad rating they have to give in order to protect the rest of the team.<p>And then there are the stars. These are the people you expect to grow and be promoted. More often than not, however, they are chosen rather than having demonstrated their potential. I've seen someone shine while their director was actively trying to sabotage them, but that's rare.<p>Your stars will get the better projects. Your problems will get the worse ones. Whether a given project is a success will largely come down to perception, not reality.<p>The point I'm getting to is that despite all the process put around this at large companies (performance ratings, feedback, calibration, promo committees, etc.), the majority of it is vibes-based.<p>So back to the "take my job" advice. If someone is viewed as a star, that's great advice. For anyone else, you might get negative feedback about not doing your actual job, not being a team player and so on. I've seen it happen a million times.<p>And here's the dirty little secret of it all: this is where the racism, sexism and ableism sneaks in. It's usually not that direct, but Stanford grads (as just one example) will tend to vibe with other Stanford grads. They have common experience, probably common professors and so on. Same for MIT. Or CMU. Or UW. Or Waterloo. And so on.<p>So all of the biases that go into the selection process for those institutions will bleed into the tech space.<p>And this kind of environment is much worse for anyone on the spectrum, because allistic people will be inclined to dislike them from the start for no reason, and that's going to hurt how they're viewed (ie as a star, a worker bee or a problem) and their performance ratings.<p>Because all of this is ultimately just a popularity contest with very few exceptions. I've seen multiple people finagle their way to Senior Staff SWE on just vibes.<p>And all of this gets worse now that the tech sector has joined Corporate America in being in permanent layoff mode. The Welchian "up or out" philosophy has taken hold in Big Tech, where quotas mean 5-10% of the workforce has to get a subpar rating every year, which tends to kill their careers at that company. This turns the entire workplace even more into an exercise in social engineering.</p>
]]></description><pubDate>Mon, 05 Jan 2026 21:23:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46505177</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=46505177</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46505177</guid></item><item><title><![CDATA[New comment by cletus in "Lessons from 14 years at Google"]]></title><description><![CDATA[
<p>I'm going to pick out 3 points:<p>> 2. Being right is cheap. Getting to right together is the real work<p>> 6. Your code doesn’t advocate for you. People do<p>> 14. If you win every debate, you’re probably accumulating silent resistance<p>The common thread here is that in large organizations, your impact is largely measured by how much you're liked. It's completely vibes-based. Stack ranking (which Google used to have; not sure if it still does) just codifies popularity.<p>What's the issue with that? People who are autistic tend to do really badly through no fault of their own. These systems are basically a selection filter for allistic people.<p>This comes up in PSC ("perf" at Meta, "calibration" elsewhere) where the exact same set of facts can be constructed as a win or a loss and the only difference is vibes. I've seen this time and time again.<p>In one case I saw a team of 6 go away and do nothing for 6 months then come back and shut down. If they're liked, "we learned a lot". If they're not, "they had no impact".<p>Years ago Google studied the elements of a successful team and a key element was psychological safety. That study was originally done 10-15 years ago; this [1] seems related but more recent. I agree with the finding. The problem? A permanent-layoffs culture, designed entirely to suppress wages, kills psychological safety and turns survival into a game of being liked and manufacturing impact.<p>> 18. Most performance wins come from removing work, not adding cleverness<p>One thing I really appreciated about Google was that it has a very strict style guide and the subset of C++ in particular that you can use is (was?) very limited. At the time, this included "no exceptions", no mutable function arguments and an extremely high bar for templates to be allowed.<p>Why? To avoid arguments about style issues. That's huge. But also because C++ in particular seemed to attract people who were in love with their own cleverness. I've seen some horrific uses of templates (not at Google) that made code incredibly difficult to test for very little gain.<p>> 9. Most “slow” teams are actually misaligned teams<p>I think this is the most important point but I would generalize it and restate it as: most problems are organizational problems.<p>At Meta, for example, product teams were incentivized to ship and their impact was measured in metric bumps. But there was no incentive to support what you'd already shipped beyond it not blowing up. So in many teams there was a fire-and-forget culture: bugs got filed and then forgotten, to the point where it became a company priority to have SLAs on old bugs, which caused the inevitable: people just downgrading bug priorities to avoid the SLAs.<p>That's an organizational problem where the participants have figured out that shipping is the only thing they get rewarded for. Things like documentation, code quality and bug fixes were only paid lip service.<p>Disclaimer: Xoogler, ex-Facebooker.<p>[1]: <a href="https://www.aristotleperformance.com/post/project-aristotle-google-s-data-driven-insights-on-high-performing-teams" rel="nofollow">https://www.aristotleperformance.com/post/project-aristotle-...</a></p>
]]></description><pubDate>Sun, 04 Jan 2026 18:18:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46490632</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=46490632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46490632</guid></item><item><title><![CDATA[New comment by cletus in "Was it a billion dollar mistake?"]]></title><description><![CDATA[
<p>The mistake was not having nullability expressed in the type system.<p>At Facebook I used their PHP fork Hack a lot and Hack has a really expressive type system where PHP does not. You can express the nullability of a type and it defaults to types being non-nullable, which is the correct default. The type checker was aware of control flow too, so:<p><pre><code>    function foo(?A $a): void {
      $a->bar(); // compile error, $a could be null
      if ($a is null) {
        return;
      }
      $a->bar(); // not a compiler error because $a is now A not ?A
      if ($a is ChildOfA) {
        $a->childBar(); // not an error, in this scope $a is ChildOfA
      }
    }
</code></pre>
Now Hack, like Java, used type erasure, so you could force a null into something non-nullable if you really wanted to but, in practice, this almost never happened. A far bigger problem was dealing with legacy code that had been converted with a tool and returned or used the type "mixed", which could be literally anything.<p>The real problem with Java in particular is that you'd end up chaining calls, then get the dreaded NullPointerException and have no idea from the error or the logs which link in the chain was actually null:<p><pre><code>   a.b.c.d();
</code></pre>
I'm fine with things like Option/Maybe types but to me they solve different problems. They're a way of expressing that you don't want to specify a value or that a value is missing and that's different to something being null (IMHO).</p>
]]></description><pubDate>Sun, 04 Jan 2026 15:41:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46488983</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=46488983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46488983</guid></item><item><title><![CDATA[New comment by cletus in "Total monthly number of StackOverflow questions over time"]]></title><description><![CDATA[
<p>As an early user of SO [1], I feel reasonably qualified to discuss this issue. Note that I barely posted after 2011 or so, so I can't really speak to the current state.<p>But what I can say is that even back in 2010 it was obvious to me that moderation was a problem, specifically a cultural problem. I'm really talking about the rise of the administrative/bureaucratic class that, if left unchecked, can become absolute poison.<p>I'm constantly reminded of the Leonard Nimoy-voiced line from Civ4: "the bureaucracy is expanding to meet the needs of the expanding bureaucracy". That sums it up exactly. There is a certain type of person who doesn't become a creator of content but rather a moderator of content. These are people who end up as Reddit mods, for example.<p>Rules and standards are good up to a point but some people forget that those rules and standards serve a purpose and should never become a goal unto themselves. So if the moderators run wild, they'll start creating work for themselves and having debates about what counts as a repeated question, how questions and answers should be structured, etc.<p>This manifested as the war of "closed, non-constructive" on SO. Some really good questions were killed this way because the moderators decided on their own that a question had to have a provable answer to avoid flame wars. And this goes back to rules and standards being a tool, not a goal. My stance was (and is): shouldn't we deal with flame wars when they actually happen rather than going around "solving" imaginary problems?<p>I lost that battle. You can argue that questions like "should I use JavaScript or TypeScript?" don't belong on SO (as the moderators did). My position was that even though there's no definite answer, somebody can give you a list of strengths and weaknesses and things to consider.<p>Even something that does have a definite answer like "how do I efficiently code a factorial function?" has multiple different but defensible answers. Even in one language you can have multiple implementations that might, say, be compile-time or runtime.<p>Another commenter here talked about finding the nearest point on an ellipse and came up with a method they're proud of, even though there are other methods that would also do the job.<p>Anyway, I'd occasionally log in and see constant churn on my answers from moderators doing pointless busywork because this month they'd decided something needed to be capitalized or not capitalized.<p>A perfect example of this kind of thing is Bryan Henderson's war on "comprised of" on Wikipedia [2].<p>Anyway, I think the core issue for SO was that there was a lot of low-hanging fruit early on and I got a lot of accepted answers on questions that could never be asked today. You'll also read many anecdotes about people having a negative experience asking questions on SO in later years, where their question was immediately closed as, say, a duplicate when it wasn't a duplicate; the moderator just didn't understand the difference. That sort of thing.<p>But any mature site ultimately ends up with an impossible barrier to entry as newcomers don't know all the cultural rules that have been put in place, and they tend to have a negative experience as they get yelled at for not knowing that Rule 11.6.2.7 forbids the kind of question they asked.<p>[1]: <a href="https://stackoverflow.com/users/18393/cletus" rel="nofollow">https://stackoverflow.com/users/18393/cletus</a><p>[2]: <a href="https://www.npr.org/2015/03/12/392568604/dont-you-dare-use-comprised-of-on-wikipedia-one-editor-will-take-it-out" rel="nofollow">https://www.npr.org/2015/03/12/392568604/dont-you-dare-use-c...</a></p>
]]></description><pubDate>Sun, 04 Jan 2026 05:30:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46485234</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=46485234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46485234</guid></item><item><title><![CDATA[New comment by cletus in "Stepping down as Mockito maintainer after ten years"]]></title><description><![CDATA[
<p>Imagine you're testing a service that creates, queries and deletes users. A fake version of that service might just be a wrapper on a HashMap keyed by ID. Each record might have several fields like some personal info, a hashed password, an email address, whether you're verified and so on.<p>Imagine one of your tests covers the user deleting their account. What pattern of calls should it make? You don't really care, other than that the record is deleted (or marked as deleted, depending on retention policy) after you're done.<p>In the mock world you might mock out calls like deleteUserByID and make sure it's called.<p>In the fake world, you simply check that the user record is deleted (or marked as such) after the test. You don't really care about what sequence of calls made that happen.<p>That may sound trivial but it gets less trivial the more complex your example is. Imagine instead you want to clear out all users who are marked for deletion. If you think about the SQL for that you might do a DELETE ... WHERE call so your API call might look like that. But what if the logic is more complicated? What if there's a change so that EU and NA users have different retention periods or logging requirements and they're suddenly handled differently?<p>In a mocking world you would have to change all your expected mocks. In fact, implementing this change might require fixing a ton of tests you don't care about at all and that aren't really broken by the change anyway.<p>In a fake world, you're testing what the data looks like after you're done, not the specific steps it took to get there.<p>Now those are pretty simple examples because there's not much to do with the arguments used and no return values to speak of. Your code might branch differently based on those values, which then changes which calls to expect and with what values.<p>You're testing implementation details in a really time-consuming yet brittle way.</p>
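<p>To make that concrete, here's a rough sketch of the fake approach in Java (UserService, User and AccountDeleter are invented names standing in for whatever you're actually testing, not any real API):<p><pre><code>    import static org.junit.jupiter.api.Assertions.assertNull;

    import java.util.HashMap;
    import java.util.Map;
    import org.junit.jupiter.api.Test;

    interface UserService {
        void createUser(User u);
        User getUser(String id);
        void deleteUserById(String id);
    }

    record User(String id, String email) {}

    // A fake: just enough real behavior to be useful in a test.
    class FakeUserService implements UserService {
        private final Map&lt;String, User> users = new HashMap&lt;>();
        public void createUser(User u) { users.put(u.id(), u); }
        public User getUser(String id) { return users.get(id); }
        public void deleteUserById(String id) { users.remove(id); }
    }

    class AccountDeletionTest {
        @Test
        void deletingAnAccountRemovesTheRecord() {
            FakeUserService users = new FakeUserService();
            users.createUser(new User("u1", "alice@example.com"));

            new AccountDeleter(users).deleteAccount("u1"); // the code under test

            // Assert on the state, not on which calls were made to get there.
            assertNull(users.getUser("u1"));
        }
    }
</code></pre>If the deletion logic later changes to a soft delete or a batched DELETE ... WHERE, this test only has to change if the observable outcome changes, which is exactly the point.</p>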
]]></description><pubDate>Mon, 29 Dec 2025 03:21:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46417139</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=46417139</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46417139</guid></item><item><title><![CDATA[New comment by cletus in "Stepping down as Mockito maintainer after ten years"]]></title><description><![CDATA[
<p>My second project at Google basically killed mocking for me and I've never really done it since. Two things happened.<p>The first was that I worked on a rewrite of something (using GWT no less; it was more than a decade ago) and they decided to have a lot of test coverage and test requirements. That's fine but, the way it was mandated and implemented, everybody just tested their own service in isolation and DIed a bunch of mocks in.<p>The results were entirely predictable. The entire system was incredibly brittle and a service that had existed for only 8 weeks behaved like legacy code. You could spend half a day fixing mocks in tests for a 30-minute change just because you switched backend services, changed the order of calls or just ended up calling a given service more times than expected. It was horrible and a complete waste of time.<p>Even the DI aspect of this was horrible because everything used Guice and there were modules that installed modules that installed modules, and modifying those to return mocks in a test environment was a massive effort that typically resulted in having a different environment (and injector) for test code vs production code, so what are you actually testing?<p>The second was that about this time the Java engineers at the company went on a massive boondoggle to decide whether to use (and mandate) EasyMock vs Mockito. This, too, was a waste of time. Regardless of the relative merits of either, there's really not that much difference. At no point is it worth completely changing your mocking framework in existing code. Who knows how many engineering man-years were wasted on this.<p>Mocking encourages bad habits and a false sense of security. The solution is to have dummy versions of services and interfaces that have minimal correct behavior. So you might have a dummy Identity service that does simple lookups on an ID for permissions or metadata. If that's not what you're testing and you just need it to run a test, doing that with a mock is just wrong on so many levels.<p>I've basically never used mocks since, so much so that I find anyone who is strongly in favor of mocks or has strong opinions on mocking frameworks to be a huge red flag.</p>
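<p>For contrast, here's roughly what I mean by a dummy/fake with minimal correct behavior, wired up with one small test module instead of unpicking modules-that-install-modules (IdentityService and the other names are invented for illustration, not any real API):<p><pre><code>    import com.google.inject.AbstractModule;

    interface IdentityService {
        boolean hasPermission(long userId, String permission);
    }

    // Minimal but correct-enough behavior: no expectations, no call counts.
    class FakeIdentityService implements IdentityService {
        public boolean hasPermission(long userId, String permission) {
            return userId == 42; // the test user can do anything
        }
    }

    class TestIdentityModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(IdentityService.class).toInstance(new FakeIdentityService());
        }
    }
</code></pre>A test then just does Guice.createInjector(new TestIdentityModule()), pulls the object under test out of the injector and asserts on its output, rather than asserting on which identity calls it happened to make.</p>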
]]></description><pubDate>Sun, 28 Dec 2025 23:41:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46415651</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=46415651</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46415651</guid></item><item><title><![CDATA[New comment by cletus in "Mattermost restricted access to old messages after 10000 limit is reached"]]></title><description><![CDATA[
<p>Story time. This has basically nothing to do with this post other than it involves a limit of 10,000 but hey, it's Christmas and I want to tell a story.<p>I used to work for Facebook and many years ago people noticed you couldn't block certain people, but the one that was most public was Mark Zuckerberg. It would just say it failed or something like that. And people would assign malice or at least intent to it. But the truth was much funnier.<p>Most data on Facebook is stored in a custom graph database that basically only has 2 tables, sharded across thousands of MySQL instances but almost always accessed via an in-memory write-through cache, also custom. It's not quite a cache because it has functionality built on top of the database that accessing the database directly wouldn't have.<p>So a person is an object and following them is an edge. Importantly, many such edges were one-way so it was easy to query if person A followed B but much more difficult to query all the followers of B. This was by design to avoid hot shards.<p>So I lied when I said there were 2 tables. There was a third that was an optimization that counted certain edges. So if you see "10.7M people follow X" or "136K people like this", it's reading a count, not doing a query.<p>Now there was another optimization here: only the last 10,000 edges for a given (object ID, edge type) pair were in memory. You generally wanted to avoid dealing with anything older than that because you'd start hitting the database and that was generally a huge problem on a large, live query or update. As an example, it was easy to query the last 10,000 people or pages you've followed.<p>You should be able to see where this is going. All that had happened was 10,000 people had blocked Mark Zuckerberg. Blocks were another kind of edge that was bidirectional (IIRC). The system just wasn't designed for a situation where more than 10,000 people wanted to block someone.<p>This got fixed many years ago because somebody came along and built a separate system to handle blocking that didn't have the 10,000 limit. I don't know the implementation details but I can guess. There was a separate piece of reverse-indexing infrastructure for doing queries on one-way edges. I suspect that was used.<p>Anyway, I love this story because it's funny how a series of technical decisions can lead to behavior and a perception nobody intended.</p>
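<p>A toy sketch of the kind of limit I'm describing, just to make the failure mode obvious (this is illustrative only; it is not how TAO actually works):<p><pre><code>    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    // Only the most recent LIMIT edges per (object ID, edge type) key live in
    // memory; anything older means an expensive trip to the sharded database.
    class EdgeCache {
        static final int LIMIT = 10_000;
        private final Map&lt;String, Deque&lt;Long>> recent = new HashMap&lt;>();

        void addEdge(String key, long targetId) { // key e.g. "user:123/BLOCKED_BY"
            Deque&lt;Long> window = recent.computeIfAbsent(key, k -> new ArrayDeque&lt;>());
            if (window.size() == LIMIT) {
                window.pollFirst(); // older edges now exist only in the database
            }
            window.addLast(targetId);
        }

        // Fast path only. Any caller that treats this as the complete edge list
        // quietly breaks once a key accumulates more than LIMIT edges.
        boolean hasEdge(String key, long targetId) {
            Deque&lt;Long> window = recent.get(key);
            if (window == null) {
                return false;
            }
            return window.contains(targetId);
        }
    }
</code></pre>Nothing malicious, just an optimization whose built-in assumption (nobody will ever need more than 10,000 of these) eventually stopped holding.</p>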
]]></description><pubDate>Thu, 25 Dec 2025 13:52:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46384396</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=46384396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46384396</guid></item><item><title><![CDATA[New comment by cletus in "At Least 13 People Died by Suicide Amid U.K. Post Office Scandal, Report Says"]]></title><description><![CDATA[
<p>People should go to jail for this.<p>Anyone who has worked on a large migration eventually lands on a pattern that goes something like this:<p>1. Double-write to the old system and the new system. Nothing uses the new system;<p>2. Verify the output in the new system vs the old system with appropriate scripts. If there are issues, which there will be for a while, go back to (1);<p>3. Start reading from the new system with a small group of users and then an increasingly large group. Still use the old system as the source of truth. Log whenever the output differs. Keep making changes until it always matches;<p>4. Once you're at 100% rollout you can start decommissioning the old system.<p>This approach is incremental, verifiable and reversible (there's a rough sketch of what steps 1-3 look like at the end of this comment). You need all of these things. If you engage in a massive rewrite in a silo for a year or two you're going to have a bad time. If you have no way of verifying your new system's output, you're going to have a bad time. In fact, people are going to die, as is the case here.<p>If you're going to accuse someone of a criminal act, a system just saying it happened should NEVER be sufficient. It should be able to show its work. The person or people who are ultimately responsible for turning a fraud detection into a criminal complaint should themselves be criminally liable if they make a false complaint.<p>We had a famous example of this with Hertz mistakenly reporting cars stolen, something they ultimately had to pay for in a lawsuit [1], but that's woefully insufficient. It is expensive, stressful and time-consuming to have to defend yourself against a felony charge. People will often be forced to take a plea because absolutely everything is stacked in the prosecution's favor despite the theoretical presumption of innocence.<p>As such, an erroneous or false criminal complaint by a company should itself be a criminal charge.<p>In Hertz's case, a human should eyeball the alleged theft and look for records like "do we have the car?", "do we know where it is?" and "is there a record of them checking it in?"<p>In the UK Post Office scandal, a detection of fraud from accounting records should have been verified by comparison to the existing system during a transition period AND, more so in the beginning, by double-checking results with forensic accountants (actual humans) before any criminal complaint was filed.<p>[1]: <a href="https://www.npr.org/2022/12/06/1140998674/hertz-false-accusation-stealing-cars-settlement" rel="nofollow">https://www.npr.org/2022/12/06/1140998674/hertz-false-accusa...</a></p>
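<p>To be concrete about what steps 1-3 look like, here's a rough sketch (the store interface and names are invented for illustration, not any real system):<p><pre><code>    interface LedgerStore {
        void record(String accountId, long amountPence);
        long balance(String accountId);
    }

    // Double-write, keep the old system authoritative, and log every
    // divergence instead of acting on the new system's answer.
    class MigratingLedgerStore implements LedgerStore {
        private final LedgerStore oldStore;       // current source of truth
        private final LedgerStore newStore;       // system being rolled out
        private final double shadowReadFraction;  // e.g. 0.01 = compare 1% of reads

        MigratingLedgerStore(LedgerStore oldStore, LedgerStore newStore,
                             double shadowReadFraction) {
            this.oldStore = oldStore;
            this.newStore = newStore;
            this.shadowReadFraction = shadowReadFraction;
        }

        public void record(String accountId, long amountPence) {
            oldStore.record(accountId, amountPence);
            newStore.record(accountId, amountPence); // step 1: double-write
        }

        public long balance(String accountId) {
            long truth = oldStore.balance(accountId);
            if (Math.random() &lt; shadowReadFraction) {
                long candidate = newStore.balance(accountId);
                if (candidate != truth) {
                    // step 3: a mismatch is a bug to investigate, never
                    // something to act on, let alone prosecute over
                    System.err.printf("MISMATCH account=%s old=%d new=%d%n",
                            accountId, truth, candidate);
                }
            }
            return truth; // the old system remains the source of truth
        }
    }
</code></pre>Only once that mismatch log stays empty across the rollout do you flip reads over and start decommissioning, and the whole thing is reversible at every step.</p>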
]]></description><pubDate>Fri, 11 Jul 2025 13:59:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44532233</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=44532233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44532233</guid></item><item><title><![CDATA[New comment by cletus in "Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix"]]></title><description><![CDATA[
<p>I realize scale makes everything more difficult but at the end of the day, Netflix is encoding and serving several thousand videos via a CDN. It can't be this hard. There are a few statements in this that gave me pause.<p>The core problem seems to be development in isolation. Put another way: microservices. This post hints at microservices having complete autonomy over their data storage and developing their own GraphQL models. The first is normal for microservices (but an indictment at the same time). The second is... weird.<p>The whole point of GraphQL is to create a unified view of something, not to have 23 different versions of "Movie". Attributes are optional. Pull what you need. Common subsets of data can be organized in fragments (there's an example at the end of this comment). If you're not doing that, why are you using GraphQL?<p>So I worked at Facebook and may be a bit biased here because I encountered a couple of ex-Netflix engineers in my time who basically wanted to throw away FB's internal infrastructure and reinvent Netflix microservices.<p>Anyway, at FB there is a Video GraphQL object. There aren't 23 or 7 or even 2.<p>Data storage for most things was via a write-through in-memory graph database called TAO that persisted things to sharded MySQL servers. On top of this, you'd use EntQL to add a bunch of behavior to TAO like permissions, privacy policies, observers and such. And again, there was one Video entity. There were offline data pipelines that would generally process logging data (ie outside TAO).<p>Maybe someone more experienced with microservices can speak to this: does UDA make sense? Is it solving an actual problem? Or just a self-created problem?</p>
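<p>For what it's worth, this is the kind of thing I mean by fragments over duplicate models (a made-up schema, not Netflix's or FB's):<p><pre><code>    # One canonical Movie type; each consumer pulls only the fields it needs.
    fragment MovieCard on Movie {
      id
      title
      posterUrl
    }

    fragment MovieDetail on Movie {
      ...MovieCard
      synopsis
      runtimeMinutes
      maturityRating
    }

    query HomeScreen {
      trendingMovies {
        ...MovieCard   # the home row never needs the detail fields
      }
    }
</code></pre>Different screens share the one Movie model and just pull different fields; nobody needs their own private version of it.</p>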
]]></description><pubDate>Sat, 14 Jun 2025 15:21:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44276917</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=44276917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44276917</guid></item><item><title><![CDATA[New comment by cletus in "The world could run on older hardware if software optimization was a priority"]]></title><description><![CDATA[
<p>The key part here is "machine utilization" and absolutely there was a ton of effort put into this. I think before my time servers were allocated to projects, but even early on in my time at Google, Borg had already adopted shared machine usage and there was a whole system of resource quotas implemented via cgroups.<p>Likewise there have been many optimization projects and they used to call these out at TGIF. No idea if they still do. One I remember was reducing the UDP health checks for Stubby: given that every single Google product uses Stubby extensively, even a small (5%? I forget) reduction in UDP traffic amounted to 50,000+ cores, which is (and was) absolutely worth doing.<p>I wouldn't even put latency in the same category as "performance optimization" because often you decrease latency by increasing resource usage. For example, you may send duplicate RPCs and wait for the fastest to reply. That could be doubling or tripling the effort.</p>
]]></description><pubDate>Tue, 13 May 2025 20:32:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43977380</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43977380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43977380</guid></item><item><title><![CDATA[New comment by cletus in "The world could run on older hardware if software optimization was a priority"]]></title><description><![CDATA[
<p>So I've worked for Google (and Facebook) and it really drives home just how cheap hardware is and how, most of the time, optimizing code isn't worth it.<p>More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget: CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.<p>Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper net because of the spindle bottleneck.<p>Anyway, all of these things can be turned into software engineer hours, often called "milli-SWEs", meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people, or hire fewer people but get more hardware, within their current budgets.<p>I don't remember the exact number of CPU cores that amounted to a single SWE but IIRC it was in the <i>thousands</i>. So if you spend 1 SWE-year working on optimization across your project and you're not saving 5000 CPU cores, it's a net loss (rough arithmetic at the end of this comment).<p>Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.<p>The other side of this is that there is (IMHO) a general usability problem with the Web in that it simply shouldn't take the resources it does. If you know people who had to, or still do, data entry for their jobs, you'll know that the mouse is pretty inefficient. The old text-based terminals from 30-40+ years ago had some incredibly efficient interfaces at a tiny fraction of the resource usage.<p>I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems, but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code in ways that don't work right with the mouse wheel.<p>I don't know how to solve that problem or even if it will ever be "solved".</p>
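<p>The back-of-the-envelope version of that trade-off (the numbers are from memory and purely illustrative):<p><pre><code>    public class OptimizationBreakEven {
        // Roughly what one SWE-year was worth in CPU cores, as best I recall.
        static final double CORES_PER_SWE_YEAR = 5_000;

        public static void main(String[] args) {
            double sweYearsSpent = 1.0;  // one engineer optimizing for a year
            double coresSaved = 1_200;   // hypothetical saving from that work

            double breakEven = sweYearsSpent * CORES_PER_SWE_YEAR;
            System.out.printf("saved %.0f cores, needed %.0f to break even: %s%n",
                    coresSaved, breakEven,
                    coresSaved >= breakEven ? "worth it" : "net loss");
        }
    }
</code></pre>The real accounting was more nuanced than one constant, but that was the gist of the trade-off.</p>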
]]></description><pubDate>Tue, 13 May 2025 16:32:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=43974735</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43974735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43974735</guid></item><item><title><![CDATA[New comment by cletus in "Leaving Google"]]></title><description><![CDATA[
<p>Sure. First you need to separate buffered and unbuffered channels.<p>Unbuffered channels basically operate like cooperative async/await but without the explicitness. In cooperative multitasking, putting something on an unbuffered channel is essentially a yield().<p>An awful lot of day-to-day programming is servicing requests. That could be HTTP, an RPC (eg gRPC, Thrift) or otherwise. For this kind of model, IMHO you almost never want to be dealing with thread primitives in application code. It's a recipe for disaster. It's so easy to make mistakes. Plus, you often need to make expensive calls of your own (eg reading from or writing to a data store of some kind) so there's not really a performance benefit.<p>That's what makes cooperative async/await so good for <i>application</i> code. The system should provide compatible APIs for doing network requests (etc). You never have to worry about out-of-order processing, mutexes, thread pool starvation or a million other issues.<p>Which brings me to the more complicated case of buffered channels. IME buffered channels are almost always a premature optimization that is often hiding concurrency issues. As in, if that buffered channel fills up you may deadlock in a way you never would while the buffer still had room. That can be hard to test for or find until it happens in production.<p>But let's revisit why you're optimizing this with a buffered channel. It's rare that you're CPU-bound. If the channel consumer talks to the network, any perceived benefit of concurrency is automatically gone.<p>So async/await doesn't let you buffer (and create bugs for little benefit) and otherwise acts like unbuffered channels. That's why I think it's a superior programming model for most applications.</p>
]]></description><pubDate>Sun, 11 May 2025 04:58:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=43951531</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43951531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43951531</guid></item><item><title><![CDATA[New comment by cletus in "Leaving Google"]]></title><description><![CDATA[
<p>Google has over the years tried to get several new languages off the ground. Go is by far the most successful.<p>What I find fascinating is that all of them that come to mind were conceived by people who didn't really understand the space they were operating in and/or had no clear idea of what problem the language solved.<p>There was Dart, which was originally intended to be shipped as a VM in Chrome until the Chrome team said no.<p>But Go was originally designed as a systems programming language. There's a lot of historical revisionism around this now but I guarantee you it was. And what's surprising about that is that having GC makes that an immediate non-starter. Yet it happened anyway.<p>The other big surprise for me was that Go launched without external dependencies being a first-class citizen of the Go ecosystem. For the longest time there were two methods of declaring them: either with URLs (usually GitHub) in the import statements or with badly supported manifests. Like, just copy what Maven did for Java. Not the bloated XML of course.<p>But Go has done many things right, like having a fairly simple (and thus fast to compile) syntax, shipping with gofmt from the start and favoring error return types over exceptions, even though it's kind of verbose (and Rust's matching is IMHO superior).<p>Channels were a nice idea but I've become convinced that cooperative async-await is a superior programming model.<p>Anyway, Go never became the C replacement the team set out to make. If anything, it's a better Python in many ways.<p>Good luck to Ian in whatever comes next. I certainly understand the issues he faced, which essentially come down to managing political infighting and fiefdoms.<p>Disclaimer: Xoogler.</p>
]]></description><pubDate>Sun, 11 May 2025 04:16:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43951315</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43951315</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43951315</guid></item><item><title><![CDATA[New comment by cletus in "Why Companies Don't Fix Bugs"]]></title><description><![CDATA[
<p>A lot of the time, a lack of bugfixes comes from the incentive structure management has created. Specifically, you rarely get rewarded for fixing things. You get rewarded for shipping new things. In effect, you're punished for fixing things because that's time you're not shipping new things.<p>Ownership is another one. For example, product teams are responsible for shipping new things, but support for existing things gets increasingly pushed onto separate support teams. This is really a consequence of the same incentive structure.<p>This is partially why I don't think that all subscription software is bad. The Adobe end of the spectrum is bad. The JetBrains end is good. There is value in creating good, reliable software. If your only source of revenue is new sales then bugs are even less of a priority, until it's so bad it makes your software virtually unusable. And it usually takes a long while to get there, with many ignored warnings along the way.</p>
]]></description><pubDate>Tue, 08 Apr 2025 05:42:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=43618654</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43618654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43618654</guid></item><item><title><![CDATA[New comment by cletus in "Comparing Fuchsia components and Linux containers [video]"]]></title><description><![CDATA[
<p>Xoogler here. I never worked on Fuchsia (or Android) but I knew a bunch of people who did and in other ways I was kinda adjacent to them and to platforms in general.<p>Some have suggested Fuchsia was never intended to replace Android. That's either a much later pivot (after I left Google) or it's historical revisionism. It absolutely was intended to replace Android and a bunch of ex-Android people were involved with it from the start. The basic premise was:<p>1. Linux's driver situation for Android is fundamentally broken and (in the opinion of the Fuchsia team) cannot be fixed. Windows, for example, put a lot of work into isolating faults within drivers to avoid kernel panics. Also, Microsoft created a relatively stable ABI for drivers. Linux doesn't do that. The process of upstreaming drivers is tedious and (IIRC) it often doesn't happen; and<p>2. (Again, in the opinion of the Fuchsia team) Android needed an ecosystem reset. I think this was a little more vague and, from what I could gather, meant different things to different people. But Android has a strange architecture. Certain parts are in the AOSP but an increasing amount was in what was then called Google Play Services. IIRC, an example was an SSL library. AOSP had one. Play had one.<p>Fuchsia, at least at the time, pretty much moved everything (including drivers) from kernel space into user space. More broadly, Fuchsia can be viewed in a similar way to, say, Plan 9 and microkernel architectures as a whole. Some think this can work. Some people who are way more knowledgeable and experienced in OS design are pretty vocal that it can't, because of the context-switching cost. You can find such treatises online.<p>In my opinion, Fuchsia always struck me as one of those greenfield vanity projects meant to keep very senior engineers around. Put another way: it was a solution in search of a problem. You can argue the flaws in Android's architecture are real but remember, Google doesn't control the hardware. At that time at least, it was Samsung. It probably still is. Samsung doesn't like being beholden to Google. They've tried (and failed) to create their own OS. Why would they abandon one ecosystem they don't control for another they don't control? If you can't answer that, then you shouldn't be investing billions (quite literally) into the project.<p>Stepping back a bit, Eric Schmidt, when he was CEO, seemed to hold the view that ChromeOS and Android could coexist. They could compete with one another. There was no need to "unify" them. So often, such efforts to unify different projects just lead to billions of dollars spent, years of stagnation and a product that is the lowest common denominator of the things it "unified". I personally thought it was smart not to bother, but I also suspected at some point someone would try, because that's always what happens. Microsoft completely missed the mobile revolution by trying to unify everything under Windows OS. Apple were smart to leave iOS and MacOS separate.<p>The only fruit of this investment and a decade of effort by now is Nest devices. I believe they tried (and failed) to embed themselves with Chromecast.<p>But I imagine a whole bunch of people got promoted and isn't that the real point?</p>
]]></description><pubDate>Tue, 04 Mar 2025 03:48:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43250044</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43250044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43250044</guid></item><item><title><![CDATA[New comment by cletus in "Intel delays $28B Ohio chip fabs to 2030"]]></title><description><![CDATA[
<p>You're alluding to the double taxation problem with dividends. This is a problem and has had a bunch of bad solutions (eg the passthrough tax break from 2017) when in fact the solution is incredibly simple.<p>In Australia, dividends come with what are called "franking credits". Imagine a company has a $1 billion profit and wants to pay that out as a dividend. The corporate tax rate is 30%. $700M is paid to shareholders. It comes with $300M (30%) in franking credits.<p>Let's say you own 1% of this company. When you do your taxes, you've made $10M in gross income (1% of $1B), been paid $7M and have $3M in tax credits. If your tax rate is 40% then you owe $4M on that $10M but you have effectively already paid $3M of it.<p>The point is, the net tax rate on your $10M gross payout is still whatever your marginal tax rate is. There is no double taxation.<p>That being said, dividends have largely fallen out of favor relative to share buybacks. Some of those reasons are:<p>1. It's discretionary. Not every shareholder wants the income. Selling on the open market lets you choose if you want money or not;<p>2. Share buybacks are capital gains and generally enjoy lower tax rates than income;<p>3. Reducing the pool of available shares puts upward pressure on the share price; and<p>4. Double taxation of dividends.<p>There are some who demonize share buybacks specifically. I'm not one of them. It's simply a vehicle for returning money to shareholders, functionally very similar to dividends. My problem is doing either to the point of destroying the business.</p>
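<p>Spelling out the arithmetic from that example (same numbers as above, just in code):<p><pre><code>    public class FrankingExample {
        public static void main(String[] args) {
            double companyProfit = 1_000_000_000.0; // $1B pre-tax profit
            double corporateRate = 0.30;
            double yourStake = 0.01;                // you own 1% of the company
            double yourMarginalRate = 0.40;

            double grossIncome = companyProfit * yourStake;      // $10M
            double frankingCredit = grossIncome * corporateRate; // $3M already paid
            double cashReceived = grossIncome - frankingCredit;  // $7M dividend
            double totalTaxDue = grossIncome * yourMarginalRate; // $4M
            double topUpTax = totalTaxDue - frankingCredit;      // $1M still to pay

            System.out.printf("cash $%,.0f, extra tax $%,.0f, net rate %.0f%%%n",
                    cashReceived, topUpTax, 100 * totalTaxDue / grossIncome);
        }
    }
</code></pre>The net rate comes out at your marginal rate either way, which is the whole point of the imputation system.</p>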
]]></description><pubDate>Sat, 01 Mar 2025 15:31:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=43220218</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43220218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43220218</guid></item><item><title><![CDATA[New comment by cletus in "Intel delays $28B Ohio chip fabs to 2030"]]></title><description><![CDATA[
<p>Reinvesting in the company is the one thing we should absolutely subsidize. That goes to wages, capital expenditure and other measures to sustain and grow the company.<p>Paying out dividends and doing share buybacks just strips the company of cash until there's nothing of value left. It's why enshittification is a thing.</p>
]]></description><pubDate>Sat, 01 Mar 2025 15:23:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=43220145</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43220145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43220145</guid></item><item><title><![CDATA[New comment by cletus in "Intel delays $28B Ohio chip fabs to 2030"]]></title><description><![CDATA[
<p>I'm all for root cause analysis. A big part of that is that large companies become extremely risk-tolerant because history has shown there is little to no downside to their actions. If the government always bails you out, what incentive is there to be prudent? You may as well fly close to the Sun and pay out big bonuses now. Insolvency is a "next quarter" problem.<p>I'm aware that TARP funds were repaid. Still, a bunch of that money went straight into bonuses [1]. Honestly, I'd rather the company be seized, restructured and sold.<p>You know who ends up making sacrifices to keep a company afloat? The labor force. After 2008, auto workers took voluntary pay cuts, gave up benefits and otherwise did what they could to keep their companies afloat; it took them ~15 years of fighting to get those benefits back. In a just world, executive compensation would go down to $1 until such time as labor's sacrifices are repaid.<p>[1]: <a href="https://www.theguardian.com/business/2009/jul/30/bank-bonuses-tarp" rel="nofollow">https://www.theguardian.com/business/2009/jul/30/bank-bonuse...</a></p>
]]></description><pubDate>Sat, 01 Mar 2025 15:20:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=43220102</link><dc:creator>cletus</dc:creator><comments>https://news.ycombinator.com/item?id=43220102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43220102</guid></item></channel></rss>