<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: GeneralMayhem</title><link>https://news.ycombinator.com/user?id=GeneralMayhem</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 20:29:58 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=GeneralMayhem" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by GeneralMayhem in "Waymo in Portland"]]></title><description><![CDATA[
<p>In FY25, according to their budget [1], TriMet - the Portland public transit authority - spent $19M on bus services.<p>In that same budget, PDOT spent $56M on streets, signs and streetlights, before you even consider the $242M spent on "asset management" - which appears to generally be capital improvements; i.e., rebuilding roads [2, page 509].<p>I don't care what fraction of that wear and tear is due to buses; it's not remotely close. And in any case, by the same fourth-power law, private 18-wheelers do astronomically more damage than buses.<p>And yes, PDOT makes revenue back from some of those things, so it's not all straight from the city general fund, but it doesn't matter in any practical way. They don't have revenues broken down as far as I'd like on that budget - there's one big $89M line item for "charges for services", which appears to include parking meters as well as tram fare - but the vast majority of their budget still comes from taxes plus "intergovernmental" sources (aka state and federal money, aka taxes).<p>[1] <a href="https://www.gpmetro.org/wp-content/uploads/2025/01/2025-Operating-Budget.pdf" rel="nofollow">https://www.gpmetro.org/wp-content/uploads/2025/01/2025-Oper...</a>
[2] <a href="https://www.portland.gov/budget/documents/fy-2025-26-city-portland-adopted-budget-vol-1-city-summaries-and-bureau-budgets/download" rel="nofollow">https://www.portland.gov/budget/documents/fy-2025-26-city-po...</a></p>
]]></description><pubDate>Wed, 29 Apr 2026 03:47:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47943975</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=47943975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47943975</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Waymo in Portland"]]></title><description><![CDATA[
<p>If you only wanted to run buses, you would not build nearly as many roads as we do.</p>
]]></description><pubDate>Wed, 29 Apr 2026 03:22:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47943838</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=47943838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47943838</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Waymo in Portland"]]></title><description><![CDATA[
<p>> by simple virtue of being a car<p>State and local governments spend a truly obscene amount of money building and repairing roads, and set aside a nauseating amount of publicly owned land to serve as roads, street parking, and parking lots. Those of us who don't frequently drive get some benefit from the roads, sure, because of the efficiencies of shops needing deliveries and whatnot, but not anything close to proportional to what drivers get out of it. And we accept this as the default way that things should be, whereas we assume that public transit needs to "pay for itself".</p>
]]></description><pubDate>Wed, 29 Apr 2026 03:10:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47943776</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=47943776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47943776</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "RAM Has a Design Flaw from 1966. I Bypassed It [video]"]]></title><description><![CDATA[
<p>Right, but the impressive part is finding addresses that are actually on different memory channels.</p>
]]></description><pubDate>Fri, 10 Apr 2026 07:16:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714681</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=47714681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714681</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "What size should I use for my queue"]]></title><description><![CDATA[
<p>> My current thinking is that queues don’t increase average throughput. Instead, they act as buffers that absorb short-term bursts and timing differences between senders and receivers<p>Absorbing bursts is one purpose for a queue, but often not the only or even most important one. Other reasons include:<p>Correctness reasons:<p>* Providing a durable history of requests that were sent<p>* Creating at-least-once semantics for important (e.g. financial) data<p>Scaling reasons:<p>* Allowing a shuffle between producer and consumer to change shard/key affinity (can be done with direct RPCs, but would need extra library support)<p>* Pipelining variable-cost requests so that consumers can self-rate-limit and stay optimally loaded</p>
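<p>To make the burst-absorption role concrete, here is a minimal sketch (my illustration, not tied to any particular queue product): a bounded in-memory buffer soaking up a producer burst while a slower consumer drains at a steady rate.</p>
<pre><code>import queue
import threading
import time

# Bounded buffer: absorbs bursts without letting the backlog grow without limit.
buf = queue.Queue(maxsize=100)

def consumer():
    # Drains at a steady ~100 items/sec regardless of the arrival pattern.
    while True:
        item = buf.get()
        time.sleep(0.01)   # simulated per-item processing cost
        buf.task_done()    # the "ack"; a durable queue would re-deliver un-acked items

threading.Thread(target=consumer, daemon=True).start()

# Burst: 50 items arrive nearly at once; put() blocks only if the buffer fills.
for i in range(50):
    buf.put(f"req-{i}")

buf.join()  # returns once every item has been acked
print("burst fully absorbed")
</code></pre>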
]]></description><pubDate>Mon, 23 Feb 2026 06:13:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47118716</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=47118716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47118716</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Everything as code: How we manage our company in one monorepo"]]></title><description><![CDATA[
<p>Do you take down all of your projects and then bring them back up at the new version? If not, then you have times at which the change is only partially complete.</p>
]]></description><pubDate>Tue, 30 Dec 2025 21:38:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46438326</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=46438326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46438326</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Rue: Higher level than Rust, lower level than Go"]]></title><description><![CDATA[
<p>Same here. I've worked on one project that used code generation to implement a DSL, but that would have been the same in any implementation language; it was basically transpiling. And protobufs, of course, but again, that's true in all languages.<p>The only thing I can think of that Go uses a lot of generation for that other languages have other solutions for is mocks. But in many languages the solution is "write the mocks by hand", so that's hardly fair.</p>
]]></description><pubDate>Mon, 22 Dec 2025 13:20:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46353903</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=46353903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46353903</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Prompt caching for cheaper LLM tokens"]]></title><description><![CDATA[
<p>Really only prefixes, without a significant loss in accuracy. The point is that because later tokens can't influence earlier ones, the post-attention embeddings for those first tokens can't change. But the post-attention embeddings for "and then tell me what" <i>would</i> be wildly different for every prompt, because the embeddings for those tokens are affected by what came earlier.<p>My favorite not-super-accurate mental model of what's going on with attention is that the model is sort of compressing the whole preceding context into each token. So the word "tell" would include a representation not just of the concept of telling, but also of what it is that's supposed to be told. That's explicitly what you don't want to cache.<p>> So if I were running a provider I would be caching popular prefixes for questions across all users<p>Unless you're injecting user context before the question. You can have a pre-baked cache with the base system prompt, but not beyond that. Imagine that the prompt always starts with "SYSTEM: You are ChatGPT, a helpful assistant. The time is 6:51 ET on December 19, 2025. The user's name is John Smith. USER: Hi, I was wondering..." You can't cache the "Hi, I was wondering" part because it comes after a high-entropy component (timestamp and user name).</p>
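<p>A toy sketch of the prefix-only constraint (my own illustration, not any provider's implementation): the cache is keyed by the exact token prefix that produced each KV state, so reuse stops at the first token that differs.</p>
<pre><code># KV states keyed by the exact token prefix that produced them.
kv_cache = {}

def longest_cached_prefix(tokens):
    """How many leading tokens already have cached KV states."""
    n = 0
    while n < len(tokens) and tuple(tokens[:n + 1]) in kv_cache:
        n += 1
    return n

def encode(tokens, forward_pass):
    hit = longest_cached_prefix(tokens)
    # Only tokens past the cached prefix pay for a forward pass. One
    # high-entropy early token (a timestamp, a user name) keeps `hit`
    # small, and everything after it must be recomputed.
    for i in range(hit, len(tokens)):
        kv_cache[tuple(tokens[:i + 1])] = forward_pass(tokens[:i + 1])
    return kv_cache[tuple(tokens)]
</code></pre>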
]]></description><pubDate>Fri, 19 Dec 2025 11:49:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46324785</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=46324785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46324785</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Why Twilio Segment moved from microservices back to a monolith"]]></title><description><![CDATA[
<p>Go with Bazel gives you a couple options:<p>* You can use gazelle to auto-generate Bazel rules across many modules - I think the most up-to-date usage guide is <a href="https://github.com/bazel-contrib/rules_go/blob/master/docs/go/core/bzlmod.md" rel="nofollow">https://github.com/bazel-contrib/rules_go/blob/master/docs/g...</a>.<p>* In addition, you can make your life a lot easier by just making the whole repo a single Go module. Having done the alternate path - trying to keep go.mod and Bazel build files in sync - I would definitely recommend only one module per repo unless you have a very high pain tolerance or actually need to be able to import pieces of the repo with standard Go tooling.<p>> a beefy VM to host CI<p>Unless you really need to self-host, GitHub Actions or GCP Cloud Build can be set up to reference a shared Bazel cache server, which lets builds stay quite snappy, since leaves that haven't changed never get rebuilt.</p>
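<p>For flavor, a MODULE.bazel sketch of that single-module setup (my reconstruction; version numbers are placeholders, and the linked doc is authoritative):</p>
<pre><code># MODULE.bazel (sketch; pin real versions)
bazel_dep(name = "rules_go", version = "0.50.0")
bazel_dep(name = "gazelle", version = "0.39.0")

# Third-party Go deps come straight from the single top-level go.mod,
# which is what makes the one-module-per-repo layout so much less painful.
go_deps = use_extension("@gazelle//:extensions.bzl", "go_deps")
go_deps.from_file(go_mod = "//:go.mod")
# ...plus use_repo(go_deps, ...) for each module you import.

# .bazelrc (sketch): point CI at a shared remote cache so unchanged
# leaves never get rebuilt:
#   build --remote_cache=https://cache.example.com
</code></pre>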
]]></description><pubDate>Sun, 14 Dec 2025 03:20:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46260496</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=46260496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46260496</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Why Twilio Segment moved from microservices back to a monolith"]]></title><description><![CDATA[
<p>>  if updating that shared library automatically updates everyone and isn’t backward compatible you’re doing it wrong that library should be published as a v2 or dependents should pin to a specific version<p>...but <i>why</i>? You're begging the question.<p>If you can automatically update everyone <i>including running their tests and making any necessary changes to their code</i>, then persisting two versions forever is a waste of time. If it's because you can't be certain from testing that it's actually a safe change, then fine, but note that that option <i>is still available to you</i> by copy/pasting to a v2/ or adding a feature flag. Going to a monorepo gives you strictly more options in how to deal with changes.<p>> You literally wouldn’t be able to keep track of your BOM in version control as it obtains a time component based on when you built the service<p>This is true regardless of deployment pattern. The artifact that you publish needs to have pointers back to all changes that went into it/what commit it was built at. Mono vs. multi-repo doesn't materially change that, although I would argue it's slightly easier with a monorepo since you can look at the single history of the repository, rather than having to go an extra hop to find out what version 1.0.837 of your dependency included.<p>> the version that was published in the registry<p>Maybe I'm misunderstanding what you're getting at, but monorepo dependencies typically don't <i>have</i> a registry - you just have the commit history. If a binary is built at commit X, then all commits before X across all dependencies are included. That's kind of the point.</p>
]]></description><pubDate>Sun, 14 Dec 2025 01:27:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46260010</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=46260010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46260010</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Why Twilio Segment moved from microservices back to a monolith"]]></title><description><![CDATA[
<p>Internal and external have wildly different requirements. Google internally <i>can't</i> update a library unless the update is either backward-compatible for all current users or part of the same change that updates all those users, and that's enforced by the build/test harness. That was an explicit choice, and I think an excellent one, for that scenario: it's more important to be certain that you're <i>done</i> when you move forward, so that it's obvious when a feature no longer needs support, than it is to enable moving faster in "isolation" when you all work for the same company anyway.<p>But also, you're conflating code and services. There's a huge difference between libraries that are deployed as part of various binaries and those that are used as remote APIs. If you want to update a utility library that's used by importing code, then you don't need simultaneous deployment, but you would like to update everywhere to get it done with - that's only really possible with a monorepo. If you want to update a remote API without downtime, then you need a multi-phase rollout where you introduce a backward-compatibility mode... but that's true whether you store the code in one place or two.</p>
]]></description><pubDate>Sat, 13 Dec 2025 23:14:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46259162</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=46259162</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46259162</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Why Twilio Segment moved from microservices back to a monolith"]]></title><description><![CDATA[
<p>I worked on building this at $PREV_EMPLOYER. We used a single repo for many services, so that you could run tests on all affected binaries/downstream libraries when a library changed.<p>We used Bazel to maintain the dependency tree, and then triggered builds based on a custom GitHub Actions hook that would use `bazel query` to find the transitive closure of affected targets. Then, if anything in a directory was affected, we'd trigger the set of tests defined in a config file in that directory (defaulting to :...), each as its own workflow run that would block PR submission. That worked really well, with the only real limiting factor being the ultimate upper limit of a repo in GitHub, but of course it took a fair amount of effort (a few SWE-months) to build all the tooling.</p>
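<p>The core of that hook looked roughly like this (a reconstruction, not the original code; it assumes one Bazel package per directory and skips the real-world handling of deleted or non-source files):</p>
<pre><code>import subprocess

def bazel_query(expr):
    out = subprocess.run(["bazel", "query", expr, "--output=label"],
                         check=True, capture_output=True, text=True).stdout
    return [line for line in out.splitlines() if line]

# Files touched by the PR, e.g. from `git diff --name-only origin/main...HEAD`.
changed = ["lib/foo/foo.go", "services/bar/handler.go"]  # hypothetical paths

# Crude path -> source-label conversion: lib/foo/foo.go becomes //lib/foo:foo.go.
labels = " ".join("//{}:{}".format(*f.rsplit("/", 1)) for f in changed)

# rdeps() walks the reverse dependency edges: every target whose transitive
# closure contains one of the changed files is affected.
affected = bazel_query(f"rdeps(//..., set({labels}))")

# Then trigger the per-directory test workflows (defaulting to :...) once
# per affected package.
for pkg in sorted({t.split(":")[0] for t in affected}):
    print(pkg)
</code></pre>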
]]></description><pubDate>Sat, 13 Dec 2025 23:11:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46259148</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=46259148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46259148</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "A Vibe Coded SaaS Killed My Team"]]></title><description><![CDATA[
<p>"Ignoring the code entirely and only prompting" is the only definition of vibe-coding I'm aware of. It's from a Karpathy tweet (<a href="https://x.com/karpathy/status/1886192184808149383" rel="nofollow">https://x.com/karpathy/status/1886192184808149383</a>):<p>> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists...  I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension.<p>It specifically doesn't mean "using an LLM as a code assistant". It <i>definitely</i> doesn't mean asking the LLM questions about code which you'll then use to write your own code. Those are LLM-assisted activities, and it's totally fine if you're using the LLM that way. But it's not what the term "vibe coding" means. "Vibe coding" is giving up on any pretense that you're in control, and letting the LLM take the wheel. It's fun for getting quick projects done, but it's also now becoming a distressingly common practice for people who literally do not know how to program in order to get a "product" to market.</p>
]]></description><pubDate>Wed, 26 Nov 2025 19:18:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46061308</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=46061308</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46061308</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "In orbit you have to slow down to speed up"]]></title><description><![CDATA[
<p>Turning around a track definitely dissipates <i>some</i> energy as heat through increased friction with the rails. Imagine taking a semicircle turn and making it tighter and tighter. At the limit, the train is basically hitting a solid wall and rebounding in the other direction, which would certainly transfer some energy.<p>The energy question is this: going from a 100 km/h-due-north momentum to a 100 km/h-due-south momentum via slowing, stopping, and accelerating again clearly takes energy. You can also switch the momentum vector by driving in a semicircle. Turning around a semicircle takes <i>some</i> energy, but how much - and where does it come from? Does it depend on how tight the circle is - or does that just spread it out over a wider time/distance? If you had an electric train with zero loss from battery to wheels, and you needed to get it from going north to going south, what would be the most efficient way to do it?</p>
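<p>For the idealized version of that question, standard mechanics gives a clean answer (my note, not from the thread): in uniform circular motion the centripetal force is always perpendicular to the velocity, so a constant-speed turn does zero work on the train no matter how tight the circle is:</p>
<pre><code>W = \int \vec{F} \cdot \vec{v}\, dt, \qquad
\vec{F} = -\frac{m v^2}{r}\,\hat{r}, \quad \vec{v} = v\,\hat{\theta}
\;\Rightarrow\; \vec{F} \cdot \vec{v} = 0 \;\Rightarrow\; W = 0
</code></pre>
<p>The real cost is all in the losses: flange and rail friction grow with the sideways force mv^2/r, so a tighter circle concentrates more wear and heat even though the ideal work stays zero.</p>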
]]></description><pubDate>Fri, 31 Oct 2025 15:02:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45772753</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=45772753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45772753</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Why UUIDs won't protect your secrets"]]></title><description><![CDATA[
<p>I like natural keys... if you can prove that they're actually immutable and unique for the thing they're representing. Credit card number is a decent natural key <i>for a table of payment instruments</i>, not for users. Even for a natural-key-believer, users pretty much always need a synthetic ID, because anything you might possibly believe to be constant about humans turns out not to be.</p>
]]></description><pubDate>Tue, 21 Oct 2025 04:11:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45652414</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=45652414</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45652414</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "California age verification bill backed by Google, Meta, OpenAI heads to Newsom"]]></title><description><![CDATA[
<p>I hadn't thought about GitHub - I'm guessing the authors of the bill didn't either - but you're right, that is somewhat concerning. Still, I don't think it's the end of the world...<p>> The requirement is also that developers will request the signal. No scoping to developers that have a reason to care?<p>I don't see that requirement. Here's the sum total of the developer's responsibilities (emphasis added):<p>> A developer with actual knowledge that a user is a child via receipt of a signal regarding a user’s age shall, <i>to the extent technically feasible</i>, provide readily available features for parents to support a child user with respect to the child user’s use of the service and <i>as appropriate given the risks that arise from use of the application</i>, including features to do all of the following:<p>>  (A) Help manage which accounts are affirmatively linked to the user under 18 years of age.<p>>  (B) Manage the delivery of age-appropriate content.<p>>  (C) Limit the amount of time that the user who is under 18 years of age spends daily on the application.<p>It would be nice if it had specific carve-outs for things that aren't expected to interact with this system, but it seems like they're leaving it up to court judgment instead, with just enough wiggle room in the phrasing to make that possible.<p>If your application doesn't have a concept of "accounts", then A is obviously moot. If you don't deliver age-inappropriate content, then B is moot. The only thing that can matter is C, but I'd expect that (a) nobody is going to complain about the amount of time their kids are spending on Vim and (b) the OS would just provide that control at a higher level.</p>
]]></description><pubDate>Mon, 15 Sep 2025 14:08:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45249949</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=45249949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45249949</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "California age verification bill backed by Google, Meta, OpenAI heads to Newsom"]]></title><description><![CDATA[
<p>It's always possible that they'll say it, but it would be a lie based on my reading of this bill. Sideloaded apps can choose whether or not to respect the OS's advice about the age of the user; it's not on the OS or device to enforce their honesty.</p>
]]></description><pubDate>Mon, 15 Sep 2025 14:02:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45249888</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=45249888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45249888</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "California age verification bill backed by Google, Meta, OpenAI heads to Newsom"]]></title><description><![CDATA[
<p>Bill text: <a href="https://legiscan.com/CA/text/AB1043/id/3193837" rel="nofollow">https://legiscan.com/CA/text/AB1043/id/3193837</a><p>This seems... not terrible? The typical counter-argument to any "think of the children!" hand-wringing is that parents should instead install parental controls or generally monitor what their own kids are up to. Having a standardized way to actually do that, without getting into the weirdness of third-party content controls (which are themselves a privacy/security nightmare), is not an awful idea. It's also limited to installed applications, so doesn't break the web.<p>This is basically just going to require all smartphones to have a "don't let this device download rated-M apps" mode. There's no actual data being provided - and the bill explicitly says so; it just wants a box to enter a birth date or age, not to link it to an actual ID. I'm not clear on how you stop the kid from just flipping the switch back to the other mode; maybe the big manufacturers would have a lock such that changing the user's birthdate when they're a minor requires approval from a parent's linked account?<p>That said, on things like this I'm never certain whether to consider it a win that a reasonable step was taken instead of an extreme step, or to be worried that it's the first toe in the door that will lead to insanity.</p>
]]></description><pubDate>Mon, 15 Sep 2025 00:40:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45244850</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=45244850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45244850</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "Ask HN: Best foundation model for CLM fine-tuning?"]]></title><description><![CDATA[
<p>Yeah... I'm far from an expert on state-of-the-art ML, but it feels like a new embedding would invalidate any of the layers you keep. Taking off a late layer makes sense to me, like in cases where you want to use an LLM with a different kind of output head for scoring or something like that, because the basic "understanding" layers are still happening in the same numerical space - they're still producing the same "concepts", which are just used in a different way, like applying a different algorithm to the same data structure. But if you have a brand new embedding, then you're taking the <i>bottom</i> layer off. Everything else is based on those dimensions. I suppose it's possible that this "just works", in that there's enough language-agnostic structure in the intermediate layers that the model can sort of self-heal over the initial embeddings... but that intuitively seems kind of incredible to me. A transformation over vectors from a completely different basis space feels vanishingly unlikely to do anything useful. And doubly so given that we're talking about a low-resource language, which might be more likely to have unusual grammatical or linguistic quirks that self-attention may not know how to handle.</p>
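<p>For concreteness, the setup being doubted here looks something like this sketch (hypothetical model name; the Hugging Face calls are real, but the parameter-name filter varies by architecture):</p>
<pre><code>import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("some-base-model")  # placeholder name

# Rip out the bottom layer: a fresh embedding sized for the new
# low-resource-language tokenizer, in a basis the frozen layers
# above it have never seen.
hidden = model.get_input_embeddings().embedding_dim
model.set_input_embeddings(nn.Embedding(32000, hidden))

for name, p in model.named_parameters():
    # Train only the new embedding and hope the rest "self-heals";
    # the "embed_tokens" name here is LLaMA-style and model-specific.
    p.requires_grad = "embed_tokens" in name
</code></pre>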
]]></description><pubDate>Mon, 01 Sep 2025 09:27:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45091042</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=45091042</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45091042</guid></item><item><title><![CDATA[New comment by GeneralMayhem in "PuTTY has a new website"]]></title><description><![CDATA[
<p>It's much weirder now.<p>The current holder of that domain is using it to host a single page that pushes anti-vax nonsense under the guise of fighting censorship... but also links to the actual PuTTY site. Very weird mix of maybe-well-meaning and nonsense.</p>
]]></description><pubDate>Sat, 16 Aug 2025 04:15:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44920113</link><dc:creator>GeneralMayhem</dc:creator><comments>https://news.ycombinator.com/item?id=44920113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44920113</guid></item></channel></rss>