<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mojosam</title><link>https://news.ycombinator.com/user?id=mojosam</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 04:24:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mojosam" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mojosam in "Thoughts on slowing the fuck down"]]></title><description><![CDATA[
<p>> Once the codebase has become fully agentic, i.e., only agents fundamentally understand it<p>What exactly do we mean by this? Because it is obviously common for human coders to tackle learning how an unfamiliar and complex codebase works so that they can modify it (new hires do it all the time). I can think of two things this could mean:<p>* The code and architecture being produced by agents takes approaches that are abnormally complex or inscrutable to human reviewers. Is that what folks working with cutting-edge agents are seeing? In which case, such code obviously isn’t being reviewed; it can’t be.<p>* The code and architecture being produced by agents can still be understood by human reviewers, but it isn’t actually being reviewed by anyone — since reviewing pull requests isn’t always fun or easy, and injecting in-depth human review slows everything down a lot — and so no one understands how the code works. (I keep thinking about the AI maximalist who recently said he woke up to 75 pull requests from his agent, like that was a good thing.)<p>And maybe it’s a combination of the two: agent-generated pull requests are incrementally harder to grok, which makes reviewing more painful and take longer, which means more of them go without in-depth reviews.<p>But if your claim is true, the bottom line is that no one is fully reviewing code produced by agents.</p>
]]></description><pubDate>Thu, 26 Mar 2026 12:47:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47529771</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=47529771</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47529771</guid></item><item><title><![CDATA[New comment by mojosam in "Cells use 'bioelectricity' to coordinate and make group decisions"]]></title><description><![CDATA[
<p>This is classic Quanta Magazine sensationalism. Here's what the study actually said:<p>As cells in epithelial tissue get crowded, their membranes start to allow more sodium ions to enter, which makes the cells more electrically positive (depolarization). The cells try to counter this, but cells with insufficient stored energy (ATP) will struggle to do so and will lose water through their membranes, causing them to shrink, which causes them to signal their neighbors to extrude them.<p>So there are no "group decisions" being made, no "coordination" between cells using "bioelectricity". Yes, all cells rely on electrical potentials across their membranes for normal functioning, potentials that they have to maintain. That's the full extent of electricity's involvement here.<p>And the only "decision-making" happening here is within a single cell. But of course cells don't "make decisions"; cells are little machines, and part of the mechanism for epithelial cells -- a mechanism that works in part using chemistry and electricity -- includes the cell signaling that it needs to be extruded in certain circumstances, like shrinkage.</p>
]]></description><pubDate>Sun, 01 Feb 2026 14:51:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46846582</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=46846582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46846582</guid></item><item><title><![CDATA[New comment by mojosam in "Cells use 'bioelectricity' to coordinate and make group decisions"]]></title><description><![CDATA[
<p>> I don’t believe genetics ever claimed to provide a theory of why eyes grow where eyes grow.<p>That’s the whole point of developmental biology, to show how features of the human body form and develop based on gene expression, the timing of which during embryonic and fetal development itself is dictated by your genes.<p>If not your genes, what else would determine why you have eyes in about the same place in your head as every other human?<p>> The cells in your eyes have exactly the same DNA as the cells in your big toe, so developmental morphology cannot be explained with DNA alone.<p>Sure it can, because while every cell has essentially the same DNA, the expression of genes differs between cells, which is what causes cells to differentiate. And this differentiation also controls development; look up the Hox genes as an example.</p>
]]></description><pubDate>Sun, 01 Feb 2026 13:56:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46846248</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=46846248</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46846248</guid></item><item><title><![CDATA[The "Discombobulator": Unpacking the Physics of the Weapon That Captured Maduro]]></title><description><![CDATA[
<p>Article URL: <a href="https://medium.com/@jcanchola1264/the-discombobulator-unpacking-the-physics-and-the-risks-of-the-weapon-that-captured-maduro-899be6f43aa9">https://medium.com/@jcanchola1264/the-discombobulator-unpacking-the-physics-and-the-risks-of-the-weapon-that-captured-maduro-899be6f43aa9</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46754411">https://news.ycombinator.com/item?id=46754411</a></p>
<p>Points: 16</p>
<p># Comments: 12</p>
]]></description><pubDate>Sun, 25 Jan 2026 14:40:42 +0000</pubDate><link>https://medium.com/@jcanchola1264/the-discombobulator-unpacking-the-physics-and-the-risks-of-the-weapon-that-captured-maduro-899be6f43aa9</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=46754411</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46754411</guid></item><item><title><![CDATA[New comment by mojosam in "Economics of Orbital vs. Terrestrial Data Centers"]]></title><description><![CDATA[
<p>> That "why" is almost missing from the public conversation. People jump straight to hardware and hand-wave the business case, as if the economics are self-evident. They aren't.<p>But then he never answers that fundamental question, and jumps straight to the hardware and power and cost. What problems are orbital data centers trying to solve? What optimizations are they intended to deliver? Are these optimizations beneficial to everyone who uses a data center, or just to operators or users of orbiting satellite constellations?<p>> But the knock-on effects are why this keeps pulling at people. If you can industrialize power and operations in orbit at meaningful scale, you're not just running GPUs. You're building a new kind of infrastructure that makes it easier for humans to keep spreading out. Compute is just one of the first excuses to pay for the scaffolding.<p>This seems to be the closest we get to a “why”, but it doesn’t make much sense. A constellation of 40,000 satellites with GPUs is “infrastructure that makes it easier for humans to keep spreading out”? How?<p>> The target I care about is simple: can you make space-based, commodity compute cost-competitive with the cheapest terrestrial alternative? That's the whole claim. … Can you deliver useful watts and reject the waste heat at a price that beats a boring Crusoe-style tilt-wall datacenter tied into a 200–500 MW substation?<p>Isn’t the answer clearly “no”? The default settings of his model — which I assume he considers optimal — tell us that power for orbital data centers will cost 3.5x that of terrestrial ones, and that only SpaceX has the vertical integration to even attempt this. So again, where is the competitive advantage?<p>Also, I don’t understand why he’s including satellite construction and launch costs for a 40,000-satellite constellation in his analysis if he’s assuming SpaceX as he claims. Wouldn’t SpaceX simply implement these compute capabilities in the next generation of Starlink, which would reduce costs significantly?<p>> It might not be rational. But it might be physically possible.<p>But isn’t that precisely what everyone has been saying? I don’t think the question has been whether orbital data centers are possible; it’s been whether they are rational. And that centers foremost on the unanswered question: why is this a good idea?</p>
]]></description><pubDate>Tue, 16 Dec 2025 14:45:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46289135</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=46289135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46289135</guid></item><item><title><![CDATA[New comment by mojosam in "Our investigation into the suspicious pressure on Archive.today"]]></title><description><![CDATA[
<p>> Someone with a subscription logs into the site, then archives it.<p>That’s not the case. I don’t have a NYT subscription; I just Googled for an old, obscure article from 1989 on pork bellies that I thought would be unlikely for archive.today to have cached, and sure enough, when I asked to retrieve that article, it didn’t have it and began the caching process. A few minutes later, it came up with the webpage, which, if you visit it on archive.is, shows it was first cached just a few minutes ago.<p><a href="https://www.nytimes.com/1989/11/01/business/futures-options-pork-bellies-rise-the-limit-as-export-optimism-grows.html" rel="nofollow">https://www.nytimes.com/1989/11/01/business/futures-options-...</a><p>My assumption has been that the NYT is letting them past the paywall, much like the unrelated Wayback Machine. How else could this be working? The only ways I can think it could work are that either they have access to a NYT account and are caching using that — something I suspect the NYT would notice and shut down — or there is some hole in the paywall they are exploiting (but they are not going through the Wayback Machine, since the caching process shows they are pulling directly from the NYT).</p>
]]></description><pubDate>Sat, 15 Nov 2025 14:49:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45937797</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=45937797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45937797</guid></item><item><title><![CDATA[New comment by mojosam in "Jeep pushed software update that bricked all 2024 Wrangler 4xe models"]]></title><description><![CDATA[
<p>> So can someone who owns a modern car please help me understand why you would buy a car that has the mere capability to be remotely shut off?<p>That’s not what is going on here. These cars are not being intentionally shut down remotely. Instead, a software update for some of the cars’ computerized components was pushed to the cars and installed with the owners’ permission, but that update apparently has severe bugs that should have been caught by QA.</p>
]]></description><pubDate>Sun, 12 Oct 2025 15:28:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45558893</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=45558893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45558893</guid></item><item><title><![CDATA[New comment by mojosam in "Searchcode.com's SQLite database is probably 6 terabytes bigger than yours"]]></title><description><![CDATA[
<p>Yeah, I just searched for “driver_register”, a call that would show up in a large number of Linux drivers in the open-source Linux kernel, not to mention other public-facing repos, and it only returned two results, neither from the mainline Linux kernel repo.</p>
]]></description><pubDate>Mon, 17 Feb 2025 13:04:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43078553</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=43078553</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43078553</guid></item><item><title><![CDATA[New comment by mojosam in "The almost-lost art of rosin potatoes"]]></title><description><![CDATA[
<p>It sounds like most commenters here have never had a rosin tater. When I was a kid, a very popular, upscale restaurant called Planters Back Porch, in the seafood mecca of Murrells Inlet, SC, specialized in rosin taters. They were very good, good enough that there were always long lines to get in.<p>In case it’s not clear from the description: after removing the potato from the rosin and wrapping it in paper, the thin layer of remaining rosin quickly solidifies into a hard shell, so you can then cut through it to get at the flesh of the potato without accidentally eating rosin.</p>
]]></description><pubDate>Sat, 02 Nov 2024 12:42:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=42026057</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=42026057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42026057</guid></item><item><title><![CDATA[New comment by mojosam in "Energy-based model explains how chronic stress transforms into disease over time"]]></title><description><![CDATA[
<p>It sounds like the authors are suggesting that additional energy usage caused by stress can, in isolation from other causes, be a mechanism for disease. But that doesn’t make much sense:<p><pre><code>  - our metabolisms are adaptable, so why wouldn’t this increase in energy use simply be offset by an increase in energy production? It can’t be that people who are stressed in general aren’t getting enough energy, because that would correlate stress with weight loss, and I would argue that there are plenty of overweight people with stress.

  - if the argument is that an increased metabolism by itself is the culprit, then why wouldn’t people with higher metabolisms in general — like anyone who exercises regularly, but certainly athletes — also experience more disease? If your answer is “that’s different for some reason”, then increased energy usage and metabolism is not by itself the cause, which suggests it may not be the cause at all.
</code></pre>
Furthermore, even granting the supposition that stress requires increased energy usage, their abstract doesn’t make much sense:<p><pre><code>  - “Living organisms have a limited capacity to consume energy.” Okay, so that means that no matter how stressed we get, there’s a cap to the energy we can use. But how is that relevant, since it also applies to exercise or other energy utilization by the body? Why does a limited capacity to consume energy only apply to stress?

  - “Overconsumption of energy by [stress handling] brain-body processes leads to … excess energy expenditure above the organism’s optimum”. That’s basically a tautology, but more importantly, it doesn’t tell us that energy consumption above “optimal” — which seems extremely vague — is a bad thing.

  - “In turn, [excess energy consumption above the optimal] accelerates physiological decline in cells, laboratory animals, and humans, and may drive biological aging”. So that “may” is a pretty good reason to dismiss this, since again why wouldn’t this lead to increased disease among athletes or anyone with higher metabolism?

  - “Mechanistically, the energetic restriction of growth, maintenance and repair processes leads to the progressive wear-and-tear of molecular and organ systems” Maybe, but why are those processes energetically restricted if metabolism has increased to provide more energy? And again, why don’t we then see increased disease and aging in anyone who exercises regularly, since exercise not only uses energy that could otherwise go to growth, maintenance, and repair, but also creates more need for repair?
</code></pre>
I think the core problem is that it’s all going to boil down to how you define “optimum”, which the authors conveniently don’t. The authors are going to be left with defining “optimum” as meaning “that energy usage which does not cause disease”. But that’s no different than simply claiming “stress causes disease”, so this model describes nothing, since it tells us nothing about how to identify non-optimum energy usage or how non-optimum energy usage causes disease.</p>
]]></description><pubDate>Sun, 20 Oct 2024 16:14:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=41896353</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=41896353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41896353</guid></item><item><title><![CDATA[New comment by mojosam in "Intel sells stake in chip designer Arm Holdings"]]></title><description><![CDATA[
<p>A lot of people don’t remember that Intel was a huge early ARM licensee. If you were building a smart mobile device 25 years ago, you were probably seriously considering the Intel StrongARM SoC. Intel then followed this up with the more advanced ARM XScale family of SoCs, which you’d likely use if you wanted to build an ARM-based battery-powered smart device in the early 2000s. Background per Wikipedia:<p>> The StrongARM is a family of computer microprocessors developed by Digital Equipment Corporation and manufactured in the late 1990s which implemented the ARM v4 instruction set architecture. It was later acquired by Intel in 1997 from DEC's own Digital Semiconductor division as part of a settlement of a lawsuit between the two companies over patent infringement. Intel then continued to manufacture it before replacing it with the StrongARM-derived ARM-based follow-up architecture called XScale in the early 2000s.<p>However, after developing and manufacturing these for nine years, Intel exited this business by selling their ARM unit to Marvell. Intel was developing its own “low power” x86 chip, the Atom, and decided to put all its mobile eggs in that basket, which unfortunately was never as low-power as comparable ARM designs. I suspect Intel also saw that the number of licensees in the ARM market was growing, and competition along with it; their value-add wasn’t that great, and their margins were necessarily smaller due to the ARM licensing fees.</p>
]]></description><pubDate>Fri, 16 Aug 2024 12:10:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=41265594</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=41265594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41265594</guid></item><item><title><![CDATA[New comment by mojosam in "How to mail an SD card with gummy glue"]]></title><description><![CDATA[
<p>The product description in the FAQ is entirely based on what it doesn’t or can’t do, and says nothing about what it will be able to do, except run an off-the-shelf window manager on an off-the-shelf OS. Are any apps going to be available? And how do customers install third-party apps without the dreaded Cloud?<p>They are aiming this at someone with:<p>> a high discretionary budget for personal electronics and willingness to pay a premium for novel ideas.<p>But what are those novel ideas that would justify the “quite high” price?<p>And if I wanted a BSD-based desktop computer with “No AI. No Cloud. No Distractions”, I would just buy a Mac Mini, not log into an Apple ID, disable Siri, and put it into Do Not Disturb. And macOS has never been a “walled garden”. So from a customer’s perspective, why wouldn’t that be an easier, cheaper, and superior solution?</p>
]]></description><pubDate>Tue, 16 Jul 2024 09:16:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=40974812</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=40974812</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40974812</guid></item><item><title><![CDATA[New comment by mojosam in "Comets that 'bounce' from planet to planet could spread life across the universe"]]></title><description><![CDATA[
<p>I don't see how that's an apt analogy. Geocentrism put the Earth at the center of the universe, around which the rest of the universe rotated. But saying life on Earth originated on Earth does not in any way put the Earth at the center of anything. Nor does it in any way mean that Earth is unique.<p>The bottom line is that -- because we don't know how abiogenesis occurred, whether here or somewhere else -- we have no way to judge how common it is. It could be that, given enough time, life spontaneously forms on any planet or moon that offers a certain set of conditions, and Earth just happens to be one of those planets, meaning it is still not "the center" of anything.<p>In fact, in the extreme case, panspermia is much more geocentric, saying that life formed in just one very special place -- maybe not the Earth, but somewhere, the "center of life in the universe" -- and then spread by diffusion to all the other locations in which life existed. But that seems like an unlikely and unnecessary model; if life can spontaneously begin somewhere, why should we assume it can't begin in many places, and if that's true, why not also on Earth?</p>
]]></description><pubDate>Thu, 16 Nov 2023 00:29:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=38284390</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=38284390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38284390</guid></item><item><title><![CDATA[New comment by mojosam in "The C Programming Language: Myths and Reality"]]></title><description><![CDATA[
<p>I think there’s a better example, but whether it applies depends on one of two major divisions of C code: that designed to run on systems with an MMU (as typically used for Linux and other large OSes) — where virtual memory makes dynamic memory allocation practical — and that designed to run without one — which today is primarily the very large world of embedded devices.<p>For the latter, the industry best practice is to avoid malloc(), except maybe at init time, and instead allocate memory statically. In that use case, you break your code into modules, which can contain private data, public data, private functions, and public functions.<p>In other words, building an app out of C modules is a lot like building an app in a more modern language using only static classes, with no instantiation. And in that design pattern — which is extremely common in the embedded world — we have a direct equivalent to the “private” qualifier, which is “static”: it restricts the rest of the app from accessing so-marked file-scope variables and functions.<p>Where this breaks down — as always with C — is when you need multiple instantiations of a module, which modern programming languages call an object. The closest we can get in C is to pass the module’s public functions a pointer to a struct containing the object’s non-static data. And as the author explains, there are standard ways to make that data structure opaque to calling code, but those are definitely workarounds to language shortcomings.<p>But the bottom line is that those language shortcomings — the lack of objects and a private qualifier for their members — are only shortcomings if you need those features, and in the embedded world, most applications don’t; they only require the advantages offered by C. So as always, this is about picking the right language for the project; there’s no one-size-fits-all.</p>
]]></description><pubDate>Mon, 17 Jul 2023 15:13:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=36759240</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=36759240</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36759240</guid></item><item><title><![CDATA[New comment by mojosam in "CWE Top Most Dangerous Software Weaknesses"]]></title><description><![CDATA[
<p>Isn’t #17 the same as #1 and #7 combined?</p>
]]></description><pubDate>Thu, 13 Jul 2023 13:08:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=36708536</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=36708536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36708536</guid></item><item><title><![CDATA[New comment by mojosam in "Metformin shown to prevent long Covid"]]></title><description><![CDATA[
<p>The claim is that metformin greatly reduces viral load. If your long COVID is caused by the virus continuing to linger in some tissues, as there is evidence might be the case for some patients, then it might. But if your long COVID is caused by microclots or lung scarring or other physiological damage caused by COVID, then no.</p>
]]></description><pubDate>Fri, 16 Jun 2023 12:02:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=36355127</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=36355127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36355127</guid></item><item><title><![CDATA[New comment by mojosam in "Facebook is going after LLaMA repos with DMCA's"]]></title><description><![CDATA[
<p>Some degree of “human authorship” is a requirement for copyright. I don’t see how weights generated by training an AI would be protected.</p>
]]></description><pubDate>Fri, 24 Mar 2023 14:11:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=35289290</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=35289290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35289290</guid></item><item><title><![CDATA[New comment by mojosam in "A man says he accidentally unlocked and drove someone else’s Tesla using the app"]]></title><description><![CDATA[
<p>So does that mean that if the owner accidentally leaves their phone in their Tesla — which is easy to do — it stays unlocked and anyone can drive off with it?</p>
]]></description><pubDate>Sat, 11 Mar 2023 13:59:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=35108323</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=35108323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35108323</guid></item><item><title><![CDATA[New comment by mojosam in "Report: U.S. pedestrian death rate increased during COVID"]]></title><description><![CDATA[
<p>In addition to bad use of statistics, this article really buries the lede, making the identification of solutions difficult. US pedestrian deaths steadily declined over 20 years, from 2.6 per 100K in 1990 to 1.4 per 100K in 2010 -- a 46% decrease -- but then in the following 10 years increased to 1.96 per 100K -- a 40% increase in half the time.<p>Furthermore, the curve is V-shaped, with the best year in 2009; it seems like some thing or things happened around 2010 that suddenly changed a long-term downward trend into a much more rapid upward trend. It's hard to see how a change in human behavior would cause such an abrupt turnaround or such a steady increase.<p>I think what fits the data best is that this is due to a technological change, and I suspect it might be tied to sales of hybrids and EVs, which are quieter vehicles that pedestrians are less likely to be aware of, and sales of which really ramped up starting in the late 2000s. Every year, we add more of these vehicles to the roads -- which is a very good thing overall -- and that would explain why the pedestrian death rate is steadily increasing.<p>In other words, it may be that this is not due to a change in human behavior, but rather that human behavior hasn't caught up to a technology change. In addition, it looks like NHTSA's "2015 Pedestrian and Bicyclist Data Analysis" suggests that pedestrian deaths for kids continued to decrease between 2010-2015, but that adults 50+ suffered the largest increases. In other words, the demographic that is both hardest of hearing and slowest to adapt to hybrids/EVs was hit the hardest.</p>
]]></description><pubDate>Wed, 01 Mar 2023 12:19:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=34981540</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=34981540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34981540</guid></item><item><title><![CDATA[New comment by mojosam in "Ask HN: How to Stop Caring (Professionally)?"]]></title><description><![CDATA[
<p>I don’t know if this will help, but I can tell you what helped me. Quite a few years ago, I stopped working as an employee and started working as a freelancer/consultant. I get that that’s probably not an option in your case, but let me continue.<p>In doing that, I changed my perspective, in a way that I think is also possible while still working as an employee. The only company I worry about now is me: my own business, my own career, my own success.<p>That’s not to say that I don’t want my clients to be successful, or that I don’t work my ass off trying to help them be successful, but I don’t worry if they aren’t. Why not? Because I understand that I have no control over them, no way to prevent them from making bad decisions, no way to prevent them from being managed poorly (as most companies are).<p>The problem is that employers want you to have an emotional stake in their success — to take personal responsibility for their success — when you ultimately have no control. That’s kind of the point of stock options — making you care about the success of the company as a whole so you’ll work hard and stick around — but you still have no real control over the company’s success. And that dynamic — giving responsibility without control — is a classic sign of bad management.<p>So don’t play that game: only take responsibility for, and only worry about, the things you have control over. And for most employees, that’s just themselves. Focus on the success of your personal business — how you do your assignments, the next step in your career, your mental and physical health, your family — and let the rest go.</p>
]]></description><pubDate>Sun, 02 Oct 2022 12:57:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=33055526</link><dc:creator>mojosam</dc:creator><comments>https://news.ycombinator.com/item?id=33055526</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33055526</guid></item></channel></rss>