<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: yathaid</title><link>https://news.ycombinator.com/user?id=yathaid</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 23:52:06 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=yathaid" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by yathaid in "Oh My Zsh adds bloat"]]></title><description><![CDATA[
<p>Slightly off topic but:<p>>> My workflows involve opening and closing up to hundreds of terminal or tmux tabs a day.<p>What?!?</p>
]]></description><pubDate>Sat, 10 Jan 2026 05:37:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46563084</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=46563084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46563084</guid></item><item><title><![CDATA[New comment by yathaid in "I don't care how well your "AI" works"]]></title><description><![CDATA[
<p>>> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress.<p>Without an explanation of what the author is calling out as flaws, it is hard to take this article seriously.<p>I know engineers I respect a ton who have gotten a bunch of productivity upgrades using "AI". My own learning curve has been to see Claude say "okay, these integration tests aren't working. Let me write unit tests instead" and move on when it wasn't able to fix a Jest issue.</p>
]]></description><pubDate>Wed, 26 Nov 2025 15:32:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46058369</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=46058369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46058369</guid></item><item><title><![CDATA[New comment by yathaid in "Advice for new principal tech ICs (i.e., notes to myself)"]]></title><description><![CDATA[
<p>Nobody performs as the CEO until they are given the CEO title. If the Peter principle were true 100% of the time, we wouldn't have any successful CEOs ever, which is clearly not the case.</p>
]]></description><pubDate>Tue, 28 Oct 2025 06:40:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45729760</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=45729760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45729760</guid></item><item><title><![CDATA[New comment by yathaid in "Advice for new principal tech ICs (i.e., notes to myself)"]]></title><description><![CDATA[
<p>A part of the job is only enabled when you get the Principal label. Unlike almost all other transitions, you only prove that you can do the role when given the opportunities. The hardest part about this transition is that you are doing two almost orthogonal roles - Sr. SDE / Tech Lead and the principal parts. It is very easy to not show impact in the former while chasing the latter.</p>
]]></description><pubDate>Mon, 27 Oct 2025 06:30:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45717969</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=45717969</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45717969</guid></item><item><title><![CDATA[New comment by yathaid in "GrapheneOS accessed Android security patches but not allowed to publish sources"]]></title><description><![CDATA[
<p>>> Those are choices. If you want to do that, you need a process that can support it.<p>__need__ is doing a lot of work here. There is no forcing function to get OEMs to do this ASAP: 1) the market doesn't really care that much 2) there are no regulations around this (and even if there were, can you immediately recall a tech exec going to jail for breaking the law ... )</p>
]]></description><pubDate>Thu, 11 Sep 2025 18:13:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45214473</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=45214473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45214473</guid></item><item><title><![CDATA[New comment by yathaid in "US attack on renewables will lead to power crunch that spikes electricity prices"]]></title><description><![CDATA[
<p>This is a feature, not a bug.</p>
]]></description><pubDate>Sun, 24 Aug 2025 15:04:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45004758</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=45004758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45004758</guid></item><item><title><![CDATA[New comment by yathaid in "Systems Correctness Practices at Amazon Web Services"]]></title><description><![CDATA[
<p>>> a good design<p>good is doing a lot of heavy lifting here. The point of TLA+/Pluscal is to have a proof of the soundness of the design.</p>
]]></description><pubDate>Fri, 30 May 2025 14:35:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44136642</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=44136642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44136642</guid></item><item><title><![CDATA[New comment by yathaid in "What If We Had Bigger Brains? Imagining Minds Beyond Ours"]]></title><description><![CDATA[
<p>There is a long tradition in India, which started with oral transmission of the Vedas, of parallel cognition. It is almost an art form or a mental sport - <a href="https://en.wikipedia.org/wiki/Avadhanam" rel="nofollow">https://en.wikipedia.org/wiki/Avadhanam</a></p>
]]></description><pubDate>Thu, 29 May 2025 05:22:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44123364</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=44123364</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44123364</guid></item><item><title><![CDATA[New comment by yathaid in "Ditching Obsidian and building my own"]]></title><description><![CDATA[
<p>> Obsidian charges $8 a month to access the same notes across multiple devices. While not a huge amount for such a useful app, it adds up to an eye-watering amount - almost $1,000 if I planned to use Obsidian for a decade.<p>This highlights one of my personal bugbears. People have a mental barrier when it comes to recurring, low-cost payments, even though the net sum is small in comparison to other things that they wouldn't think twice about paying for.<p>A $5 latte every workday comes to (260 * 5 =) $1,300 annually. Obsidian sync is $96. Why would you not pay this amount for a tool you use every day?</p>
]]></description><pubDate>Mon, 19 May 2025 08:10:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44027530</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=44027530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44027530</guid></item><item><title><![CDATA[New comment by yathaid in "LegoGPT: Generating Physically Stable and Buildable Lego"]]></title><description><![CDATA[
<p>This is super cool! The GIFs showing the object being built are just yummy; I have no other way to describe it.<p>If anyone else was searching for the dataset, it is at <a href="https://huggingface.co/datasets/AvaLovelace/StableText2Lego" rel="nofollow">https://huggingface.co/datasets/AvaLovelace/StableText2Lego</a><p>It "contains 47,000+ different LEGO structures, covering 28,000+ unique 3D objects from 21 common object categories of the ShapeNetCore dataset".<p>Local inference instructions are over at their GitHub page - <a href="https://github.com/AvaLovelace1/LegoGPT/?tab=readme-ov-file">https://github.com/AvaLovelace1/LegoGPT/?tab=readme-ov-file</a></p>
]]></description><pubDate>Fri, 09 May 2025 06:26:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43934297</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=43934297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43934297</guid></item><item><title><![CDATA[New comment by yathaid in "An image of an archeologist adventurer who wears a hat and uses a bullwhip"]]></title><description><![CDATA[
<p>>> Stealing deprives someone of something<p>Yes. In this case, it is the artist's sole right to reproduce said images, based on their creative output.<p>>> decreases scarcity, it doesn't increase it<p>What does scarcity have to do with stealing? You can steal bread and reduce food scarcity, but that is still theft.</p>
]]></description><pubDate>Fri, 04 Apr 2025 05:51:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=43578764</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=43578764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43578764</guid></item><item><title><![CDATA[New comment by yathaid in "Google to buy Wiz for $32B"]]></title><description><![CDATA[
<p>The Trump admin has shown the same attitude as the Biden admin when it comes to mergers. So why do they think the merger will go through this time?</p>
]]></description><pubDate>Tue, 18 Mar 2025 13:19:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=43399115</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=43399115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43399115</guid></item><item><title><![CDATA[New comment by yathaid in "Who's Afraid of Peter Thiel? A New Biography Suggests We All Should Be (2021)"]]></title><description><![CDATA[
<p>Federal agencies pay over $65 billion to consultants each year. 98% of Booz Allen's revenues (~$11 billion) come from government consulting. I don't know what your threshold for "extreme waste" is, but that is a hell of a lot of money that consulting firms have been able to siphon from American taxpayers.<p>[1] - <a href="https://www.inc.com/bruce-crumley/doge-cost-cuts-zero-in-on-top-u-s-consulting-firm-contracts/91155069" rel="nofollow">https://www.inc.com/bruce-crumley/doge-cost-cuts-zero-in-on-...</a></p>
]]></description><pubDate>Wed, 05 Mar 2025 14:00:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=43266468</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=43266468</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43266468</guid></item><item><title><![CDATA[New comment by yathaid in "A son spent a year trying to save his father from conspiracy theories"]]></title><description><![CDATA[
<p>Painful to read. I have had similar conversations with my own father, though nothing quite as extreme. There is no moving them from their warped reality.<p>I have theorized some root causes:<p>- They cannot differentiate between well-meaning friends and high-quality information, i.e. there is a fallacy of "this person is honest, hence this forward they just sent me is true".<p>- Starting from at least my generation (born in the late 80s), there is an understanding of "echo chamber" effects, personalizing newsfeeds for engagement, etc. There is some inoculation against content meant to trigger/resonate with specific sub-groups. I have found this to be completely lacking in discussions with my parents/their generation.<p>All of this makes it hard to move them out of the disinformation locus they fall into.</p>
]]></description><pubDate>Thu, 27 Feb 2025 16:02:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43195605</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=43195605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43195605</guid></item><item><title><![CDATA[New comment by yathaid in "I think Yann Lecun was right about LLMs (but perhaps only by accident)"]]></title><description><![CDATA[
<p>Thanks for replying, hope it wasn't too critical.<p>>> But in the limit of tokens generated, the chance that they generate the correct answer still decays to zero.<p>I don't understand this assertion though.<p>Lecun's thesis was that errors just accumulate.<p>Reasoning models accumulate errors, track back, and are able to reduce them back down.<p>Hence the hypothesis of errors accumulating (at least asymptotically) is false.<p>What is the difference between "probability of correct answer decaying to zero" and "errors keep accumulating"?</p>
]]></description><pubDate>Fri, 21 Feb 2025 19:40:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=43131975</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=43131975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43131975</guid></item><item><title><![CDATA[New comment by yathaid in "I think Yann Lecun was right about LLMs (but perhaps only by accident)"]]></title><description><![CDATA[
<p>>> But the limiting behavior remains the same: eventually, if we continue generating from a language model, the probability that we get the answer we want still goes to zero<p>In the previous paragraph, the author makes the case for why Lecun was wrong, with the example of reasoning models. Yet, in the next paragraph, this assertion is made, which is just a paraphrasing of Lecun's original assertion - which the author himself says is wrong.<p>>> Instead of waiting for FAA (fully-autonomous agents) we should understand that this is a continuum, and we’re consistently increasing the amount of useful work AIs<p>Yes! But this work is already well underway. There is no magic threshold for AGI - instead, the characterization is based on what percentile of the human population the AI can beat. One way to characterize AGI in this manner is "99.99th percentile at every (digital?) activity".</p>
]]></description><pubDate>Fri, 21 Feb 2025 18:46:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=43131317</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=43131317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43131317</guid></item><item><title><![CDATA[New comment by yathaid in "Modern-Day Oracles or Bullshit Machines? How to thrive in a ChatGPT world"]]></title><description><![CDATA[
<p>Does his accuracy take a sudden precipitous fall when going from multiplying two three-digit numbers to two four-digit numbers?</p>
]]></description><pubDate>Mon, 10 Feb 2025 08:52:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=42998184</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=42998184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42998184</guid></item><item><title><![CDATA[New comment by yathaid in "Linux Asceticism"]]></title><description><![CDATA[
<p>As a software engineer, this post resonates with me.<p>But you can find this attitude pervading the whole Linux desktop ecosystem. This post may as well be titled "Why it will never be the year of the Linux Desktop".</p>
]]></description><pubDate>Sun, 10 Nov 2024 14:45:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42100516</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=42100516</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42100516</guid></item><item><title><![CDATA[New comment by yathaid in "The deep learning boom caught almost everyone by surprise"]]></title><description><![CDATA[
<p>Neural networks can encode any computable function.<p>KANs have no advantage in terms of computability. Why are they a promising pathway?<p>Also, the splines in KANs are no more "explainable" than the matrix weights. Sure, we can assign importance to a node, but so what? It has no more meaning than anything else.</p>
]]></description><pubDate>Thu, 07 Nov 2024 06:11:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=42073900</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=42073900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42073900</guid></item><item><title><![CDATA[New comment by yathaid in "Gödel Agent: A self-referential agent framework for recursive self-improvement"]]></title><description><![CDATA[
<p>I am not sure it is useful to bring in something as nebulous as "intelligence" and hand-wave everything else away, unless you are going to tightly define what intelligence means.<p>There are only two objective measurements needed:<p>- is it making progress towards its goal?<p>- is it able to acquire capabilities it didn't have previously?<p>I am not sure if even the first one is objective enough.<p>Dismissing the argument without stating why you aren't convinced just comes across as a form of AI Luddism.</p>
]]></description><pubDate>Sun, 13 Oct 2024 04:50:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=41825247</link><dc:creator>yathaid</dc:creator><comments>https://news.ycombinator.com/item?id=41825247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41825247</guid></item></channel></rss>