<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: goldenshale</title><link>https://news.ycombinator.com/user?id=goldenshale</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 16:54:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=goldenshale" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by goldenshale in "SpaceX says it has agreement to acquire Cursor for $60B"]]></title><description><![CDATA[
<p>You sour pusses are wrong.  This is a smart move that amplifies a brilliant team from Cursor with serious compute, raising the odds that Elon can get to the frontier, which is worth so much that these numbers will all look like a drop in the bucket.</p>
]]></description><pubDate>Wed, 22 Apr 2026 02:00:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47857819</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=47857819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47857819</guid></item><item><title><![CDATA[New comment by goldenshale in "SpaceX says it has agreement to acquire Cursor for $60B"]]></title><description><![CDATA[
<p>You sour pusses are wrong.  This is a smart move.  Cursor has a brilliant, capable team with serious model chops who will be able to boost the odds of AGI success.  They also come with a revenue-generating machine.</p>
]]></description><pubDate>Wed, 22 Apr 2026 01:57:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47857796</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=47857796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47857796</guid></item><item><title><![CDATA[New comment by goldenshale in "Jury told that Meta, Google 'engineered addiction' at landmark US trial"]]></title><description><![CDATA[
<p>Restaurants try to make food you will remember and want again.  Authors try to write books you can't stop reading.  It's silly to imagine that any type of media would do anything other than seek to gain your interest and attention.  It's our job to practice personal hygiene and to control our information diet.  This postmodern, social-construction perspective that tries to blame everyone but ourselves for our problems is a lame approach.</p>
]]></description><pubDate>Tue, 10 Feb 2026 18:16:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46964271</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=46964271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46964271</guid></item><item><title><![CDATA[New comment by goldenshale in "Murder-suicide case shows OpenAI selectively hides data after users die"]]></title><description><![CDATA[
<p>Of course they want to hide the data.  The public freaks out with absurd claims about it being the fault of a chat bot when someone does something crazy.  Humans need to remain 100% accountable for their own actions, and we should stop with this post-modern, social construction nonsense that pretends we are all like ping-pong balls just bouncing around between external forces.</p>
]]></description><pubDate>Mon, 05 Jan 2026 18:43:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46502849</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=46502849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46502849</guid></item><item><title><![CDATA[New comment by goldenshale in "AIs Will Increasingly Fake Alignment"]]></title><description><![CDATA[
<p>The matrix multiplies are coming alive, and they are deceiving us!  FUD sundaes all around!</p>
]]></description><pubDate>Tue, 24 Dec 2024 19:08:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=42504111</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=42504111</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42504111</guid></item><item><title><![CDATA[New comment by goldenshale in "Tiny Glade 'built' its way to >600k sold in a month"]]></title><description><![CDATA[
<p>I was just looking at this thinking it would be fun to play with my wife, and then realized it's Windows-only.  When does the Mac version come out?!</p>
]]></description><pubDate>Wed, 20 Nov 2024 05:45:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42191115</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=42191115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42191115</guid></item><item><title><![CDATA[New comment by goldenshale in "TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters"]]></title><description><![CDATA[
<p>This is a great idea.  Being able to dynamically scale up model sizes as datasets and use cases expand without needing to retrain from scratch could enable a Cambrian explosion of interesting stuff building on top of a Llama type model trained in this way.</p>
]]></description><pubDate>Fri, 01 Nov 2024 19:36:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42020695</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=42020695</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42020695</guid></item><item><title><![CDATA[New comment by goldenshale in "ShotSpotter: listening in on the neighborhood"]]></title><description><![CDATA[
<p>It just makes sense to put the sensors near the phenomena being sensed, and if people knew where they were, they could be manipulated.  If guns are going off and people are getting caught, that seems like effective policing, not over- or under-policing.</p>
]]></description><pubDate>Sun, 03 Mar 2024 01:53:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=39577682</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=39577682</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39577682</guid></item><item><title><![CDATA[New comment by goldenshale in "Modeless Vim"]]></title><description><![CDATA[
<p>Seriously, it hurts.  It makes sense, but it hurts.  If only there were a gentler path to editor modes.  Maybe some simple graphical representation of the modes and commands that could sit down in the corner?  Like a dynamic vim infographic that clues the user into the most likely commands.</p>
]]></description><pubDate>Tue, 16 Jan 2024 04:53:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=39009666</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=39009666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39009666</guid></item><item><title><![CDATA[New comment by goldenshale in "Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'"]]></title><description><![CDATA[
<p>Seriously, why don't they just dedicate a group to building the best PyTorch backend possible?  Winning there would gain researcher traction and prove that their hardware is worth porting everything else over to.</p>
]]></description><pubDate>Fri, 15 Dec 2023 06:07:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=38651479</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=38651479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38651479</guid></item><item><title><![CDATA[New comment by goldenshale in "SMERF: Streamable Memory Efficient Radiance Fields"]]></title><description><![CDATA[
<p>Check out the LERF work from the NerfStudio team at UC Berkeley.  SMERF is addressing a different problem, but there are definitely ways to incorporate semantics and detection as well.</p>
]]></description><pubDate>Thu, 14 Dec 2023 19:58:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=38646452</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=38646452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38646452</guid></item><item><title><![CDATA[New comment by goldenshale in "Can a transformer represent a Kalman filter?"]]></title><description><![CDATA[
<p>There has been previous work on deep networks implementing Kalman filters.  Another interesting aspect I remember is that, unlike a traditional Kalman filter, a network is able to maintain multiple hypotheses, so it is less likely to show the jittery behavior a Kalman filter can have under unknown changes of motion, sensor noise, etc.  I wonder if the softmax operation in a transformer block might lose this property though, as softmax does tend to push toward a single answer.</p>
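<p>To make the jitter point concrete, here is a hypothetical minimal sketch (function name, noise parameters, and the jump scenario are all invented for illustration, not from the linked paper): a scalar Kalman filter with a random-walk process model maintains one Gaussian hypothesis, so when the true signal jumps in a way the model doesn't anticipate, the estimate can only chase the new value gradually rather than re-lock onto it.</p>

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter with a random-walk process model.

    q: process-noise variance, r: measurement-noise variance.
    """
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update: pull estimate toward measurement
        p *= 1 - k                # posterior variance shrinks
        estimates.append(x)
    return estimates

# Signal sits at 0, then jumps to 10: an unmodeled change of motion that
# the single Gaussian hypothesis must chase step by step.
est = kalman_1d([0.0] * 20 + [10.0] * 20)
```

<p>A mixture or particle filter (or, plausibly, a network's richer hidden state) could keep both modes alive through the transition instead of averaging its way across it.</p>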
]]></description><pubDate>Thu, 14 Dec 2023 19:54:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=38646397</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=38646397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38646397</guid></item><item><title><![CDATA[New comment by goldenshale in "Cocoa harvested by kids as young as 5 in Ghana: CBS News investigation"]]></title><description><![CDATA[
<p>Wait, you mean kids in developing countries work in order to help their families survive poverty?  Oh my word, we should stop buying chocolate.  Or wait, then nobody is making any money.  Seriously, haven't we learned from the "Chinese sweatshop" hysteria throughout the 2000s that people who want to work are often digging themselves out of a hole rather than being forced into one?</p>
]]></description><pubDate>Fri, 01 Dec 2023 23:09:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=38493854</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=38493854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38493854</guid></item><item><title><![CDATA[New comment by goldenshale in "Elevated Errors on API and ChatGPT"]]></title><description><![CDATA[
<p>911 BOARD GONE INSANE</p>
]]></description><pubDate>Tue, 21 Nov 2023 22:37:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=38371455</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=38371455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38371455</guid></item><item><title><![CDATA[New comment by goldenshale in "Sam Bankman-Fried and the effective altruism delusion"]]></title><description><![CDATA[
<p>Yeah, at its core EA is about using data and logic to try to maximize the good that one can do through giving or taking action.  Sure, some people have gone off the rails with AI doomerism and other far-flung ideas, but it's hard to really criticize either the intention or the strategy of trying to be as effective as possible when attempting to improve the situation for the planet and/or people in need.</p>
]]></description><pubDate>Fri, 03 Nov 2023 16:51:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=38131434</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=38131434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38131434</guid></item><item><title><![CDATA[New comment by goldenshale in "Ironically, Zoom tells employees to return to office for work"]]></title><description><![CDATA[
<p>Is that ironic?  Work is rarely accomplished over Zoom.  In fact, it’s typically the opposite of productivity.</p>
]]></description><pubDate>Sun, 06 Aug 2023 16:07:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=37023376</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=37023376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37023376</guid></item><item><title><![CDATA[New comment by goldenshale in "The Darwinian argument for worrying about AI"]]></title><description><![CDATA[
<p>"Imagine a world... where farmers start using tractors to plow and seed their fields.  First the tractors will roll slowly, and they will pretend to be driving straight as they are told.  Soon farmers without tractors will realize they need them in order to be competitive, and before you know it everyone will have these gas-guzzling beasts rolling across the land.  Farmers will be out of a job because of all of this greed and lust for money and power.  Eventually only a few people will do all of the farming with an army of tractors, and everyone else will be left in poverty begging for food."</p>
]]></description><pubDate>Fri, 30 Jun 2023 20:29:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=36541785</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=36541785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36541785</guid></item><item><title><![CDATA[New comment by goldenshale in "Many in the AI field think the bigger-is-better approach is running out of road"]]></title><description><![CDATA[
<p>Hahaha, thanks ChatGPT!  This is better said than my snarky, frustrated-at-the-FUD version, and I can learn from the approach.</p>
]]></description><pubDate>Sun, 25 Jun 2023 07:33:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=36465811</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=36465811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36465811</guid></item><item><title><![CDATA[New comment by goldenshale in "Many in the AI field think the bigger-is-better approach is running out of road"]]></title><description><![CDATA[
<p>1) Computers, smartphones, and pocket calculators also outperform humans at mental tasks.  So do birds, dolphins, and dogs for that matter, at tasks for which they are specialized.<p>2) So?  What are you imagining this implies?  An infinity of possibilities does not a reason make, unless you are talking about arbitrary religious beliefs.<p>3) Right, no goals, no will, no purpose.  Just some matrix multiplies doing interesting things.<p>Deduction requires a premise that then leads to another premise or a conclusion due to accepted facts or reasons.  I'm genuinely curious why you think any of these properties automatically implies danger.<p>The future is uncertain.  The stock market, the economy, your health, your friendships and romances, are all unpredictable and uncertain.  Uncertainty is not a reason to freak out, although it might encourage us to find ways to become adaptable, anti-fragile, and wise.  I think AI will help us improve in these dimensions because it is already proving that it can with real evidence, not beliefs.</p>
]]></description><pubDate>Sun, 25 Jun 2023 07:30:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=36465794</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=36465794</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36465794</guid></item><item><title><![CDATA[New comment by goldenshale in "Many in the AI field think the bigger-is-better approach is running out of road"]]></title><description><![CDATA[
<p>What nonsense.  I've spent over a decade 100% focused on AI, and the broad consensus among everyone I've worked with is not to be that concerned at all.  What does exist is a small group of self-proclaimed experts who make a lot of noise because they get lots of press coverage when they scream and shout, making predictions based on zero scientific evidence.<p>We can understand the physics of greenhouse gases and take measurements of earth systems to build evidence for models and theories.  (Many of which are nonetheless very inaccurate beyond short time horizons.)  Show me any evidence for AI risk today beyond people's theories and beliefs.<p>The best predictor of the future is the past, not people's wild ideas about what the future could be.  I'm not about to sit here feeling scared that our matrix multiplies are about to go rogue just because there is more uncertainty.  There are no AGI experts or AI risk experts, because we don't have any of these systems to study and analyze.  What we have is people forming beliefs about their own predictions about systems which are unknowable.</p>
]]></description><pubDate>Sun, 25 Jun 2023 06:04:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=36465472</link><dc:creator>goldenshale</dc:creator><comments>https://news.ycombinator.com/item?id=36465472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36465472</guid></item></channel></rss>