<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: serverholic</title><link>https://news.ycombinator.com/user?id=serverholic</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 03:09:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=serverholic" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by serverholic in "“Rewrite It in Rust” Considered Harmful? [pdf]"]]></title><description><![CDATA[
<p>I really don’t understand the people who have a problem with Rust. Do you not value the increased memory safety? Now that Microsoft and Google are adopting Rust and reporting significant decreases in memory-related bugs, it’s pretty clear that Rust makes a difference.</p>
]]></description><pubDate>Wed, 24 May 2023 16:35:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=36060559</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=36060559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36060559</guid></item><item><title><![CDATA[New comment by serverholic in "We need a more sophisticated debate about AI"]]></title><description><![CDATA[
<p>Thanks, can you explain what you mean by “vouched”? I’ve noticed that my comments have been getting much less engagement recently and sometimes they don’t show up.</p>
]]></description><pubDate>Tue, 04 Apr 2023 18:52:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=35444291</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35444291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35444291</guid></item><item><title><![CDATA[New comment by serverholic in "We need a more sophisticated debate about AI"]]></title><description><![CDATA[
<p>Perhaps the biggest issue is the mental framework that people use to approach AI. I've found that people's thinking rests on so many assumptions, and these assumptions are strange and/or don't match up with the evidence we have so far.<p>First of all, you have to ask the question "what is intelligence?". What I've found is most people think intelligence is deeply connected to humanity or that intelligence is synonymous with knowledge. Really, intelligence is the ability to reason, predict, and learn. It's the ability to see patterns in the world and act on those patterns. It doesn't have to be human-like. It doesn't mean emotions, wants, dreams, or desires. It's cold, hard logic and statistics.<p>Secondly, you have to ask "do I think it's possible for computers to be intelligent?". A lot of people have issues with this as well. The thing is that if you say "no, computers can't be intelligent" you are basically making a religious claim, because we have brains and brains are intelligent. We can literally grow intelligence inside a human being during pregnancy. It might be difficult to program intelligence, but saying it's impossible is a bold claim that I don't find very convincing.<p>Third, you have to ask "if a computer is intelligent, then how does it act?". So far the closest thing we have to general intelligence is an LLM like GPT, and even then it's questionable. However, reports indicate that after initial training these models don't have a moral compass. They aren't good or evil; they just do whatever you ask. This makes sense because, after all, they are computers, right? Again we have to remember that computers aren't humans. Intelligence also means OPTIMIZATION, so we also have to be careful we don't give the AI the wrong instructions, or it might find a solution that is technically correct but doesn't match up with human wants or desires.<p>Fourth, you have to ask "can we control how these models act?"
and the answer seems to be kinda, but not really. We can shift the statistics in certain ways, like through reinforcement learning, but as many have found out these models still hallucinate and can be jailbroken. Our best attempts to control these models are still very flawed, because an LLM is basically a soup of neural circuits and we don't really understand them.<p>Fifth, you have to ask "ok, if a computer can be intelligent, can it be super intelligent?". Once you've gotten this far, it seems very reasonable that once we understand intelligence we can just scale it up and make AIs superintelligent. Given the previous steps we now have an agent that is smarter than us, can learn and find patterns that we don't understand, and can act in ways that appear mysterious to us. Furthermore, even if we had solid techniques to control AIs, it's been shown that as you scale up these models they display emergent behaviors that we can't predict. So this thing is powerful, and we can't understand it until we build it. This is a dangerous combination!<p>Finally, add in the human element. All along the way you have to worry about stupid or evil humans using these AIs in dangerous ways.<p>Given all of this, anyone who isn't a bit scared of AI in the future is either ignorant, superstitious, or blinded by some sort of optimism or a desire to build a cool sci-fi future where they have spaceships and robots and lightsabers. There are so many things to be worried about here. The biggest point is that intelligence is POWER: it's the ability to shape the world as one sees fit, whether that's the AI itself or the humans who program it.</p>
]]></description><pubDate>Tue, 04 Apr 2023 16:51:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=35442546</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35442546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35442546</guid></item><item><title><![CDATA[New comment by serverholic in "AI doomism is quickly becoming indistinguishable from an apocalyptic religion"]]></title><description><![CDATA[
<p>I’m curious why you are so confident in your assertion. It seems to me that an advanced statistical model of the world is an essential component of AGI. How do you know that we aren’t a few breakthroughs away from AGI?<p>Some recent papers have shown significant performance improvements when these models are allowed to respond to their own outputs.<p>How do you know that putting an LLM in a fancy loop with access to external memory and tools isn’t AGI?</p>
]]></description><pubDate>Sat, 01 Apr 2023 18:22:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=35402636</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35402636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35402636</guid></item><item><title><![CDATA[New comment by serverholic in "Brains speed up perception by guessing what's next (2019)"]]></title><description><![CDATA[
<p>I don't think that's necessarily proof against the Bayesian brain. It seems reasonable that the brain is also using its statistical models to assess the relevance of new evidence. So it's not just "new evidence, I need to update" but more like "new evidence, how likely is this true? I'll update according to the magnitude of the likelihood."</p>
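That "update according to the magnitude of the likelihood" is just Bayes' rule. A minimal numeric sketch (the probabilities here are made up for illustration): a prior belief is nudged in proportion to how diagnostic the evidence is, so weak evidence barely moves it.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    # Bayes' rule: posterior = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Strongly diagnostic evidence moves the belief a lot...
strong = bayes_update(0.5, 0.9, 0.1)    # posterior 0.9
# ...while barely diagnostic evidence barely moves it.
weak = bayes_update(0.5, 0.55, 0.45)    # posterior 0.55
```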
]]></description><pubDate>Fri, 31 Mar 2023 15:25:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=35388523</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35388523</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35388523</guid></item><item><title><![CDATA[New comment by serverholic in "RWKV RNN: Better than ChatGPT?"]]></title><description><![CDATA[
<p>I'm skeptical that RNNs alone will outperform transformers. Perhaps some sort of transformer + RNN combo?<p>The issue with RNNs is that feedback signals decay over time, so the model will be biased toward more recent words.<p>Transformers, on the other hand, don't have this bias. A word 10,000 words ago could be just as important as a word 5 words ago. The tradeoff is that the context window for transformers is a hard cutoff point.</p>
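The recency bias can be seen in a toy linear recurrence (a sketch only, not any real RNN architecture like RWKV): with a fixed decay factor, an input's influence on the final hidden state shrinks geometrically with its distance from the end of the sequence.

```python
# Toy "RNN": h = decay * h + x at each step.
# The contribution of input x_t to the final state is decay**(T - 1 - t),
# so early words are exponentially discounted relative to recent ones.

def final_state(inputs, decay=0.5):
    h = 0.0
    for x in inputs:
        h = decay * h + x
    return h

# A single 1.0 placed early vs. late in a sequence of zeros:
early = final_state([1.0] + [0.0] * 9)   # contribution shrunk to 0.5**9
late  = final_state([0.0] * 9 + [1.0])   # contribution still 1.0
```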
]]></description><pubDate>Thu, 23 Mar 2023 22:24:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=35282219</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35282219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35282219</guid></item><item><title><![CDATA[New comment by serverholic in "Shields Up"]]></title><description><![CDATA[
<p>Yes. Capitalism.</p>
]]></description><pubDate>Wed, 22 Mar 2023 04:23:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=35256808</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35256808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35256808</guid></item><item><title><![CDATA[New comment by serverholic in "The Age of AI has begun"]]></title><description><![CDATA[
<p>It’s interesting how the first benefit he lists for AI is productivity. It was a similar message in one of OpenAI’s recent blog posts.</p>
]]></description><pubDate>Tue, 21 Mar 2023 19:55:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=35251648</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35251648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35251648</guid></item><item><title><![CDATA[New comment by serverholic in "Google Bard waitlist"]]></title><description><![CDATA[
<p>It is definitely not as good as GPT-4. It's much less creative and detailed in its answers. The answers are more surface level and shallow.</p>
]]></description><pubDate>Tue, 21 Mar 2023 18:47:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=35250669</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35250669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35250669</guid></item><item><title><![CDATA[New comment by serverholic in "ChatGPT's Chess Elo is 1400"]]></title><description><![CDATA[
<p>I don’t understand why the threshold is “never”. Isn’t it entirely possible that the AI is learning a model of chess but this model is imperfect? What if AIs don’t fail the same way as humans?</p>
]]></description><pubDate>Fri, 17 Mar 2023 18:16:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=35200792</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35200792</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35200792</guid></item><item><title><![CDATA[New comment by serverholic in "Bevy 0.10: data oriented game engine built in Rust"]]></title><description><![CDATA[
<p>This is likely related to data-oriented design<p><a href="https://en.m.wikipedia.org/wiki/Data-oriented_design" rel="nofollow">https://en.m.wikipedia.org/wiki/Data-oriented_design</a><p>The idea is that you use data structures and algorithms to make better use of CPU cache.</p>
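A minimal sketch of the idea (names hypothetical; Bevy's ECS does this with typed component arrays in Rust, not Python): instead of an array of structs, store each field in its own contiguous array, so a loop touching one field streams through memory sequentially instead of hopping between scattered objects.

```python
# Array-of-structs: each entity is a heap object; a loop over positions
# has to chase a pointer per entity.
class Entity:
    def __init__(self, x, y, health):
        self.x, self.y, self.health = x, y, health

entities = [Entity(float(i), 0.0, 100) for i in range(4)]

# Struct-of-arrays (data-oriented): one contiguous array per field.
# A system that only moves things reads only the position data.
xs      = [float(i) for i in range(4)]
ys      = [0.0] * 4
healths = [100] * 4

def move_all(xs, dx):
    # Touches a single dense array -- the cache-friendly access pattern.
    return [x + dx for x in xs]

moved = move_all(xs, 1.0)   # [1.0, 2.0, 3.0, 4.0]
```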
]]></description><pubDate>Mon, 06 Mar 2023 19:23:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=35046266</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35046266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35046266</guid></item><item><title><![CDATA[New comment by serverholic in "How to hire engineering talent without the BS"]]></title><description><![CDATA[
<p>I don’t think you really understood what I said.<p>If your goal is population distribution then you are inevitably hiring based on attributes other than skill.<p>For example, women don’t make up 50% of the engineering talent pool so if your goal is 50% women then you have to lower standards to achieve that.</p>
]]></description><pubDate>Sun, 05 Mar 2023 18:33:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=35032483</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35032483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35032483</guid></item><item><title><![CDATA[New comment by serverholic in "How to hire engineering talent without the BS"]]></title><description><![CDATA[
<p>That sounds nice and all but the distribution of engineers doesn’t match population distributions.<p>This leads to organizational pressure to hire based on population distribution. Doing so inevitably means hiring based on attributes other than skill.</p>
]]></description><pubDate>Sun, 05 Mar 2023 17:33:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=35031812</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35031812</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35031812</guid></item><item><title><![CDATA[New comment by serverholic in "First Impressions of Bluesky's Brand New iOS App"]]></title><description><![CDATA[
<p>More than that, it puts your data into your hands. If someone kicks you off a server for whatever reason, you still have your account and you can visit other servers. With Mastodon you’re at the mercy of whoever owns the server.</p>
]]></description><pubDate>Sat, 04 Mar 2023 00:24:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=35016824</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35016824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35016824</guid></item><item><title><![CDATA[New comment by serverholic in "First Impressions of Bluesky's Brand New iOS App"]]></title><description><![CDATA[
<p>Same with Mastodon. Your ID is tied to the server, so if you get banned you have to create a new ID on another server.<p>Mastodon is worse because your data is in the server owner’s hands. At least with Bluesky your identity and server are separate, so you still have your data.</p>
]]></description><pubDate>Fri, 03 Mar 2023 15:38:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=35010971</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=35010971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35010971</guid></item><item><title><![CDATA[New comment by serverholic in "Bun v0.5.7"]]></title><description><![CDATA[
<p>He literally said that's what he's looking for when he announced his company. He also shows off his work stats on Twitter.</p>
]]></description><pubDate>Sat, 25 Feb 2023 01:35:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=34932833</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=34932833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34932833</guid></item><item><title><![CDATA[New comment by serverholic in "Stanford faculty say anonymous student bias reports threaten free speech"]]></title><description><![CDATA[
<p>Politics is just decision making in groups of people. Similar analyses can be used whether you're talking about government, schools, your workplace, open-source projects, your household, etc.<p>As a general rule I tend to always interpret these discussions as "how humans interact with each other" unless the discussion itself is about the law.</p>
]]></description><pubDate>Thu, 23 Feb 2023 23:16:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=34918186</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=34918186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34918186</guid></item><item><title><![CDATA[New comment by serverholic in "“We’re All Gonna Die” with Eliezer Yudkowsky"]]></title><description><![CDATA[
<p>Wow. It's clear nobody gives a fuck. We're all doomed aren't we.</p>
]]></description><pubDate>Thu, 23 Feb 2023 04:38:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=34906826</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=34906826</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34906826</guid></item><item><title><![CDATA[New comment by serverholic in "Social media is a cause, not a correlate, of mental illness in teen girls"]]></title><description><![CDATA[
<p>I’m inclined to believe you. In my experience, men are socialized to downplay mental illness, whereas women are the opposite: they wear their illnesses on their sleeves.</p>
]]></description><pubDate>Wed, 22 Feb 2023 21:51:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=34903433</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=34903433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34903433</guid></item><item><title><![CDATA[New comment by serverholic in "Dancing is stupid"]]></title><description><![CDATA[
<p>Because then my choices are:<p>1) give in and pretend to go along with it.<p>2) look like an asshole at a wedding.<p>Sorry if I don’t want to ruin someone’s wedding lol.</p>
]]></description><pubDate>Sun, 19 Feb 2023 17:18:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=34859587</link><dc:creator>serverholic</dc:creator><comments>https://news.ycombinator.com/item?id=34859587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34859587</guid></item></channel></rss>