<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: awfulneutral</title><link>https://news.ycombinator.com/user?id=awfulneutral</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 15:29:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=awfulneutral" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by awfulneutral in "Deezer says 44% of songs uploaded to its platform daily are AI-generated"]]></title><description><![CDATA[
<p>Thanks for writing this. For the record, I wasn't responding to your original comment, which I agreed with. And I agree with and empathize with what you're saying here too - it sounds like you've had an interesting relationship with creating and have a lot of perspective on it, good and bad. Although if you were advocating to get rid of human nature in a transhumanist way, I would understand that to some extent - it's at least a solution. :)</p>
]]></description><pubDate>Tue, 21 Apr 2026 17:18:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47851695</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=47851695</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47851695</guid></item><item><title><![CDATA[New comment by awfulneutral in "Deezer says 44% of songs uploaded to its platform daily are AI-generated"]]></title><description><![CDATA[
<p>Yeah. People act like it's a sin to want some notice or respect when you've worked and achieved something, like you should be some zen-like creature that is purely intrinsically motivated. It is not wrong to want recognition from your peers once in a while.</p>
]]></description><pubDate>Mon, 20 Apr 2026 21:16:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47840906</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=47840906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47840906</guid></item><item><title><![CDATA[New comment by awfulneutral in "uBlock filter list to hide all YouTube Shorts"]]></title><description><![CDATA[
<p>Does it still work for you? Unhook hasn't been updated in years and doesn't work for shorts anymore, on Firefox. It's still worth it to get rid of the suggested videos though.</p>
]]></description><pubDate>Sat, 14 Feb 2026 20:37:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47018138</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=47018138</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47018138</guid></item><item><title><![CDATA[New comment by awfulneutral in "My article on why AI is great (or terrible) or how to use it"]]></title><description><![CDATA[
<p>It feels like this to me too, whenever I give it a try. It's like a button you can push to spend 20 minutes and have a 50/50 chance of either solving the problem with effortless magic, or painfully wasting your time and learning nothing. But it feels like we all need to try to use it anyway, just in case we're going to be obsolete without it somehow.</p>
]]></description><pubDate>Fri, 09 Jan 2026 20:22:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46558797</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=46558797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46558797</guid></item><item><title><![CDATA[New comment by awfulneutral in "I switched from VSCode to Zed"]]></title><description><![CDATA[
<p>Yeah, it just reminds me of the early days of VS Code, where features were constantly being added; it was fun at first, but they didn't stop, and eventually it did feel more sluggish and bloated. Sometimes I'd have to spend time fixing or re-configuring something just because I opened the editor and the daily auto-update did something annoying. It might not happen with Zed, but it seems like a very similar approach to development.</p>
]]></description><pubDate>Tue, 06 Jan 2026 07:01:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46509500</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=46509500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46509500</guid></item><item><title><![CDATA[New comment by awfulneutral in "I switched from VSCode to Zed"]]></title><description><![CDATA[
<p>I am ahead of this curve: my trajectory was VSCode -> Zed -> Helix. Helix doubles my battery life compared to Zed, from 4 to 8 hours. Zed is also on a bad trajectory IMO, with the huge number of updates being pushed constantly.</p>
]]></description><pubDate>Mon, 05 Jan 2026 17:35:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46501907</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=46501907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46501907</guid></item><item><title><![CDATA[New comment by awfulneutral in "Measuring political bias in Claude"]]></title><description><![CDATA[
<p>> I genuinely don't know which I'm experiencing. That uncertainty itself feels like it should matter.<p>We don't even know how consciousness works in ourselves. If an AI gets to the point where it convinces us it might have awareness, then at what point do we start assigning it rights? Even though it might not be experiencing anything at all? Once that box is opened, dealing with AI could get a lot more complicated.</p>
]]></description><pubDate>Wed, 19 Nov 2025 22:22:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45986122</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=45986122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45986122</guid></item><item><title><![CDATA[New comment by awfulneutral in "Two narratives about AI"]]></title><description><![CDATA[
<p>That has been my experience too.  I recently turned it all off because I decided the number of times it takes me 5x longer to accomplish something, on top of the subtle increase in bugs, is not currently worth it.  I guess I'll try it again in a few months.</p>
]]></description><pubDate>Thu, 24 Jul 2025 17:13:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44673260</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=44673260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44673260</guid></item><item><title><![CDATA[New comment by awfulneutral in "Thiings"]]></title><description><![CDATA[
<p>Spinosaurus is the inaccurate one from Jurassic Park 3 and Tyrannosaurus has inaccurate hand rotation.  Velociraptor and Dilophosaurus are also from Jurassic Park and highly inaccurate.  For shame!</p>
]]></description><pubDate>Fri, 13 Jun 2025 21:06:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44272259</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=44272259</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44272259</guid></item><item><title><![CDATA[New comment by awfulneutral in "The Problem with AI Welfare"]]></title><description><![CDATA[
<p>It does make you wonder if humanity doesn't scale up neatly to the levels of technology we are approaching... the whole ethics thing kind of goes out the window if you can just change the desires and needs of conscious entities.</p>
]]></description><pubDate>Thu, 12 Jun 2025 15:21:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44258849</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=44258849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44258849</guid></item><item><title><![CDATA[New comment by awfulneutral in "A look at Cloudflare's AI-coded OAuth library"]]></title><description><![CDATA[
<p>Yeah, the AI-generated bugs are really insidious.  I also pushed a couple of subtle bugs in some multi-threaded code I had AI write, because I didn't think it through enough.  Reviews and tests don't replace the level of scrutiny something gets when it's hand-written.  For now, you have to be really careful with what you let AI write, and make sure any bugs will be low impact, since there will probably be more than usual.</p>
]]></description><pubDate>Sun, 08 Jun 2025 13:50:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44216997</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=44216997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44216997</guid></item><item><title><![CDATA[New comment by awfulneutral in "Asimov and the Disease of Boredom (1964)"]]></title><description><![CDATA[
<p>Really interesting, thanks.  The last point is thought provoking:<p>> Even so, mankind will suffer badly from the disease of boredom, a disease spreading more widely each year and growing in intensity. This will have serious mental, emotional and sociological consequences, and I dare say that psychiatry will be far and away the most important medical specialty in 2014. The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.<p>> Indeed, the most somber speculation I can make about A.D. 2014 is that in a society of enforced leisure, the most glorious single word in the vocabulary will have become work!<p>I think this could have already happened, except the definition of "work" is so nebulous, and there is so much wiggle room between the things that actually need to be done and the things we might as well do.  Or maybe it has happened in parts of the world, but we are all in denial about it.</p>
]]></description><pubDate>Sat, 07 Jun 2025 16:42:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44210748</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=44210748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44210748</guid></item><item><title><![CDATA[New comment by awfulneutral in "Circuit breaker triggered in Japan for stock futures trading"]]></title><description><![CDATA[
<p>Experts also do not know the answer to that.</p>
]]></description><pubDate>Mon, 07 Apr 2025 05:03:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43607903</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=43607903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43607903</guid></item><item><title><![CDATA[New comment by awfulneutral in "Crossing the uncanny valley of conversational voice"]]></title><description><![CDATA[
<p>Yes, fair enough about the dog - "non-human" was the wrong choice of words.  But I don't agree that emotions and social dynamics from an LLM are valid.  Emotions need real stakes behind them.  They communicate the inner state of another being.  If that inner state does not exist (maybe it could in an AGI, but I don't believe it could in an LLM), then I'd say the communication is utterly meaningless.</p>
]]></description><pubDate>Mon, 03 Mar 2025 01:17:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=43237274</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=43237274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43237274</guid></item><item><title><![CDATA[New comment by awfulneutral in "Crossing the uncanny valley of conversational voice"]]></title><description><![CDATA[
<p>But there is no message outside the rational layer when you're talking to a non-human.  The only message is the amount of true information the LLM is able to output - the rest is randomness.  It's fatiguing to have your human brain try to interpret emotions and social dynamics where they don't exist, the same way it's fatiguing to try and interpret meaning from a generated image.</p>
]]></description><pubDate>Sun, 02 Mar 2025 16:58:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=43232380</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=43232380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43232380</guid></item><item><title><![CDATA[New comment by awfulneutral in "Crossing the uncanny valley of conversational voice"]]></title><description><![CDATA[
<p>Has it been tried the other way?  I don't remember an iteration where they weren't obnoxiously over-endearing.  After the initial novelty, it would be better to reduce the amount of fake information you have to read, and any attempt at pretending to be a human is completely fake information at this point.</p>
]]></description><pubDate>Sun, 02 Mar 2025 16:52:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43232314</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=43232314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43232314</guid></item><item><title><![CDATA[New comment by awfulneutral in "Getter-Setter Pattern Considered Harmful"]]></title><description><![CDATA[
<p>This article sort of reads like they think everything should be immutable, which feels kind of dogmatic.  Using `with` for everything by default seems like overkill.  But in C#, lately I have been using `readonly` and `init` wherever vars should not be changed after initialization, which is most of them tbh.  For small, data-only objects that can easily be immutable and get passed around a lot, this makes sense to me as a way to avoid the kind of bugs they are talking about.</p>
]]></description><pubDate>Sun, 16 Feb 2025 16:25:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=43069214</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=43069214</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43069214</guid></item><item><title><![CDATA[New comment by awfulneutral in "Zed now predicts your next edit with Zeta, our new open model"]]></title><description><![CDATA[
<p>Ohhh, is that why I keep pressing tab and it doesn't accept the prediction lately?  I thought it was a bug.  It feels weird for tab to double-indent when it could be accepting a prediction - I wonder if alt-tab to do a manual indent rather than accept the current prediction might be preferable?<p>Edit - On the other hand, a related issue is that if the prediction itself starts with whitespace, in that case it would be good if tab just indents like normal; otherwise you can't indent without accepting the prediction.</p>
]]></description><pubDate>Fri, 14 Feb 2025 14:54:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43048919</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=43048919</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43048919</guid></item><item><title><![CDATA[New comment by awfulneutral in "Zed now predicts your next edit with Zeta, our new open model"]]></title><description><![CDATA[
<p>This just seems to be the way with code editors.  We have to switch every few years to the next one.</p>
]]></description><pubDate>Fri, 14 Feb 2025 14:46:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43048851</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=43048851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43048851</guid></item><item><title><![CDATA[New comment by awfulneutral in "Avoiding outrage fatigue while staying informed"]]></title><description><![CDATA[
<p>Yes, this is really tricky, because nowadays we have people shouting from the rooftops continuously, and half of them are shouting the exact opposite of the other half.  WWII was openly racist, so from a modern perspective it would be easy to recognize and condemn some of the early behavior, but these days it's more about dog whistling and thought crimes.  Probably the signs we would all recognize are not going to happen.  But normalized behavior has already shifted dramatically compared to 20 years ago.</p>
]]></description><pubDate>Wed, 05 Feb 2025 19:20:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=42953681</link><dc:creator>awfulneutral</dc:creator><comments>https://news.ycombinator.com/item?id=42953681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42953681</guid></item></channel></rss>