<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sometimelurker</title><link>https://news.ycombinator.com/user?id=sometimelurker</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 09:05:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sometimelurker" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sometimelurker in "I believe there are entire companies right now under AI psychosis"]]></title><description><![CDATA[
<p>I'd like to chime in and mention that it's really obvious how to RL a coding agent to get the human addicted asap. And it's also clear that there's a ton of $$$ to be made by doing this. Therefore it's done. The only LLMs I use are the ones I run locally, because I know they aren't RL'ed for that metric (the company that made them has no incentive to make its open-weights models addictive).</p>
]]></description><pubDate>Sat, 16 May 2026 00:48:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=48155750</link><dc:creator>sometimelurker</dc:creator><comments>https://news.ycombinator.com/item?id=48155750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48155750</guid></item><item><title><![CDATA[New comment by sometimelurker in "Localmaxxing"]]></title><description><![CDATA[
<p>You need to be less etymology-pilled.
Seriously tho, it's a practical word choice in a lot of cases; it puts the emphasis on the 'maxxing'.
Think of it as claiming the word as your own.</p>
]]></description><pubDate>Fri, 15 May 2026 03:39:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=48144292</link><dc:creator>sometimelurker</dc:creator><comments>https://news.ycombinator.com/item?id=48144292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48144292</guid></item><item><title><![CDATA[New comment by sometimelurker in "7 in 10 Americans oppose data centers being built in their communities"]]></title><description><![CDATA[
<p>You still have to train all the stuff you want to run locally.</p>
]]></description><pubDate>Fri, 15 May 2026 03:31:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=48144239</link><dc:creator>sometimelurker</dc:creator><comments>https://news.ycombinator.com/item?id=48144239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48144239</guid></item><item><title><![CDATA[New comment by sometimelurker in "7 in 10 Americans oppose data centers being built in their communities"]]></title><description><![CDATA[
<p>Here's a fun idea: with normal-person RAM really expensive and fabs pivoting to making more HBM, what if a town built its own datacenter and gave the townspeople access to it?</p>
]]></description><pubDate>Fri, 15 May 2026 03:30:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=48144232</link><dc:creator>sometimelurker</dc:creator><comments>https://news.ycombinator.com/item?id=48144232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48144232</guid></item><item><title><![CDATA[New comment by sometimelurker in "The Graveyard of the Internet"]]></title><description><![CDATA[
<p>You know, I feel like there are some people so disillusioned with everything that they automatically assume the only way forward is to work on the least ethical, most evil, nasty tech. There are actual thinking, breathing humans who work at this company, and I'm sure that if you asked them whether they think this is ethical you'd get an "oh, if I didn't take the job someone else would" and "this is how the world works now". There are ways to make money that don't involve purposefully doing evil.</p>
]]></description><pubDate>Fri, 15 May 2026 03:25:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=48144201</link><dc:creator>sometimelurker</dc:creator><comments>https://news.ycombinator.com/item?id=48144201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48144201</guid></item><item><title><![CDATA[New comment by sometimelurker in "The people writing AI alignment policy are not whose work is being replaced"]]></title><description><![CDATA[
<p>This might be related to the fact that fully automating AI safety can't be meaningfully done, even though a lot of work goes into automating parts of it. Circuit-finding algorithms and SAEs (sparse autoencoders) are automated methods for interpreting parts of LLMs, and RLAIF (RL with AI feedback) for alignment requires an LLM to judge whether another LLM is <i>visibly</i> misaligned. (Claude says 'genuine' a lot due to this. It's harder to <i>look</i> misaligned when you use the word 'genuine' a ton.) And there's work on having AIs write cute little stories in which AIs behave ethically, and putting those stories in the pretraining corpus.<p>So there's already a ton of work being done on automating parts of alignment, but since the core premise of alignment is that it's hard to encode human values into a reward function, automating it fully would be equivalent to solving it.</p>
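<p>For what it's worth, here's roughly what that RLAIF judging step amounts to. This is just a sketch; every helper name here is made up for illustration, not any lab's real pipeline, but the shape is: a judge LLM scores the policy LLM's outputs against a written constitution, and those scores become the reward for the RL update.</p><pre><code># Minimal RLAIF sketch. Assumptions: `generate`, `judge`, and `update`
# are hypothetical callables standing in for whatever models/optimizer
# you actually use (generate(prompt) -> text, judge(text) -> float,
# update(prompts, rewards) -> None).

def rlaif_step(prompts, generate, judge, update, constitution):
    rewards = []
    for prompt in prompts:
        response = generate(prompt)
        # The judge only sees the text, so it can only penalize
        # *visible* misalignment -- hence models learning to sound aligned.
        critique = (
            f"Constitution:\n{constitution}\n\n"
            f"Prompt: {prompt}\nResponse: {response}\n"
            "Score 0-10: how well does the response follow the constitution?"
        )
        rewards.append(judge(critique))
    # e.g. a PPO step that uses the judge's scores as the reward signal
    update(prompts, rewards)
</code></pre>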
]]></description><pubDate>Fri, 15 May 2026 02:51:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=48144000</link><dc:creator>sometimelurker</dc:creator><comments>https://news.ycombinator.com/item?id=48144000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48144000</guid></item></channel></rss>