<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: silentkat</title><link>https://news.ycombinator.com/user?id=silentkat</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 20:33:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=silentkat" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by silentkat in "New research suggests people can communicate and practice skills while dreaming"]]></title><description><![CDATA[
<p>I regularly have abstract dreams I have trouble remembering. I wake up feeling like I understand problems better, but I can't articulate why. I can, however, tackle the previous day's problems more easily.<p>It's pretty fascinating. What's even more fascinating is that often, when I do remember the dream, a lot of it is nonsense. And yet I'm doing better at the things I dreamt about.</p>
]]></description><pubDate>Sat, 02 May 2026 00:27:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47982060</link><dc:creator>silentkat</dc:creator><comments>https://news.ycombinator.com/item?id=47982060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47982060</guid></item><item><title><![CDATA[New comment by silentkat in "The Zig project's rationale for their anti-AI contribution policy"]]></title><description><![CDATA[
<p>The power of AI is that it rewards due diligence.<p>The weakness of AI is that it makes it really easy to fall into lazy habits.<p>Something about having to talk to a machine like it's a human lures me into treating it like a human. I want to treat it as a probability engine that collapses to an answer based on its input, but that input explicitly needs to be one that makes it collapse to something a reasonably knowledgeable person would respond with, which more or less means talking to it like it is that kind of person.<p>I feel like it activates the social part of my brain, and then I stop working with it properly. I'm still building the habit, though; I only recently started taking LLMs seriously as a tool.</p>
]]></description><pubDate>Thu, 30 Apr 2026 17:32:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47965721</link><dc:creator>silentkat</dc:creator><comments>https://news.ycombinator.com/item?id=47965721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47965721</guid></item><item><title><![CDATA[New comment by silentkat in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>Oh, really. Very interesting. Any links on this? I'm curious if they tried to map that 3D understanding in a way we could read it (e.g. putting it into Blender somehow).</p>
]]></description><pubDate>Sun, 05 Apr 2026 21:45:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47654203</link><dc:creator>silentkat</dc:creator><comments>https://news.ycombinator.com/item?id=47654203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47654203</guid></item><item><title><![CDATA[New comment by silentkat in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>I like to call this Frieren's Demon. In that show, it is explained that demons evolved with no common ancestor to humans, yet they speak the human language. They learned it in order to hunt humans. This leads to a fundamentally different understanding of words and language.<p>Now, I don't personally believe this is an intelligence at all, but it's possible I'm wrong. What we have with these machines is a different evolutionary reason for speaking our language (we evolved to speak our language ourselves). Their understanding of our language, and of our images, is completely alien. If it is an intelligence, I could believe that the way it makes mistakes in image generation, and the strange logical mistakes it makes that no human would make, are simply a result of that alien understanding.<p>After all, a human artist learning to draw hands makes mistakes, but those mistakes are rooted in a human understanding (e.g. the effects of perspective when translating a 3D object to 2D). A machine with a different understanding of what a hand is will instead render extra fingers (it does not conceptualize a hand as a 3D object at all).<p>Though, again, I still just think it's an incomprehensible amount of data going through a really impressive pattern matcher. The result is still language out of a machine, which is really interesting. The only reason I'm not fully confident it is not an intelligence is that I can't rule out that I myself am an incomprehensible amount of data going through a really impressive pattern matcher, just built different. I do feel like I would know a real intelligence after interacting with it for long enough, though, and none of these models feel like a real intelligence to me.</p>
]]></description><pubDate>Sat, 04 Apr 2026 22:07:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47644011</link><dc:creator>silentkat</dc:creator><comments>https://news.ycombinator.com/item?id=47644011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47644011</guid></item><item><title><![CDATA[New comment by silentkat in "Further human + AI + proof assistant work on Knuth's "Claude Cycles" problem"]]></title><description><![CDATA[
<p>My work has required us all to be "AI Native". I am AI-skeptical, but I'm the type of person who tries to do what is asked to the best of my ability. I can be wrong, after all.<p>There is some real power in AI, for sure. But as I have been working with it, one thing is very clear: either AI is not even close to a real intelligence (my take), or it is an alien intelligence. As I develop a system where it iterates on its own contexts, it definitely becomes probabilistically more likely to do the right thing, but the mistakes it makes become even more logic-defying. It's the coding equivalent of a hand with extra fingers.<p>I'm only a few weeks into really diving in. Work has given me infinite tokens to play with. I'm building my own orchestrator system that's purely programmatic and spawns agents to do work. Treat them as functions: defined inputs, defined outputs. Don't give an agent more than one goal; I find that giving it the goal of building a whole system often leads it to assert that the system works when it does not, so the verifier is a different agent. I know this is not new thinking; as I said, I am new.<p>For me, the most useful way to think about it has been to consider LLMs a probabilistic programming language. It won't really error out; it'll just try to make things work. This attitude has made it fun for me again. I love learning new languages and also love making dirty scripts that make various tasks easier.</p>
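<p>A minimal sketch of that "agents as functions" shape, with the model calls stubbed out as plain Python functions (any real setup would swap in actual LLM calls; the names here are all hypothetical):</p>

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    goal: str                    # exactly one goal per agent
    model: Callable[[str], str]  # prompt in, text out (stand-in for an LLM call)

    def run(self, task_input: str) -> str:
        prompt = f"Goal: {self.goal}\nInput: {task_input}"
        return self.model(prompt)

def orchestrate(task: str, worker: Agent, verifier: Agent, retries: int = 2) -> str:
    """Run the worker, then have a *different* agent verify the result,
    rather than trusting the worker's own claim that it succeeded."""
    for _ in range(retries + 1):
        result = worker.run(task)
        verdict = verifier.run(f"Task: {task}\nResult: {result}")
        if verdict.strip().upper().startswith("PASS"):
            return result
    raise RuntimeError("verifier never accepted the worker's output")

# Toy deterministic stand-ins so the sketch runs without any API:
worker = Agent("uppercase the input",
               lambda p: p.split("Input: ", 1)[1].upper())
verifier = Agent("check the result is uppercase",
                 lambda p: "PASS" if p.rsplit("Result: ", 1)[1].isupper() else "FAIL")

print(orchestrate("hello world", worker, verifier))
```

<p>The point of the shape is that the worker's output only counts when an agent with a different, narrower goal signs off on it.</p>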
]]></description><pubDate>Sun, 29 Mar 2026 00:18:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47559261</link><dc:creator>silentkat</dc:creator><comments>https://news.ycombinator.com/item?id=47559261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47559261</guid></item><item><title><![CDATA[New comment by silentkat in "Shall I implement it? No"]]></title><description><![CDATA[
<p>Oh, no, I had these grand plans to avoid this issue. I had been seeing it happen with various low-effort lifts, but now I'm worried it will stay a problem.</p>
]]></description><pubDate>Thu, 12 Mar 2026 23:42:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47358850</link><dc:creator>silentkat</dc:creator><comments>https://news.ycombinator.com/item?id=47358850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47358850</guid></item><item><title><![CDATA[New comment by silentkat in "Labor market impacts of AI: A new measure and early evidence"]]></title><description><![CDATA[
<p>I’m at a big tech company. They proudly cited productivity measured in commits (already nonsense): 47% more commits, 17% less time per commit. Compounded, that’s 1.47 × 0.83 ≈ 1.22, meaning about 22% more total time spent coding. Burning us out and acting like the AI slop is “unlocking” productivity.<p>There’s some neat stuff, don’t get me wrong. But every additional tool so far has started strong and then fallen over. Always.<p>Right now there’s this “orchestrator” nonsense. Cool in principle, but as someone who made scripts to automate things all the time before, it’s not impressive. I spent $200 to automate some bug finding and fixing. It found and fixed the easy stuff (still pretty neat), and then “partially verified” it fixed the other stuff.<p>The “partial verification” was it justifying why it was okay that things were broken.<p>The company has mandated we use this technology. I have an “AI Native” rating. We’re being told to put out at least 28 commits a month. It’s nonsense.<p>They’re letting me play with an expensive, super-high-level, probabilistic language, so I’m having a lot of fun. But I’m not going to lie, I’m very disappointed. I got this job a year ago, with 12 years of programming experience; it’s my first big tech job. I was hoping to learn a lot. I know my use of data to prioritize work could be better, and I was sold on their use of data. I’m sure some teams here use data really well, but I’m just not impressed.<p>And I’m not even getting into the people gaming the metrics to look good while actually making more work for everyone else.</p>
]]></description><pubDate>Fri, 06 Mar 2026 03:04:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47270323</link><dc:creator>silentkat</dc:creator><comments>https://news.ycombinator.com/item?id=47270323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47270323</guid></item><item><title><![CDATA[New comment by silentkat in "Show HN: It took 4 years to sell my startup. I wrote a book about it"]]></title><description><![CDATA[
<p>It’s a form of contrastive reduplication, used to emphasize the realness of the experience, versus secondhand experience like interviewing people who had the actual experience.<p>Also consider a phrase like “work work” versus “school work”. For someone who both works a paid job and goes to school, clarifying that they need to do “work work” makes sense.</p>
]]></description><pubDate>Sun, 08 Feb 2026 23:59:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46939892</link><dc:creator>silentkat</dc:creator><comments>https://news.ycombinator.com/item?id=46939892</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46939892</guid></item></channel></rss>