<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: youoy</title><link>https://news.ycombinator.com/user?id=youoy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 17:20:05 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=youoy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by youoy in "Teaching Claude Why"]]></title><description><![CDATA[
<p>What? Children's play is now work? What timeline are we living in? Is this real life?</p>
]]></description><pubDate>Sat, 09 May 2026 06:05:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=48072272</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=48072272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48072272</guid></item><item><title><![CDATA[New comment by youoy in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>100% agree, and I experienced that behaviour first hand. I got confident, started giving fewer guidelines, and suddenly two weeks had passed and the LLM had put me into a state of horrible code that looked good superficially, because I trusted it too much.</p>
]]></description><pubDate>Thu, 16 Apr 2026 19:41:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47798473</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=47798473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47798473</guid></item><item><title><![CDATA[New comment by youoy in "What have been the greatest intellectual achievements? (2017)"]]></title><description><![CDATA[
<p>Nicely written! I was thinking about this the other day. What is the benefit, from your point of view, of processing information full of null pointers? (I know what the benefit of not halting its programming is :P)</p>
]]></description><pubDate>Sun, 12 Apr 2026 19:34:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743520</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=47743520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743520</guid></item><item><title><![CDATA[New comment by youoy in "Intelligent people are better judges of the intelligence of others"]]></title><description><![CDATA[
<p>If I get your point based on your answers: "intelligence" cannot be divided into categories. If you are intelligent, you can be trained to do whatever skill you want; it's just a matter of being taught or exposed to the problem. So it does not make sense for it to have its own category. So if you train intelligent people to be social, they will be social; it's just software.<p>What I have seen: people can perform outstandingly well on classical intelligence almost without being taught. Think about mathematics or logic. But when you get into social/emotional territory, then it has a bigger correlation with how you were taught or what you experienced when you were a small kid (but it's not 100% causal). So in that sense it's not the same thing.<p>Now, if you are uncomfortable calling it "intelligence", feel free to call it "skills". For me it's the same thing as a football player having spatial awareness of the field. Sure, they have to be trained, but it is some "skill" that some people have an easier time using and improving.</p>
]]></description><pubDate>Tue, 07 Apr 2026 05:28:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47671057</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=47671057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47671057</guid></item><item><title><![CDATA[New comment by youoy in "Intelligent people are better judges of the intelligence of others"]]></title><description><![CDATA[
<p>Finally, a comment that is clearly 100% human.</p>
]]></description><pubDate>Mon, 06 Apr 2026 19:28:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47665723</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=47665723</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47665723</guid></item><item><title><![CDATA[New comment by youoy in "Lean 4: How the theorem prover works and why it's the new competitive edge in AI"]]></title><description><![CDATA[
<p>This site is getting invaded by AI bots... how long before it's just AI speaking with AI, and just people reading the conversations thinking that it's actual people?</p>
]]></description><pubDate>Sat, 21 Feb 2026 09:27:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47099005</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=47099005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47099005</guid></item><item><title><![CDATA[New comment by youoy in "Raising money fucked me up"]]></title><description><![CDATA[
<p>To me that graph seems to say that the pure "subconscious" stuff, or "ML-similar" stuff, peaks earlier, but comprehension peaks much later. So you perfect your tools in the brain at around 25, but then it takes another 20 years to really know how to use them correctly.</p>
]]></description><pubDate>Sun, 18 Jan 2026 07:47:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46665683</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46665683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46665683</guid></item><item><title><![CDATA[New comment by youoy in "The Eric and Wendy Schmidt Observatory System"]]></title><description><![CDATA[
<p>I would go even further: not only the vast majority, but 100% of non-pacifists like AI weapons.</p>
]]></description><pubDate>Wed, 07 Jan 2026 15:12:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46527279</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46527279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46527279</guid></item><item><title><![CDATA[New comment by youoy in "A website to destroy all websites"]]></title><description><![CDATA[
<p>I finished reading this comment wondering what I should take away from it. Is it better to include alarming titles and be read? Or the other way around? Or where would the sweet spot in the middle be?</p>
]]></description><pubDate>Fri, 02 Jan 2026 12:18:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46464063</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46464063</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46464063</guid></item><item><title><![CDATA[New comment by youoy in "Children and Helical Time"]]></title><description><![CDATA[
<p>I quote for context:<p>> But what about those of us who are well into the flattening part of the curve, what can we do for ourselves? You can seek new experiences perhaps. If time goes faster because your life has fewer firsts and more routine, then it can be extended by adding firsts. You can learn new things, travel, take up hobbies, or new careers.<p>> This works, to a point, but there are only so many firsts for you, and chasing this exclusively seems to lead to resentment. You remember the things you had as a kid. You remember the excitement and warmth of that world, how immediate and raw everything felt, and you want to go back. You start to regret that the world has changed, even though what changed the most is you.<p>I like to think that life speeds up once you form a stable image and story of yourself. The more you convince yourself that that image is fixed, the faster time will go by. That might explain why childhood seems longer, since that image seems to form around adolescence.<p>Experiencing new "firsts" while keeping that image of yourself fixed only works for a while. That is why it may lead to resentment, as the article says.<p>So don't fool yourself: some image of who you are gives you some stability, but use it just for that, so that you don't go crazy with options.<p>If you treat every event as something that might reshape your ego, then suddenly a large number of experiences are new, and time suddenly slows down. It may even appear to disappear from time to time.</p>
]]></description><pubDate>Thu, 01 Jan 2026 11:32:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46453283</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46453283</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46453283</guid></item><item><title><![CDATA[New comment by youoy in "Is it a bubble?"]]></title><description><![CDATA[
<p>It completely depends on the way you prompt the model. Nothing prevents you from telling it exactly what you want, down to specifying the files and lines to focus on. In my experience, anything other than that is a recipe for failure in sufficiently complex projects.</p>
]]></description><pubDate>Thu, 11 Dec 2025 08:10:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46228858</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46228858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46228858</guid></item><item><title><![CDATA[New comment by youoy in "Is it a bubble?"]]></title><description><![CDATA[
<p>I think that the main misunderstanding is that we used to think programming=coding, but this is not the case. LLMs allow people to use natural language as a programming language, but you still need to program. As with every programming language, it requires you to learn how to use it.<p>Not everyone needs to be excited about LLMs, in the same way that C++ developers don't need to be excited about Python.</p>
]]></description><pubDate>Thu, 11 Dec 2025 08:00:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46228812</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46228812</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46228812</guid></item><item><title><![CDATA[New comment by youoy in "Most technical problems are people problems"]]></title><description><![CDATA[
<p>That would be the case in an idealized world. As with everything, this depends on the circumstances and the economic activity of where the person lives. I guess that through North American eyes it is the employee's fault if the employee cannot find some other job, since the only constraint on doing so is personal drive. But there are other economic/educational constraints that don't allow people the mobility necessary for your example to be efficient and accurate.</p>
]]></description><pubDate>Sat, 06 Dec 2025 07:08:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46171340</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46171340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46171340</guid></item><item><title><![CDATA[New comment by youoy in "Most technical problems are people problems"]]></title><description><![CDATA[
<p>You were talking about exploitation. Using the fact that the employee cannot obtain a better employment elsewhere to extract as much of the production or value from the employee smells a lot like exploitation to me.</p>
]]></description><pubDate>Fri, 05 Dec 2025 19:42:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46166280</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46166280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46166280</guid></item><item><title><![CDATA[New comment by youoy in "Most technical problems are people problems"]]></title><description><![CDATA[
<p>In the end this depends on your definition of "fair". What percentage of your generated production do you think is fair for the company to take? 95%? 50%? 10%?</p>
]]></description><pubDate>Fri, 05 Dec 2025 18:48:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46165510</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46165510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46165510</guid></item><item><title><![CDATA[New comment by youoy in "Mathematics is hard for mathematicians to understand too"]]></title><description><![CDATA[
<p>Notation and symbology come out of a minmax optimisation: minimizing complexity while maximizing reach. As with every local critical point, it is probably not the only state we could have ended up at.<p>For example, for your point 1: we could probably start there, but once you get familiar with the notation you don't want to keep writing a huge list of parameters, so you would probably come up with a higher-level, more abstract data structure to write as an input. And then the next generation would complain that the data structure is too abstract/takes too much effort to be communicated to someone new to the field, because they did not live first hand the problem that made you come up with a solution.<p>And for your point 2: where do you draw the line with your hyperlinks? If you mention the real plane, do you reference the construction of the real numbers? And dimension? If you reason a proof by contradiction, do you reference the axioms of logic? If you say "let {xn} be a converging sequence", do you reference convergence, natural numbers and sets? Or just convergence? It's not that simple, so we came up with a minmax solution, which is what everybody does now.<p>Having said this, there are a lot of articles and books that are not easy to understand. But that is probably more an issue of them being written by someone who is bad at communicating than of the notation.</p>
]]></description><pubDate>Wed, 03 Dec 2025 15:17:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46135468</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46135468</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46135468</guid></item><item><title><![CDATA[New comment by youoy in "Mathematics is hard for mathematicians to understand too"]]></title><description><![CDATA[
<p>> As Venkatesh concludes in his lecture about the future of mathematics in a world of increasingly capable AI, “We have to ask why are we proving things at all?” Thurston puts it like this: there will be a “continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true.”<p>This type of reasoning becomes void if instead of "AI" we used something like "AGA", or "Artificial General Automation", which is a closer description of what we actually have (natural language as a programming language).<p>Increasingly capable AGA will do things that mathematicians do not like doing. Who wants to compute logarithmic tables by hand? Calculators solved that. Who wants to compute chaotic dynamical systems by hand? Computer simulations solved that. Who wants to improve a real analysis bound over an integral by 2% to get closer to the optimal bound? AGA is very capable of doing that. We only want to do it ourselves if it actually helps us understand why, and surfaces some structure. If not, who cares if it's you who does it or a machine that knows all of the olympiad-type tricks.</p>
]]></description><pubDate>Wed, 03 Dec 2025 15:00:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46135255</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46135255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46135255</guid></item><item><title><![CDATA[New comment by youoy in "AI Is Breaking the Moral Foundation of Modern Society"]]></title><description><![CDATA[
<p>> Right now, even people who reject meritocracy understand its logic. You develop rare skills, you work hard, you create value, and you capture some of that value.<p>The premise is that AI no longer allows you to do this, which is completely false. It may not allow you to do it in the same way, so it's true that some jobs may disappear, but others will be created.<p>The article is too alarmist, by someone who has drunk all of the corporate hype. AI is not AGI. AI is an automation tool, like any other that we have invented before. The cool thing is that now we can use natural language as a programming language, which was not possible before. If you treat AI as something that can think, you will fail again and again. If you treat it as an automation tool that cannot think, you will get all of the benefits.<p>Here I am talking about work. Of course AI has introduced a new scale of AI slop, and that has other psychological impacts on society.</p>
]]></description><pubDate>Wed, 03 Dec 2025 07:20:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46131191</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46131191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46131191</guid></item><item><title><![CDATA[New comment by youoy in "Is Matrix Multiplication Ugly?"]]></title><description><![CDATA[
<p>So is what I wrote a third one? Fourth? Fifth? :)</p>
]]></description><pubDate>Sat, 22 Nov 2025 09:29:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46013428</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46013428</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46013428</guid></item><item><title><![CDATA[New comment by youoy in "Is Matrix Multiplication Ugly?"]]></title><description><![CDATA[
<p>I get your point, but I think the real issue is -(1/(-1/x)). It is the one that is most overlooked in our society, as if it were something normal, but it contains some of the deepest truths imho.</p>
]]></description><pubDate>Sat, 22 Nov 2025 07:46:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46012958</link><dc:creator>youoy</dc:creator><comments>https://news.ycombinator.com/item?id=46012958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46012958</guid></item></channel></rss>