<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: thatguysaguy</title><link>https://news.ycombinator.com/user?id=thatguysaguy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 18:11:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=thatguysaguy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by thatguysaguy in "Lean proved this program correct; then I found a bug"]]></title><description><![CDATA[
<p>What is up with people saying you cannot prove a negative? Of course you can! (At least in formal settings.)<p>For example, it's extremely easy to prove there is no square with diagonals of different lengths. At the hard end, Andrew Wiles proved Fermat's Last Theorem, which expresses a negative.<p>That's just a nit though; you're right about the infinite regress problem.</p>
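<p>Since the article is about Lean, here's a minimal Lean 4 sketch of proving a negative (a toy stand-in for the examples above; the theorem name is mine):<pre><code>-- There is no natural number strictly less than zero.
-- Destructure the claimed witness and refute it with Nat.not_lt_zero.
theorem no_nat_below_zero : ¬ ∃ n : Nat, n < 0 :=
  fun ⟨n, h⟩ => Nat.not_lt_zero n h</code></pre></p>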
]]></description><pubDate>Tue, 14 Apr 2026 01:58:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47760358</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=47760358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47760358</guid></item><item><title><![CDATA[New comment by thatguysaguy in "Can you reverse engineer our neural network?"]]></title><description><![CDATA[
<p>Ah dang. When I did this I also thought the length bug was intentional, but I didn't figure it out before I started my new job, so I dropped the puzzle.</p>
]]></description><pubDate>Fri, 27 Feb 2026 17:20:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47182966</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=47182966</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47182966</guid></item><item><title><![CDATA[New comment by thatguysaguy in "Large-Scale Online Deanonymization with LLMs"]]></title><description><![CDATA[
<p>Maybe I missed something, but I see little evidence of a concerning ability to deanonymize. Many people post under a pseudonym but then link to their GitHub, etc. In fact, by construction, the HN dataset _only_ consists of people who are comfortable with their real identity being linked to it.<p>The real question is whether someone who is pseudonymous and actually attempting to remain so can be deanonymized.</p>
]]></description><pubDate>Wed, 25 Feb 2026 23:07:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47159347</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=47159347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47159347</guid></item><item><title><![CDATA[New comment by thatguysaguy in "Gemini 3 Deep Think drew me a good SVG of a pelican riding a bicycle"]]></title><description><![CDATA[
<p>You can just try other SVGs; I got some pretty good ones.<p>(Disclaimer: I work for Google, but I also have zero idea what they trained Deep Think on.)</p>
]]></description><pubDate>Sat, 14 Feb 2026 20:36:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47018136</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=47018136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47018136</guid></item><item><title><![CDATA[New comment by thatguysaguy in "TPUs vs. GPUs and why Google is positioned to win AI race in the long term"]]></title><description><![CDATA[
<p>TPUs predate LLMs by a long time. They were already being used for all the other internal ML work needed for Search, YouTube, etc.</p>
]]></description><pubDate>Fri, 28 Nov 2025 05:48:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46075864</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=46075864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46075864</guid></item><item><title><![CDATA[New comment by thatguysaguy in "AI has a deep understanding of how this code works"]]></title><description><![CDATA[
<p>I'm actually not talking about whether the PR works or was tested. Let's just assume it was bug-free and worked as advertised. I would say that even in that situation, they should not accept the PR. The reason is that no one is the owner of that code. None of the maintainers will want to dedicate their volunteer time to owning your code/the AI's code, and the AI itself can't become the owner of the code in any meaningful way. (At least not without some very involved engineering work on building a harness, and since that's still a research-level project, it's clearly something that should be discussed at the project level, not just assumed.)</p>
]]></description><pubDate>Thu, 27 Nov 2025 20:17:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46072887</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=46072887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46072887</guid></item><item><title><![CDATA[New comment by thatguysaguy in "AI has a deep understanding of how this code works"]]></title><description><![CDATA[
<p>A big part of software engineering is maintenance, not just adding features. When you drop a 22,000-line PR without any discussion or previous work on the project, people will (probably correctly) assume that you aren't there for the long haul to take care of it.<p>On top of that, there's a huge asymmetry when people use AI to spit out huge PRs and expect thorough review from project maintainers. Of course they're not going to review your PR!</p>
]]></description><pubDate>Thu, 27 Nov 2025 08:23:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46067012</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=46067012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46067012</guid></item><item><title><![CDATA[New comment by thatguysaguy in "FFmpeg dealing with a security researcher"]]></title><description><![CDATA[
<p>It's a volunteer-run project... Saying that they have a duty to do anything other than what they want is quite strange.</p>
]]></description><pubDate>Sun, 02 Nov 2025 06:00:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=45788126</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=45788126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45788126</guid></item><item><title><![CDATA[New comment by thatguysaguy in "Updated practice for review articles and position papers in ArXiv CS category"]]></title><description><![CDATA[
<p>Verification via LLM tends to break under quite small optimization pressure. For example, I did RL to improve <insert aspect> against one of the SOTA models from one generation ago, and the (quite weak) learner model found out that it could emit a few nonsense words to get the max score.<p>That's without even being able to backprop through the annotator, and with me actively trying to avoid reward hacking. If arXiv used an open model for review, it would be trivial for people to insert a few grammatical mistakes that cause them to receive max points.</p>
]]></description><pubDate>Sat, 01 Nov 2025 17:07:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45783322</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=45783322</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45783322</guid></item><item><title><![CDATA[New comment by thatguysaguy in "Meta is axing 600 roles across its AI division"]]></title><description><![CDATA[
<p>FAIR is not "older AI"... They've been publishing a bunch on generative models.</p>
]]></description><pubDate>Wed, 22 Oct 2025 18:47:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45673454</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=45673454</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45673454</guid></item><item><title><![CDATA[New comment by thatguysaguy in "BERT is just a single text diffusion step"]]></title><description><![CDATA[
<p>Back when BERT came out, everyone was trying to get it to generate text. These attempts generally didn't work; here's one for reference, though: <a href="https://arxiv.org/abs/1902.04094" rel="nofollow">https://arxiv.org/abs/1902.04094</a><p>This doesn't have an explicit diffusion tie-in, but Savinov et al. at DeepMind figured out that doing two steps at training time and randomizing the masking probability is enough to get it to work reasonably well.</p>
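<p>A rough sketch of those two training-time ideas (my own PyTorch sketch, not the paper's code; "model" is assumed to map token ids to per-position logits):<pre><code>import torch
import torch.nn.functional as F

def two_step_denoising_loss(model, tokens, mask_id):
    # tokens: (batch, seq) int64. Randomize the corruption rate per
    # example instead of BERT's fixed 15% masking.
    b, s = tokens.shape
    rate = torch.rand(b, 1, device=tokens.device)
    corrupt = torch.rand(b, s, device=tokens.device) < rate
    noised = torch.where(corrupt, torch.full_like(tokens, mask_id), tokens)

    # Step 1: denoise the corrupted input.
    logits1 = model(noised)  # (batch, seq, vocab)
    loss1 = F.cross_entropy(logits1.transpose(1, 2), tokens)

    # Step 2: feed the model's own samples back in, so training matches
    # the iterated refinement used at generation time.
    resampled = torch.distributions.Categorical(logits=logits1).sample()
    logits2 = model(resampled)
    loss2 = F.cross_entropy(logits2.transpose(1, 2), tokens)
    return loss1 + loss2</code></pre>At generation time you'd start from all-mask tokens and repeatedly resample from the model's own logits.</p>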
]]></description><pubDate>Mon, 20 Oct 2025 16:21:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45645680</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=45645680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45645680</guid></item><item><title><![CDATA[New comment by thatguysaguy in "[dead]"]]></title><description><![CDATA[
<p>I would recommend going and reading what the Bluesky leadership actually wrote, rather than this post's summary of it.</p>
]]></description><pubDate>Mon, 06 Oct 2025 15:11:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45492270</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=45492270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45492270</guid></item><item><title><![CDATA[New comment by thatguysaguy in "Are OpenAI and Anthropic losing money on inference?"]]></title><description><![CDATA[
<p>Why would you think DeepSeek is more efficient than GPT-5/Claude 4, though? There's been enough time to integrate the lessons from DeepSeek.</p>
]]></description><pubDate>Thu, 28 Aug 2025 16:10:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45053923</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=45053923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45053923</guid></item><item><title><![CDATA[New comment by thatguysaguy in "Are OpenAI and Anthropic losing money on inference?"]]></title><description><![CDATA[
<p>37 billion bytes per token?<p>Edit: Oh, assuming this is an estimate based on the model weights moving from HBM to SRAM, that's not how transformers are applied to input tokens. You only have to move the weights for every token during generation, not during "prefill". (And during generation you can use speculative decoding to do better than this roofline anyway.)</p>
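<p>Back-of-the-envelope version of that roofline (the numbers are illustrative assumptions, not measurements):<pre><code># During decode, each generated token needs roughly one full pass of the
# weights through HBM, so tokens/sec is bounded by bandwidth / weight bytes.
WEIGHT_BYTES = 37e9      # the article's figure, ~37 GB of weights (assumed)
HBM_BANDWIDTH = 3.35e12  # e.g. H100 SXM HBM3, bytes/sec (nominal)

decode_tok_per_s = HBM_BANDWIDTH / WEIGHT_BYTES
print(f"decode roofline: ~{decode_tok_per_s:.0f} tok/s")  # ~91 tok/s

# Prefill is different: the whole prompt goes through in one batched pass,
# so the same weight traffic is amortized over every prompt token at once.
prompt_len = 2048
print(f"prefill, idealized: ~{decode_tok_per_s * prompt_len:.0f} tok/s")</code></pre></p>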
]]></description><pubDate>Thu, 28 Aug 2025 14:59:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45053047</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=45053047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45053047</guid></item><item><title><![CDATA[New comment by thatguysaguy in "What are the real numbers, really? (2024)"]]></title><description><![CDATA[
<p>Joel's blog in general is a great read. I highly recommend subscribing.</p>
]]></description><pubDate>Thu, 14 Aug 2025 19:30:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44904588</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=44904588</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44904588</guid></item><item><title><![CDATA[New comment by thatguysaguy in "A.I. researchers are negotiating $250M pay packages"]]></title><description><![CDATA[
<p>At least part of it is that the capex for LLM training is so high. It used to be that compute was extremely cheap compared to staff, but that's no longer the case for large-model training.</p>
]]></description><pubDate>Sun, 03 Aug 2025 15:55:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44777426</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=44777426</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44777426</guid></item><item><title><![CDATA[New comment by thatguysaguy in "Ask HN: What's with the repeated job posts on "Who's hiring"?"]]></title><description><![CDATA[
<p>I both got a job through such a thread and have now seen the other side of the applicant pipeline. The average applicant (in general; I don't know about HN in particular) is not very strong! That's especially true when you consider the alternative of preserving runway and being patient.</p>
]]></description><pubDate>Tue, 03 Jun 2025 23:45:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44175957</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=44175957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44175957</guid></item><item><title><![CDATA[New comment by thatguysaguy in "I Want No One Else to Succeed"]]></title><description><![CDATA[
<p>> Do you think the students in that poll had really thought about the credibility of their university when voting?<p>That's fair, and of course I'm not sure. I guess a more interesting question is what would happen if there were another option along the lines of what I described. I do know that this is a conversation we explicitly had in my department many times. It was an open secret that cheating was rampant, and a degree from that CS department isn't very prestigious. Those two things aren't unrelated.<p>To your point about a single class giving all A's not damaging it, you're right of course. My point is that this is a classic tragedy of the commons. One plane flight, one extra datacenter, etc. isn't moving the needle on climate change, but all put together it does.<p>> If that were the case, the people protesting student loan forgiveness should be at the forefront of demanding increased coverage for Medicare and better social security systems in general<p>I agree, but my response is simple: I don't think either major party in the US has a principled stance on economic issues. There is wild fluctuation in behavior on an issue-to-issue basis. The fact that most students/political parties/people in the universe don't have a coherent set of principles shouldn't stop us from trying to have them!</p>
]]></description><pubDate>Fri, 30 May 2025 20:33:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44139633</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=44139633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44139633</guid></item><item><title><![CDATA[New comment by thatguysaguy in "I Want No One Else to Succeed"]]></title><description><![CDATA[
<p>I think the author doesn't understand the example correctly, although to be fair I don't think the professor put the most important option on there either.<p>Imagine there are two schools: one gives all students a 95% in all their classes, one grades normally. Which school do you interview people from? When a teacher gives out free A's (or when students cheat), it's not a victimless change; it degrades a shared resource (the credibility of the school).<p>The loan forgiveness thing is not a free action either: it's a handout of money to a specific demographic (college graduates), and in particular one that is much more affluent than the people who need government subsidies the most. The government handing out money is not a free action!</p>
]]></description><pubDate>Thu, 29 May 2025 22:18:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44130932</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=44130932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44130932</guid></item><item><title><![CDATA[New comment by thatguysaguy in "The "AI 2027" Scenario: How realistic is it?"]]></title><description><![CDATA[
<p>Yeah, I wouldn't make a deal like this with someone who is operating in bad faith... The cases I've seen of this are between public intellectuals with relatively modest amounts of money.</p>
]]></description><pubDate>Thu, 22 May 2025 19:18:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44065707</link><dc:creator>thatguysaguy</dc:creator><comments>https://news.ycombinator.com/item?id=44065707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44065707</guid></item></channel></rss>