<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: slickytail</title><link>https://news.ycombinator.com/user?id=slickytail</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 08:49:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=slickytail" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by slickytail in "Digital Red Queen: Adversarial Program Evolution in Core War with LLMs"]]></title><description><![CDATA[
<p>I think it would, for all practical purposes, be impossible to determine an optimal warrior, even at very small core sizes. Not only is the search space huge, but the evaluation function can take unbounded time to resolve. The halting problem embedded inside the optimization target is a clue to the problem's difficulty.</p>
]]></description><pubDate>Fri, 09 Jan 2026 00:19:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46548466</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=46548466</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46548466</guid></item><item><title><![CDATA[New comment by slickytail in "The terminal of the future"]]></title><description><![CDATA[
<p>As a non-Emacs user, I would be really interested in a full writeup of how this works.</p>
]]></description><pubDate>Wed, 12 Nov 2025 07:06:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45897165</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=45897165</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45897165</guid></item><item><title><![CDATA[New comment by slickytail in "Do not accept terms and conditions"]]></title><description><![CDATA[
<p>The site has a little game where you click to dismiss as many cookie-consent popups as possible in 30 seconds. I suppose if you can't see the cookie-consent popups, then at the end of the timer you just have zero points.</p>
]]></description><pubDate>Tue, 21 Oct 2025 20:17:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45661117</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=45661117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45661117</guid></item><item><title><![CDATA[New comment by slickytail in "The Unknotting Number Is Not Additive"]]></title><description><![CDATA[
<p>In the words of Kronecker: "God created the integers, all else is the work of man."</p>
]]></description><pubDate>Thu, 09 Oct 2025 07:53:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45524802</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=45524802</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45524802</guid></item><item><title><![CDATA[New comment by slickytail in "Is a movie prop the ultimate laptop bag?"]]></title><description><![CDATA[
<p>Two different groups of people commenting on different things.</p>
]]></description><pubDate>Mon, 22 Sep 2025 22:30:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45340407</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=45340407</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45340407</guid></item><item><title><![CDATA[New comment by slickytail in "Interview with Japanese Demoscener 0b5vr"]]></title><description><![CDATA[
<p>In what sense is this 64KB? Clearly there's more than 64KB of code in the repo. And since it's TypeScript, it's not like there's a binary that could be 64KB.</p>
]]></description><pubDate>Fri, 05 Sep 2025 18:16:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45141766</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=45141766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45141766</guid></item><item><title><![CDATA[New comment by slickytail in "What Is the Fourier Transform?"]]></title><description><![CDATA[
<p>In a discrete Fourier transform, the number of frequencies you get out is the number of datapoints you have as input. This is because any frequencies higher than that are impossible to recover (i.e., they are above the Nyquist frequency, half the sampling frequency).<p>But in the continuous Fourier transform, the output you get is a continuous function that's defined on the entire real line.</p>
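<p>An illustrative NumPy sketch of the discrete case (not part of the original comment; the signal and sizes are my own choices): n input samples yield exactly n frequency bins.</p>

```python
import numpy as np

# 8 samples of a 2 Hz sine, sampled at 8 Hz over one second
n = 8
t = np.arange(n) / n
x = np.sin(2 * np.pi * 2 * t)

spectrum = np.fft.fft(x)
# The DFT returns exactly as many frequency bins as input samples
assert spectrum.shape == (n,)
# Energy sits in the 2 Hz bin and its mirror at index n - 2
print(np.round(np.abs(spectrum), 3))  # [0. 0. 4. 0. 0. 0. 4. 0.]
```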
]]></description><pubDate>Fri, 05 Sep 2025 16:37:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45140548</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=45140548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45140548</guid></item><item><title><![CDATA[New comment by slickytail in "New research reveals longevity gains slowing, life expectancy of 100 unlikely"]]></title><description><![CDATA[
<p>Traditional suicide is incredibly stigmatized; ending one's life manually is a huge trauma to place on loved ones. The benefit of MAID is that it's dignified, and won't leave families searching for answers after a death.</p>
]]></description><pubDate>Sun, 31 Aug 2025 09:36:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45081835</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=45081835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45081835</guid></item><item><title><![CDATA[New comment by slickytail in "From multi-head to latent attention: The evolution of attention mechanisms"]]></title><description><![CDATA[
<p>The transformer <i>was</i> a major breakthrough in NLP, and it was clear at the time of publishing that it would have a major impact. But I will add that it is common in the Deep Learning field to give papers catchy titles (see, off the top of my head: all the YOLO papers, ViT, DiT, textual inversion). The transformer paper is one in a long line of seminal papers with funny names.</p>
]]></description><pubDate>Sat, 30 Aug 2025 10:21:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45073470</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=45073470</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45073470</guid></item><item><title><![CDATA[New comment by slickytail in "The warning signs the AI bubble is about to burst"]]></title><description><![CDATA[
<p>I think the biggest sign of a potential crash is that Meta has frozen AI hiring. Such a quick reversal from just a few months ago.</p>
]]></description><pubDate>Fri, 22 Aug 2025 16:52:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44986795</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=44986795</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44986795</guid></item><item><title><![CDATA[New comment by slickytail in "The surprise deprecation of GPT-4o for ChatGPT consumers"]]></title><description><![CDATA[
<p>I stand corrected! Thanks.</p>
]]></description><pubDate>Sat, 09 Aug 2025 07:39:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44844764</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=44844764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44844764</guid></item><item><title><![CDATA[New comment by slickytail in "The surprise deprecation of GPT-4o for ChatGPT consumers"]]></title><description><![CDATA[
<p>Source for this? My understanding was that this was true for DALL-E 3, but that the autoregressive image generation just takes in the entire chat context — no hidden prompt.</p>
]]></description><pubDate>Fri, 08 Aug 2025 23:04:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44842582</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=44842582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44842582</guid></item><item><title><![CDATA[New comment by slickytail in "Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?"]]></title><description><![CDATA[
<p>The memory bandwidth on an H100 is about 3TB/s, for reference. That number is the limiting factor in the serving speed of modern LLMs: at low batch sizes, every weight has to be streamed from memory for each generated token. 100GB/s isn't even in the realm of viability.</p>
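<p>A rough back-of-envelope sketch (all numbers are illustrative assumptions, not from this thread): for a memory-bound decoder at batch size 1, every weight is streamed once per generated token, so tokens/sec is bounded by bandwidth divided by model size.</p>

```python
# Back-of-envelope decode-speed bound for a memory-bound LLM.
# Bandwidths and model size below are illustrative assumptions.
def tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Upper bound: each token requires one full pass over the weights."""
    return bandwidth_gb_s / model_gb

h100 = tokens_per_sec(3000, 140)   # ~3 TB/s HBM, 70B params at fp16
laptop = tokens_per_sec(100, 140)  # 100 GB/s system RAM, same model
print(f"{h100:.1f} vs {laptop:.2f} tokens/s")  # 21.4 vs 0.71 tokens/s
```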
]]></description><pubDate>Fri, 08 Aug 2025 22:43:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44842451</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=44842451</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44842451</guid></item><item><title><![CDATA[New comment by slickytail in "Qwen-Image: Crafting with native text rendering"]]></title><description><![CDATA[
<p>All of this only really applies to LLMs though. LLMs are memory bound (due to higher param counts, KV caching, and causal attention) whereas diffusion models are compute bound (because of full self attention that can't be cached). So even if the memory bandwidth of an M3 ultra is close to an Nvidia card, the generation will be much faster on a dedicated GPU.</p>
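<p>One way to make the memory- vs compute-bound distinction concrete is a toy roofline model (hardware numbers below are illustrative assumptions): a workload is memory-bound when its FLOPs-per-byte falls below the hardware's ridge point, peak FLOPs divided by bandwidth.</p>

```python
# Toy roofline model: memory-bound vs compute-bound.
# Hardware and workload numbers are illustrative assumptions.
PEAK_FLOPS = 1e15      # ~1 PFLOP/s of tensor throughput
BANDWIDTH = 3e12       # ~3 TB/s of HBM bandwidth
RIDGE = PEAK_FLOPS / BANDWIDTH  # ~333 FLOPs per byte moved

def regime(flops_per_byte: float) -> str:
    """Classify a workload by where it sits relative to the ridge point."""
    return "memory-bound" if flops_per_byte < RIDGE else "compute-bound"

# Batch-1 LLM decode: each weight byte supports only a couple of FLOPs
print(regime(2))       # memory-bound
# Diffusion denoising: big batched matmuls reuse each weight many times
print(regime(2000))    # compute-bound
```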
]]></description><pubDate>Mon, 04 Aug 2025 23:12:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44792412</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=44792412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44792412</guid></item><item><title><![CDATA[New comment by slickytail in "Gemini Diffusion"]]></title><description><![CDATA[
<p>The relative unimportance of the exact SDPA attention in use in modern transformers is already known: <a href="https://arxiv.org/abs/2111.11418" rel="nofollow">https://arxiv.org/abs/2111.11418</a><p>The FFN, normalization, and residual connections are absolutely irreplaceable -- but attention can be replaced with almost any other layer that shares information between tokens, such as pooling, convolution, random mixing, etc.</p>
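<p>A minimal sketch of the idea (my illustration, not from the paper; shapes and window size are arbitrary): swap attention for local average pooling as the token mixer, keeping the residual connection.</p>

```python
import numpy as np

# Token mixing by average pooling instead of attention, in the spirit of
# replacing SDPA with any layer that shares information across tokens.
def pool_mix(x: np.ndarray, window: int = 3) -> np.ndarray:
    """x: (tokens, dim). Mix each token with its neighbors by mean pooling."""
    n, _ = x.shape
    half = window // 2
    out = np.empty_like(x)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = x[lo:hi].mean(axis=0)
    # Residual connection, as in a standard transformer block
    return x + out

x = np.arange(20, dtype=float).reshape(5, 4)
y = pool_mix(x)
assert y.shape == x.shape  # mixing preserves the (tokens, dim) shape
```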
]]></description><pubDate>Thu, 22 May 2025 09:16:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44060295</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=44060295</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44060295</guid></item><item><title><![CDATA[New comment by slickytail in "Everyone knows your location: tracking myself down through in-app ads"]]></title><description><![CDATA[
<p>The linked Wikipedia article below says that they asked you for your email password specifically -- is there any evidence that they would try to use your LinkedIn password itself?</p>
]]></description><pubDate>Tue, 04 Feb 2025 12:54:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=42931714</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=42931714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42931714</guid></item><item><title><![CDATA[New comment by slickytail in "Apple's AirPods Pro hearing health features"]]></title><description><![CDATA[
<p>Generally the problem with this type of argument is that the two sources are not volume-matched. Try out an ABX test here, of lossless vs various lossy codecs: <a href="https://abx.digitalfeed.net/" rel="nofollow">https://abx.digitalfeed.net/</a></p>
]]></description><pubDate>Tue, 22 Oct 2024 08:44:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=41912380</link><dc:creator>slickytail</dc:creator><comments>https://news.ycombinator.com/item?id=41912380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41912380</guid></item></channel></rss>