<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hawtads</title><link>https://news.ycombinator.com/user?id=hawtads</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 21:03:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hawtads" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hawtads in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>We are working on making agentic ads and regulatory compliance scalable.<p><a href="https://hawtads.com" rel="nofollow">https://hawtads.com</a><p>Just launched the blog, too:<p><a href="https://blog.hawtads.com/" rel="nofollow">https://blog.hawtads.com/</a></p>
]]></description><pubDate>Mon, 13 Apr 2026 04:58:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47747779</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47747779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47747779</guid></item><item><title><![CDATA[New comment by hawtads in "Finding all regex matches has always been O(n²)"]]></title><description><![CDATA[
<p>The original Kleene star regex was invented to model neural networks. Have you tried throwing a transformer at the problem? /s Also O(n²), but at least you get hardware acceleration ¯\_(ツ)_/¯<p>Here's Kleene's "Representation of Events in Nerve Nets and Finite Automata":<p><a href="https://www.rand.org/content/dam/rand/pubs/research_memoranda/2008/RM704.pdf" rel="nofollow">https://www.rand.org/content/dam/rand/pubs/research_memorand...</a></p>
]]></description><pubDate>Tue, 24 Mar 2026 07:46:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47499671</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47499671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47499671</guid></item><item><title><![CDATA[New comment by hawtads in "Bayesian statistics for confused data scientists"]]></title><description><![CDATA[
<p>Ooh good find, thanks for the link. This will be my bedtime reading for this week :)</p>
]]></description><pubDate>Sun, 22 Mar 2026 03:30:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47474195</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47474195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47474195</guid></item><item><title><![CDATA[New comment by hawtads in "Bayesian statistics for confused data scientists"]]></title><description><![CDATA[
<p>I am more familiar with Bayesian than frequentist stats, but given that they are mathematically equivalent, shouldn't frequentist stats have an answer to, e.g., the loss function of a VAE? Or is generative machine learning inherently impossible to model with frequentist stats?<p>Though if you think about it, a diffusion model is somewhat (partially) frequentist.</p>
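<p>For concreteness, here's a minimal sketch of the VAE objective I mean (PyTorch, assuming a Gaussian encoder and Bernoulli decoder; just an illustration, not any particular paper's code). The ELBO is a reconstruction term plus a KL regularizer, which is exactly where the prior/posterior machinery shows up:<p><pre><code>import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term: expected log-likelihood under a Bernoulli decoder.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Closed-form KL between the approximate posterior q(z|x) = N(mu, sigma^2)
    # and the standard normal prior p(z) = N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Negative ELBO: minimizing this maximizes a lower bound on log p(x).
    return recon + kl
</code></pre>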
]]></description><pubDate>Sun, 22 Mar 2026 03:26:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47474181</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47474181</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47474181</guid></item><item><title><![CDATA[New comment by hawtads in "Bayesian statistics for confused data scientists"]]></title><description><![CDATA[
<p>I think it would be interesting if frequentist stats could come up with more generative models. Current high-level generative machine learning models all rely on Bayesian modeling.</p>
]]></description><pubDate>Sun, 22 Mar 2026 03:17:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47474124</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47474124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47474124</guid></item><item><title><![CDATA[New comment by hawtads in "The Ugliest Airplane: An Appreciation"]]></title><description><![CDATA[
<p>Well, hope they reinforced the wings; that's a massive weak point for dusters.</p>
]]></description><pubDate>Sat, 21 Mar 2026 17:06:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47468884</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47468884</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47468884</guid></item><item><title><![CDATA[New comment by hawtads in "The Ugliest Airplane: An Appreciation"]]></title><description><![CDATA[
<p>A 50-knot rotation is perfectly fine for a plane that size. A Cessna Skyhawk is certified to rotate at 55 knots fully loaded (and since the stall speed is around 40 knots, rotation speed for specialty take-offs like soft fields is much lower, so 50 knots is more than enough).</p>
]]></description><pubDate>Sat, 21 Mar 2026 06:56:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47464628</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47464628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47464628</guid></item><item><title><![CDATA[New comment by hawtads in "OpenCode – Open source AI coding agent"]]></title><description><![CDATA[
<p>No, Claude on GitHub Copilot is billed at 3x the usage rate of the other models (e.g. GPT-5.4), and you get an extremely truncated context window.<p>See <a href="https://models.dev" rel="nofollow">https://models.dev</a> for a comparison against the normal "vanilla" API.</p>
]]></description><pubDate>Sat, 21 Mar 2026 02:45:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47463536</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47463536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47463536</guid></item><item><title><![CDATA[New comment by hawtads in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>We are building an agentic ad tech system optimized for real time and scale. The process of making an ad, from ideation to distribution, is traditionally exceptionally labor-intensive. We are making it possible to target, design, and distribute ads at scale and in real time.<p><a href="https://hawtads.com" rel="nofollow">https://hawtads.com</a></p>
]]></description><pubDate>Mon, 09 Mar 2026 04:41:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304949</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=47304949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304949</guid></item><item><title><![CDATA[New comment by hawtads in "Ask HN: What are you working on? (January 2026)"]]></title><description><![CDATA[
<p>I have been working on the next generation of Canva and Photoshop for highly regulated verticals where there are specific demands placed on the generation and editing flow.<p><a href="https://hawtads.com" rel="nofollow">https://hawtads.com</a><p>If you are a brand that needs to deploy advertisements at scale, don't hesitate to reach out.</p>
]]></description><pubDate>Sun, 11 Jan 2026 19:18:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46578868</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=46578868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46578868</guid></item><item><title><![CDATA[New comment by hawtads in "Opus 4.5 is not the normal AI agent experience that I have had thus far"]]></title><description><![CDATA[
<p>It could have exceeded its real context window size (or the artificially truncated one), and the dynamic summarization step failed to capture the important bits of information you wanted. Alternatively, the information might have landed in places in the context window where the model performs poorly at needle-in-a-haystack retrieval.<p>This is part of the reason why people use external data stores (e.g. vector databases, graph tools like Beads, etc.) in the hope of supplementing the agent's native context window and task-management tools.<p><a href="https://github.com/steveyegge/beads" rel="nofollow">https://github.com/steveyegge/beads</a><p>The whole field is still in its infancy. Who knows, maybe in another update or two the problem might just be solved. It's not like needle-in-the-haystack problems aren't differentiable (mathematically speaking).</p>
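<p>As a rough illustration of the external-store idea (numpy only; embed() here is a hypothetical stand-in for whatever embedding model you use), the pattern is: embed your chunks once, then each turn retrieve the few most similar ones and put only those into the prompt instead of keeping everything in the context window:<p><pre><code>import numpy as np

def embed(text):
    # Hypothetical embedding call; in practice this hits an embedding model.
    raise NotImplementedError

def build_index(chunks):
    vecs = np.stack([embed(c) for c in chunks])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def retrieve(query, chunks, index, k=3):
    q = embed(query)
    q = q / np.linalg.norm(q)
    scores = index @ q                     # cosine similarity
    top = np.argsort(scores)[-k:][::-1]    # indices of the best k chunks
    return [chunks[i] for i in top]

# Only the retrieved chunks get prepended to the prompt, so just the
# relevant slice of "memory" competes for space in the context window.
</code></pre>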
]]></description><pubDate>Fri, 09 Jan 2026 19:00:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46557668</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=46557668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46557668</guid></item><item><title><![CDATA[New comment by hawtads in "Opus 4.5 is not the normal AI agent experience that I have had thus far"]]></title><description><![CDATA[
<p>Okay, here's the tl;dr:<p>Attention-based neural network architectures (on which the majority of LLMs are built) have a unit economic cost that scales roughly as n², i.e. quadratically (for both memory and compute). In other words, the longer the context window, the more expensive it is for the upstream provider. That's one cost.<p>The second cost is that you have to resend the entire context every time you send a new message. So the context is basically (where a, b, and c are messages): first context: a, second context window: a->b, third context window: a->b->c. From the developer's point of view it's a mostly stateless process (there are some short-term caching mechanisms, YMMV by provider; that's why "cached" messages, especially system prompts, are cheaper); the state, i.e. the context window string, is managed by the end-user application (in other words, the coding agent, the IDE, the ChatGPT UI client, etc.).<p>The per-token cost is an <i>amortized</i> (averaged) cost of memory+compute; the actual cost is mostly quadratic with respect to each marginal token. The longer the context window, the more expensive things are.
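<p>To make the resend-everything point concrete, here's a minimal sketch of what a chat client does on every turn (call_llm is a stand-in for whatever chat-completion API you use, not any specific provider's SDK):<p><pre><code>def call_llm(messages):
    # Stand-in for a chat-completion API call; the provider only ever sees
    # the list of messages you send on this one request.
    ...

history = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_text):
    history.append({"role": "user", "content": user_text})
    # The ENTIRE history goes over the wire every turn: a, then a->b,
    # then a->b->c, so input tokens (and attention cost) grow each time.
    reply = call_llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply
</code></pre>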
Because of the above, AI agent providers (especially those that charge flat-fee subscription plans) are incentivized to keep costs low by limiting the maximum context window size.<p>(And if you think about it carefully, your AI API costs are a quadratic cost curve projected onto a linear line: a flat fee per token. So the model hosting provider may in some cases make more profit if users send in shorter contexts than if they constantly saturate the window. YMMV of course, but it's a race to the bottom right now for LLM unit economics.)<p>They do this by interrupting a task halfway through and generating a "summary" of the task progress, then prompting the LLM again with a fresh prompt plus the "summary" so far, and the LLM restarts the task from where it left off. Of course text is a poor representation of the LLM's internal state, but it's the best option so far for AI applications to keep costs low.<p>Another thing to keep in mind is that LLMs perform worse the larger the input. This is due to a variety of factors (mostly, I think, because there isn't enough training data to saturate the massive context window sizes).<p>The general graph for LLM context performance looks something like this:
<a href="https://cobusgreyling.medium.com/llm-context-rot-28a6d0399655" rel="nofollow">https://cobusgreyling.medium.com/llm-context-rot-28a6d039965...</a>
<a href="https://research.trychroma.com/context-rot" rel="nofollow">https://research.trychroma.com/context-rot</a><p>There are a bunch of tests and benchmarks (commonly referred to as "needle in a haystack") to improve the LLM performance at large context window sizes, but it's still an open area of research.<p><a href="https://cloud.google.com/blog/products/ai-machine-learning/the-needle-in-the-haystack-test-and-how-gemini-pro-solves-it" rel="nofollow">https://cloud.google.com/blog/products/ai-machine-learning/t...</a><p>The thing is, <i>generally speaking</i>, you will get a slightly better performance if you can squeeze all your code and problem into the context window, because the LLM can get a "whole picture" view of your codebase/problem, instead of a bunch of broken telephone summaries every dozen of thousands of tokens. Take this with a grain of salt as the field is changing rapidly so it might not be valid in a month or two.<p>Keep in mind that if the problem you are solving requires you to saturate the entire context window of the LLM, a <i>single</i> request can cost you dollars. And if you are using 1M+ context window model like gemini, you can rack up costs fairly rapidly.</p>
]]></description><pubDate>Thu, 08 Jan 2026 19:30:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46545344</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=46545344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46545344</guid></item><item><title><![CDATA[New comment by hawtads in "Opus 4.5 is not the normal AI agent experience that I have had thus far"]]></title><description><![CDATA[
<p>Copilot and many coding agents truncate the context window and use dynamic summarization to keep their costs low. That's how they are able to offer flat-fee plans.<p>You can see some of the context limits here:<p><a href="https://models.dev/" rel="nofollow">https://models.dev/</a><p>If you want the full capability, use the API with something like opencode. You will find that a single PR can easily rack up three digits of consumption costs.</p>
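<p>Roughly what that dynamic summarization looks like (a sketch only; the token heuristic and the call_llm helper are placeholders, not Copilot's actual implementation):<p><pre><code>def call_llm(messages):
    ...  # stand-in for whatever chat API the agent uses

def approx_tokens(messages):
    # Crude heuristic: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def compact(history, limit=50_000):
    if approx_tokens(history) > limit:
        old, recent = history[:-10], history[-10:]
        ask = [{"role": "user", "content": "Summarize the conversation so far."}]
        summary = call_llm(old + ask)
        # The summary replaces the old turns: detail is lost ("broken telephone"),
        # but the next request is much cheaper for the provider.
        history = [{"role": "system", "content": summary}] + recent
    return history
</code></pre>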
]]></description><pubDate>Tue, 06 Jan 2026 23:10:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46520179</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=46520179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46520179</guid></item><item><title><![CDATA[New comment by hawtads in "I spent 100 hours researching how to rank in AI answers. Here is the guide"]]></title><description><![CDATA[
<p>Don't forget volume. Just having well-structured content isn't enough; you need large volumes of content, enough to make a statistical difference during the training process.</p>
]]></description><pubDate>Tue, 06 Jan 2026 00:45:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46507324</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=46507324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46507324</guid></item><item><title><![CDATA[New comment by hawtads in "Google is dead. Where do we go now?"]]></title><description><![CDATA[
<p>Google ads are the cheapest, yes, but depending on your audience, they may not be looking on Google anymore.<p>For ChatGPT (and similar) you need a strong FAQ page and lots of content marketing to increase the likelihood of being the suggested answer when a user asks ChatGPT a relevant question (it's a highly probabilistic system; look up AEO/GEO).<p>Cloudflare, for example, offers an option to block AI scraping bots by default. If you are in the services business, this is the <i>opposite</i> of what you want, because having AI crawlers scrape your site would drive traffic down the road when users ask a related question.<p>I would also suggest having accounts with the major chatbot companies, enabling the "allow training on my conversations" option, and then talking to them about your services. Ultimately you just want to get your brand into the training data corpus, and the rest is just basic machine learning statistics.</p>
]]></description><pubDate>Mon, 29 Dec 2025 20:55:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46425551</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=46425551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46425551</guid></item><item><title><![CDATA[New comment by hawtads in "Meta's ads tools started switching out top-performing ads with AI-generated ones"]]></title><description><![CDATA[
<p>It's not just Facebook; the entire ads industry is heading in this direction. There's a seismic change going on right now.</p>
]]></description><pubDate>Mon, 29 Dec 2025 20:12:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46424999</link><dc:creator>hawtads</dc:creator><comments>https://news.ycombinator.com/item?id=46424999</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46424999</guid></item></channel></rss>