<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: apike</title><link>https://news.ycombinator.com/user?id=apike</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 18:19:16 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=apike" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by apike in "A Broken Heart"]]></title><description><![CDATA[
<p>How well agents can do this is mostly proportional to how well they can understand and navigate your codebase broadly.<p>There are various contributing factors to this, but they include clear docs, notes and refactors that clear up parts the agent commonly gets confused by, choosing boring technology (your dependencies are well understood) and access to command-line tools that let it lint + typecheck + test the code. A lot of the scaffolding and wiring necessary are built into Cursor and Claude Code themselves now. Hope that helps!</p>
]]></description><pubDate>Thu, 05 Feb 2026 17:06:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46901846</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=46901846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46901846</guid></item><item><title><![CDATA[New comment by apike in "A Broken Heart"]]></title><description><![CDATA[
<p>This is a great technique!<p>In this case, I had made an overlarge squashed merge that included both the Intercom integration (a suspiciously likely cause of slowness) and the feedback button that added the heart – so I needed to go deeper to figure out the true cause. (Noto Emoji was in the app from before, but wasn't triggered in the dashboard until we added an emoji there.)</p>
]]></description><pubDate>Thu, 05 Feb 2026 16:58:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46901752</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=46901752</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46901752</guid></item><item><title><![CDATA[New comment by apike in "A Broken Heart"]]></title><description><![CDATA[
<p>Not off topic at all!<p>While in this case we’d included the emoji font for displaying user content in another part of the app, the hazard of letting a “simple” approach expand and get out of hand is part of what I wanted to convey in writing this.</p>
]]></description><pubDate>Thu, 05 Feb 2026 15:53:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46901023</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=46901023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46901023</guid></item><item><title><![CDATA[New comment by apike in "Novo Nordisk's Canadian Mistake"]]></title><description><![CDATA[
<p>The FDA (or equivalent in the relevant country) regulates whether an approved drug requires a prescription based on the safety profile. To be approved for OTC, there is a much higher bar in terms of ease of misuse, side effects, and so on.</p>
]]></description><pubDate>Sun, 19 Oct 2025 22:30:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45638630</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=45638630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45638630</guid></item><item><title><![CDATA[New comment by apike in "Vancouver Stock Exchange: Scam capital of the world (1989) [pdf]"]]></title><description><![CDATA[
<p>My experience raising in Vancouver is that there are two angel communities.<p>There is an easier to find Vancouver-centric investor group that behaves as a you describe. Many of these investors didn’t come up as tech founders. I was advised not to waste time with them, so I don't know if there are some gems in the rough there or not.<p>Then there is a quieter group that got their capital from building serious tech businesses. This group spends more time connecting outside Vancouver – Bay Area mostly, but also Toronto and globally. These folks do write early-stage cheques and can be very helpful advisors, but they're not full-time angels who are spending time on deal flow. They're mostly focused on building their next thing, so it’s more difficult to earn their attention.<p>So yes there are questionable actors, but there are also very legit folks doing great work, and it’s possible to go to an event or dinner party that only really has one or the other. Hope that’s helpful to any other founders building in Vancouver!</p>
]]></description><pubDate>Sun, 12 Oct 2025 16:39:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45559547</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=45559547</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45559547</guid></item><item><title><![CDATA[New comment by apike in "Figma Slides Is a Beautiful Disaster"]]></title><description><![CDATA[
<p>Thanks Grey – other than the presenting-at-an-event flow I do really did like the Figma Slides experience, so this is great to hear. The world is better off with a strong Figma.</p>
]]></description><pubDate>Sun, 01 Jun 2025 19:08:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44153074</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=44153074</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44153074</guid></item><item><title><![CDATA[iMCP: An MCP server for your Mac Messages, Contacts, and more]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/loopwork-ai/iMCP">https://github.com/loopwork-ai/iMCP</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43294282">https://news.ycombinator.com/item?id=43294282</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 07 Mar 2025 20:34:02 +0000</pubDate><link>https://github.com/loopwork-ai/iMCP</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=43294282</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43294282</guid></item><item><title><![CDATA[New comment by apike in "Compare OpenAI Models"]]></title><description><![CDATA[
<p>This is interesting, but the values could use some refinement:<p><pre><code>  gpt-4o: speed 3/5
  gpt-4.5-preview: speed 3/5
  gpt-3.5-turbo: speed 2/5
</code></pre>
In practice 3.5 turbo is, what, 5-10x <i>faster</i> than 4.5 preview?</p>
]]></description><pubDate>Wed, 05 Mar 2025 22:55:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43273902</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=43273902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43273902</guid></item><item><title><![CDATA[New comment by apike in "JavaScript Fatigue Strikes Back"]]></title><description><![CDATA[
<p>I mentioned Ruby and Python in this vein, but PHP with Laravel would count in my book too. That said, PHP may be nearing the stage Java has gotten to, where it’s well-understood but is perceived as outdated enough that it causes hiring and retention friction.</p>
]]></description><pubDate>Sat, 01 Mar 2025 23:41:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43225337</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=43225337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43225337</guid></item><item><title><![CDATA[Don't Animate Height]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.granola.ai/blog/dont-animate-height">https://www.granola.ai/blog/dont-animate-height</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42938557">https://news.ycombinator.com/item?id=42938557</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 04 Feb 2025 20:58:21 +0000</pubDate><link>https://www.granola.ai/blog/dont-animate-height</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=42938557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42938557</guid></item><item><title><![CDATA[Once a Month]]></title><description><![CDATA[
<p>Article URL: <a href="https://allenpike.com/2024/once-a-month">https://allenpike.com/2024/once-a-month</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42915281">https://news.ycombinator.com/item?id=42915281</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 03 Feb 2025 05:30:49 +0000</pubDate><link>https://allenpike.com/2024/once-a-month</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=42915281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42915281</guid></item><item><title><![CDATA[New comment by apike in "Narrative Jailbreaking for Fun and Profit"]]></title><description><![CDATA[
<p>While this can be done in principle (it's not a foolproof enough method to, for example, ensure an LLM doesn't leak secrets) it is much harder to fool the supervisor than the generator because:<p>1. You can't get output from the supervisor, other than the binary enforcement action of shutting you down (it can't leak its instructions)<p>2. The supervisor can judge the conversation on the merits of the most recent turns, since it doesn't need to produce a response that respects the full history (you can't lead the supervisor step by step into the wilderness)<p>3. LLMs, like humans, are generally better at judging good output than generating good output</p>
]]></description><pubDate>Mon, 23 Dec 2024 21:48:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=42497879</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=42497879</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42497879</guid></item><item><title><![CDATA[New comment by apike in "LLMs aren't "trained on the internet" anymore"]]></title><description><![CDATA[
<p>Yeah this is an interesting point. Other threads make the point about the "bitter lesson", and how expert-trained ML has historically not scaled, and human-generated LLM training data may just be repeating that dead end. Maybe so.<p>Something that is new this time around, AFAIK, is that we haven’t previously had general ML systems that businesses and consumers are paying billions of dollars a year to use. So if, say, 10% of revenue goes back in to making better data sets every year, I can imagine continued improvement on certain economically valuable use cases – though likely with diminishing returns.</p>
]]></description><pubDate>Sat, 01 Jun 2024 23:43:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=40550100</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=40550100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40550100</guid></item><item><title><![CDATA[New comment by apike in "LLMs aren't "trained on the internet" anymore"]]></title><description><![CDATA[
<p>I think this is a valid criticism. I weighed a few different titles that would fit in my (arbitrary) title length limit, but on reflection the one I chose was too glib.<p>My core point is that the “Trained On the Internet” mental model is becoming less true over time, which makes it a poor model for predicting the long term performance of models. These titles would be better:<p>1. LLMs Aren’t Just “Trained On the Internet” Anymore<p>2. LLMs Aren’t Simply Being “Trained On the Internet”<p>3. Future LLMs Won’t Just Be “Trained On the Internet”<p>I’ve swapped in the first one. Thanks for the feedback.</p>
]]></description><pubDate>Sat, 01 Jun 2024 23:33:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=40550037</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=40550037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40550037</guid></item><item><title><![CDATA[New comment by apike in "LLMs aren't "trained on the internet" anymore"]]></title><description><![CDATA[
<p>> Also "generate some better examples" sounds like fudging data to fit the expected outcome.<p>LLMs are tools. As a tool author, you have certain desired outcomes for certain use cases. If the current data you’re training on isn’t giving you those outcomes, it is absolutely reasonable to "fudge" the data. This might mean reducing bias, or adding bias, or any number of nudges. Training an LLM is not a scientific study, it’s a product development effort.</p>
]]></description><pubDate>Sat, 01 Jun 2024 23:18:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=40549950</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=40549950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40549950</guid></item><item><title><![CDATA[New comment by apike in "Apple explores home robotics as potential 'next big thing'"]]></title><description><![CDATA[
<p><a href="https://archive.is/Yi6p3" rel="nofollow">https://archive.is/Yi6p3</a></p>
]]></description><pubDate>Thu, 04 Apr 2024 20:47:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=39935613</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=39935613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39935613</guid></item><item><title><![CDATA[Apple explores home robotics as potential 'next big thing']]></title><description><![CDATA[
<p>Article URL: <a href="https://www.bloomberg.com/news/articles/2024-04-03/apple-explores-home-robots-after-abandoning-car-efforts">https://www.bloomberg.com/news/articles/2024-04-03/apple-explores-home-robots-after-abandoning-car-efforts</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39935597">https://news.ycombinator.com/item?id=39935597</a></p>
<p>Points: 69</p>
<p># Comments: 155</p>
]]></description><pubDate>Thu, 04 Apr 2024 20:46:10 +0000</pubDate><link>https://www.bloomberg.com/news/articles/2024-04-03/apple-explores-home-robots-after-abandoning-car-efforts</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=39935597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39935597</guid></item><item><title><![CDATA[New comment by apike in "The Curse of Dialup World"]]></title><description><![CDATA[
<p>Yes, good point – I should have written towns, rather than cities. I just pushed a fix, thanks.</p>
]]></description><pubDate>Mon, 02 Oct 2023 18:17:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=37742241</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=37742241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37742241</guid></item><item><title><![CDATA[New comment by apike in "Do something, so we can change it"]]></title><description><![CDATA[
<p>> be sure to do it strongly in one direction or the other, so that you set the direction of exploration, too.<p>This reminds me of a game design technique Blizzard learned to use back in the day when balancing their RTS games. Sometimes they'd make a small change – say increasing damage by 4% – and playtest the result. It seemed, maybe better? So they'd ship it. Some time later, they'd realize they had way undershot, and the ideal increase would have been 25%. They sometimes found themselves buffing or nerfing the same thing over and over, trying to get it right.<p>The approach that worked better for them was to err on the side of first overcorrecting – say, try increasing damage by 40%. This way, in playtesting they could clearly see the effects of having gone too far, then back off the change as appropriate.</p>
]]></description><pubDate>Fri, 04 Aug 2023 20:31:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=37005479</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=37005479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37005479</guid></item><item><title><![CDATA[New comment by apike in "Ask HN: Could you share your personal blog here?"]]></title><description><![CDATA[
<p><a href="https://allenpike.com/" rel="nofollow noreferrer">https://allenpike.com/</a><p>I’ve been writing monthly for 10 years, and otherwise for 20. Topics have ranged from product development to leadership to breakfast cereal selection techniques.</p>
]]></description><pubDate>Tue, 04 Jul 2023 18:42:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=36590937</link><dc:creator>apike</dc:creator><comments>https://news.ycombinator.com/item?id=36590937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36590937</guid></item></channel></rss>