<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: danenania</title><link>https://news.ycombinator.com/user?id=danenania</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 02:00:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=danenania" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by danenania in "Elon Musk pushes out more xAI founders as AI coding effort falters"]]></title><description><![CDATA[
<p>It seems like that could change the math quite a bit, since you’d presumably be losing a lot of capacity to failures. Failure rates would likely be much higher in space, and component failure is already pretty common on Earth.</p>
]]></description><pubDate>Sun, 15 Mar 2026 04:27:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384361</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47384361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384361</guid></item><item><title><![CDATA[New comment by danenania in "Elon Musk pushes out more xAI founders as AI coding effort falters"]]></title><description><![CDATA[
<p>What about maintenance? I’d naively assume that’s the killer.</p>
]]></description><pubDate>Fri, 13 Mar 2026 23:42:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47371524</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47371524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47371524</guid></item><item><title><![CDATA[New comment by danenania in "AI Agent Hacks McKinsey"]]></title><description><![CDATA[
<p>> I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.<p>These folks have found a bunch: <a href="https://www.promptarmor.com/resources">https://www.promptarmor.com/resources</a><p>But I guess you mean one that has been exploited in the wild?</p>
]]></description><pubDate>Wed, 11 Mar 2026 15:09:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47336648</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47336648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47336648</guid></item><item><title><![CDATA[New comment by danenania in "GPT-5.4"]]></title><description><![CDATA[
<p>tmux makes it easy for terminal-based agents to talk to each other, while also letting you see output and jump into the conversation on either side. It’s a natural fit.</p>
]]></description><pubDate>Sat, 07 Mar 2026 17:17:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47289498</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47289498</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47289498</guid></item><item><title><![CDATA[New comment by danenania in "GPT-5.4"]]></title><description><![CDATA[
<p>Gemini 1.5 Pro actually has a 2M context window!<p>No other model from a major lab has matched it since, afaik.<p>Edit: err, I see in the comment below mine that Grok has 2M as well. Had no idea!</p>
]]></description><pubDate>Fri, 06 Mar 2026 14:27:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47275283</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47275283</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47275283</guid></item><item><title><![CDATA[New comment by danenania in "GPT-5.4"]]></title><description><![CDATA[
<p>I built a tool at work that allows Claude Code and Codex to communicate with each other through tmux, using skills. It works quite well.</p>
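A minimal sketch of the tmux relay idea. The pane names (claude:0, codex:0) and the message are hypothetical, and the tmux commands are printed rather than executed so the sketch stands on its own without a live session; the actual tool's wiring will differ:

```shell
# Sketch of agent-to-agent messaging over tmux. Pane targets are hypothetical.
# Commands are echoed instead of run, so this works without tmux installed.

# Type a message into an agent's pane and press Enter.
send_to_pane() {
  local pane="$1"; shift
  echo "tmux send-keys -t $pane \"$*\" Enter"
}

# Grab an agent pane's visible output so the other agent can read the reply.
read_pane() {
  local pane="$1"
  echo "tmux capture-pane -t $pane -p"
}

send_to_pane claude:0 "codex: the test suite is green, ready for review"
read_pane codex:0
```

Because both agents run in ordinary tmux panes, a human attached to the same session sees every exchange and can type into either pane to join the conversation.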
]]></description><pubDate>Fri, 06 Mar 2026 14:22:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47275234</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47275234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47275234</guid></item><item><title><![CDATA[New comment by danenania in "Nobody gets promoted for simplicity"]]></title><description><![CDATA[
<p>The correct answer is “Postgres would handle it, but if it needed to scale even higher, I’d…”<p>The point of a system design interview is to have a discussion that examines possibilities and tradeoffs.</p>
]]></description><pubDate>Wed, 04 Mar 2026 15:59:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47249404</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47249404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47249404</guid></item><item><title><![CDATA[New comment by danenania in "If AI writes code, should the session be part of the commit?"]]></title><description><![CDATA[
<p>I have a similar process and have thought about committing all the planning files, but I've found that they tend to end up in an outdated state by the time the implementation is done.<p>Better imo is to produce a README or dev-facing doc at the end that distills all the planning and implementation into a final authoritative overview. This is easier for both humans and agents to digest than a bunch of meandering planning files.</p>
]]></description><pubDate>Mon, 02 Mar 2026 16:16:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47219942</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47219942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47219942</guid></item><item><title><![CDATA[New comment by danenania in "Launch HN: Cardboard (YC W26) – Agentic video editor"]]></title><description><![CDATA[
<p>Very cool! A noob question about how models handle video: do you do everything by sending frames as images to the model at some framerate? Are there tricks to avoid what seems like it would be massive token use with this approach?</p>
]]></description><pubDate>Fri, 27 Feb 2026 16:49:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47182678</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47182678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47182678</guid></item><item><title><![CDATA[New comment by danenania in "Why I don't think AGI is imminent"]]></title><description><![CDATA[
<p>I’m very pro AI coding and use it all day long, but I also wouldn’t say “the code it writes is correct”. It will produce all kinds of bugs, vulnerabilities, performance problems, memory leaks, etc. unless carefully guided.</p>
]]></description><pubDate>Mon, 16 Feb 2026 01:48:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47029950</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=47029950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47029950</guid></item><item><title><![CDATA[New comment by danenania in "Why more companies are recognizing the benefits of keeping older employees"]]></title><description><![CDATA[
<p>There are different kinds of tribal knowledge. Some is company-specific, some is role-specific or domain-specific.</p>
]]></description><pubDate>Thu, 05 Feb 2026 05:30:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46896023</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46896023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46896023</guid></item><item><title><![CDATA[New comment by danenania in "How I estimate work"]]></title><description><![CDATA[
<p>The date is just a useful fiction to:<p>- Create urgency<p>- Keep scope creep under control<p>- Prioritize whatever is most valuable and/or can stand on its own<p>If you just say “I don’t know” and have no target, even if that’s more honest, the project is less likely to ever be shipped in any useful form.</p>
]]></description><pubDate>Sat, 24 Jan 2026 21:59:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46748160</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46748160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46748160</guid></item><item><title><![CDATA[New comment by danenania in "Show HN: I built a tool to assist AI agents to know when a PR is good to go"]]></title><description><![CDATA[
<p>I don’t think “ready to merge” necessarily means the agent actually merges. Just that it’s gone as far as it can automatically. It’s up to you whether to review at that point or merge, depending on the project and the stakes.<p>If there are CI failures or obvious issues that another AI can identify, why not have the agent keep going until those are resolved? This tool just makes that process more token efficient. Seems pretty useful to me.</p>
]]></description><pubDate>Sat, 17 Jan 2026 18:09:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46660363</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46660363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46660363</guid></item><item><title><![CDATA[New comment by danenania in "Don't fall into the anti-AI hype"]]></title><description><![CDATA[
<p>Humans make subtle errors all the time too, though. AI results still need to be checked over for anything important, but it's on a trajectory toward being <i>much</i> more reliable than a human for any kind of repetitive task.<p>Currently, if you ask an LLM to do something small and self-contained, like solving leetcode problems or implementing specific algorithms, it will make mistakes in the actual code at a much lower rate than an experienced human engineer. The things it does badly are more about architecture, organization, style, and taste.</p>
]]></description><pubDate>Mon, 12 Jan 2026 05:10:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46584330</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46584330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46584330</guid></item><item><title><![CDATA[New comment by danenania in "Trump says Venezuela’s Maduro captured after strikes"]]></title><description><![CDATA[
<p>I think it’s just realpolitik grand chessboard strategy. Knocking out an unfriendly/uncooperative leader of a strategically important country. That’s always been the real justification for US foreign policy. It’s a game of risk, without moral considerations beyond optics. There isn’t much more to it than that.<p>You can be socialist if you cooperate. You can be a dictator if you cooperate. It’s not about political philosophy or forms of government, just playing ball with the hegemon.</p>
]]></description><pubDate>Sat, 03 Jan 2026 16:14:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46478312</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46478312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46478312</guid></item><item><title><![CDATA[New comment by danenania in "I rebooted my social life"]]></title><description><![CDATA[
<p>Oh man, I relate so hard on the sports conversations.</p>
]]></description><pubDate>Thu, 01 Jan 2026 18:37:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46456721</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46456721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46456721</guid></item><item><title><![CDATA[New comment by danenania in "I rebooted my social life"]]></title><description><![CDATA[
<p>I definitely see your point. I'd just say though that it can put a lot of pressure on the romantic relationship. Some can handle it; others might not. And it also makes it much more difficult to recover if things don't work out.<p>Social life is a bit like SEO. To get the full benefits, you need to have started on it years ago. Trying to do it just-in-time is generally a very frustrating experience. I think there's wisdom in doing casual cultivation when you <i>don't</i> feel you need it. It's like keeping your skills/résumé up-to-date just in case.</p>
]]></description><pubDate>Thu, 01 Jan 2026 18:25:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46456598</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46456598</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46456598</guid></item><item><title><![CDATA[New comment by danenania in "Resistance training load does not determine hypertrophy"]]></title><description><![CDATA[
<p>Going further, you don't even need to count your reps or track how much weight you're lifting. Literally just do any exercise with any weight per muscle group to near failure for 2-5 sets. Rest the muscle groups you targeted for the next 1-3 days, and be consistent every week. Bodyweight, free weights, machines, bands, kettlebells, etc. are all fine. That gets you 80-90% of the benefit with no stress.</p>
]]></description><pubDate>Thu, 01 Jan 2026 18:03:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46456321</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46456321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46456321</guid></item><item><title><![CDATA[New comment by danenania in "Salesforce regrets firing 4000 experienced staff and replacing them with AI"]]></title><description><![CDATA[
<p>That’s because gods are a mythical/supernatural invention. No technology can ever really be omniscient or omnipotent. It will always have limitations.<p>In reality, even an ASI won’t know your intent unless you communicate it clearly and unambiguously.</p>
]]></description><pubDate>Thu, 25 Dec 2025 16:44:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46385465</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46385465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46385465</guid></item><item><title><![CDATA[New comment by danenania in "Building a Security Scanner for LLM Apps"]]></title><description><![CDATA[
<p>Thanks for the comment.<p>- On precision vs. noise: yeah, this is a core challenge. The quick answer is that the scanner tries to be conservative and lean towards not flagging borderline issues. There's a custom guidance field in the config that lets users adjust sensitivity and severity based on domain/preferences.<p>- CI times: on a medium-sized PR (say 10k lines) in a fairly large codebase (say a few hundred thousand lines), it will generally run in 5-15 minutes, in parallel with other CI actions. In our case, we have other actions that already take this long, so it doesn't increase total CI time at all.<p>- Vulnerability types: the post goes into this a bit, but I would look at scanning and red teaming as working together for defense in depth. RAG and tool misuse vulnerabilities are definitely things the scanner can catch. Red teaming is better for vulnerabilities that might not be visible in the code and/or require complex setup state or back-and-forth to successfully attack.</p>
]]></description><pubDate>Wed, 17 Dec 2025 19:06:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46304015</link><dc:creator>danenania</dc:creator><comments>https://news.ycombinator.com/item?id=46304015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46304015</guid></item></channel></rss>