<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ofjcihen</title><link>https://news.ycombinator.com/user?id=ofjcihen</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 06:44:41 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ofjcihen" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>A 0 day is just a vulnerability that wasn’t known before now.<p>What’s the criticality of these? Are they realistically exploitable? En masse? Through a complex and highly contextual set of actions? What’s the impact? Etc etc etc.<p>Yes those numbers are a big change but they’re also not spelling doom for us in the security world until we actually know what they mean.<p>The demonstrated ones that they have on the red team blog are neat, the kernel chain is impressive and fun. But nothing I’m seeing here is as world ending as the presser implies.</p>
]]></description><pubDate>Wed, 08 Apr 2026 20:09:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695632</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47695632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695632</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Hahaha I think we might have the same toothbrush.<p>That makes sense and I like the analogy.</p>
]]></description><pubDate>Wed, 08 Apr 2026 15:27:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47691561</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47691561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691561</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Oh I agree with you on that. But that’s partially why the language in the presser falls flat for me.<p>I mean as an example, while web app pen testing I’ve been proxying all my traffic through it with instructions telling it it’s a senior web app security expert looking over my shoulder and asking it to find vulnerabilities. It’s already great at that.<p>I’ve even told it to do recon and run pen tests on lists of subdomains (please for the love of god have the right harnesses and guardrails before you do this) and woken up to paid findings before.<p>So like I’m in a weird place where this was already happening and Mythos is being sold like it wasn’t good before?<p>End ramble :/</p>
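The proxy-and-review loop described above can be sketched in miniature like this. This is a toy illustration, not the commenter's actual setup: `Exchange` and `ask_model` are hypothetical stand-ins for captured proxy traffic and whatever LLM call you actually use.

```python
# Toy sketch: hand each captured HTTP request/response pair to a reviewer
# framed as a senior web-app security expert. `ask_model` is a hypothetical
# stand-in for a real LLM API call (here it would be injected by the caller).

from dataclasses import dataclass
from typing import Callable

@dataclass
class Exchange:
    request: str   # raw HTTP request text
    response: str  # raw HTTP response text

REVIEW_PROMPT = (
    "You are a senior web application security expert looking over my "
    "shoulder. Review this HTTP exchange and list likely vulnerabilities, "
    "or reply 'none'.\n\nREQUEST:\n{req}\n\nRESPONSE:\n{resp}"
)

def review_traffic(exchanges: list[Exchange],
                   ask_model: Callable[[str], str]) -> list[str]:
    """Run every proxied exchange through the model and collect findings."""
    findings = []
    for ex in exchanges:
        answer = ask_model(REVIEW_PROMPT.format(req=ex.request, resp=ex.response))
        if answer.strip().lower() != "none":
            findings.append(answer.strip())
    return findings
```

In a real setup the exchanges would come from an intercepting proxy and `ask_model` would need the harnesses and guardrails the comment warns about.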
]]></description><pubDate>Wed, 08 Apr 2026 15:26:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47691540</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47691540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691540</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Don’t forget, it also depends on the complexity of the work and the experience of the operator.<p>The less complex the work and the less experienced the operator, the more perceived “wow” factor :)<p>There’s definitely an aspect of how you use it though. In my work it’s mostly been chaining to reduce non-determinism.</p>
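One simple reading of “chaining to reduce non-determinism” (my interpretation, not necessarily the commenter’s actual pipeline) is to run the same check several times and keep only findings the model reports consistently. `run_check` below is a hypothetical model call returning a set of finding identifiers.

```python
# Sketch of one de-noising tactic for non-deterministic model output:
# repeat the same check N times and keep only findings that appear in a
# strict majority of runs.

from collections import Counter
from typing import Callable, Iterable

def consensus_findings(run_check: Callable[[], Iterable[str]],
                       runs: int = 5) -> set[str]:
    """Keep findings reported in more than half of the repeated runs."""
    counts = Counter()
    for _ in range(runs):
        counts.update(set(run_check()))
    return {finding for finding, c in counts.items() if c > runs / 2}
```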
]]></description><pubDate>Wed, 08 Apr 2026 15:19:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47691436</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47691436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691436</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>I mean if it helps, I support the move to not release Mythos right off the bat, yeah? That makes sense, treat new models like new vulnerabilities and give companies time to scan with them etc.<p>But you have to admit it does serve a savvy business purpose of creating a moat where there wasn’t one by getting these tech companies on board, and the threat does make for good marketing yeah?</p>
]]></description><pubDate>Wed, 08 Apr 2026 07:09:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686423</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47686423</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686423</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Homie chill. I use Opus every day and I love it. I’m not saying it’s all hype, just that these companies are here to make money and that every advertisement should be taken with a grain of salt yeah?<p>Also maybe consider what this kind of visceral reaction indicates on a personal level :/</p>
]]></description><pubDate>Wed, 08 Apr 2026 06:50:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686277</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47686277</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686277</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>That feels like a very complex way of looking at it. Another way would be to say “potentially profit seeking companies have an incentive to oversell products even if they’re good”.</p>
]]></description><pubDate>Wed, 08 Apr 2026 06:36:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686198</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47686198</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686198</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>I mean, yes? And my point is that this isn’t exactly a new capability. Sure it’s probably better, but we’ve been able to do this. They didn’t just suddenly “turn on the security”. LLMs have excelled at code since being widely released. I have no idea why that’s news, and the fact that they’re treating it as such makes it seem like hype.</p>
]]></description><pubDate>Wed, 08 Apr 2026 06:30:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686152</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47686152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686152</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>I mean I work in this world and overhype is constant.<p>Additionally those numbers are somewhat meaningless without more context.</p>
]]></description><pubDate>Wed, 08 Apr 2026 06:21:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686087</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47686087</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686087</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>> I think you are a bit dishonest about how objectively you are measuring<p>As someone who has made a sizable amount of money in security research while using Claude you might be right but not in the way you think.</p>
]]></description><pubDate>Wed, 08 Apr 2026 05:50:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685870</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685870</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>I mean yeah. I’ve had these successes without scaffolding or really anything past Claude CLI and a small prompt as well?</p>
]]></description><pubDate>Wed, 08 Apr 2026 05:47:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685836</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685836</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Could this potentially be because more researchers are becoming accustomed to the tools/adding them to their pipelines?<p>The reason I ask is because I’ve been using them to snag bounties to great effect for quite a while, and while other models have of course improved, they’ve been useful for this kind of work before now.</p>
]]></description><pubDate>Wed, 08 Apr 2026 05:38:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685780</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685780</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>I’m in the same boat as you. I believe the model is an improvement of course, but I’ve been successfully bug finding, 0 day hunting, and red teaming with models for the last two years, and while that’s impressive I have a feeling that this doomsaying/overhype is mostly marketing that’s being amplified by non-security folks.</p>
]]></description><pubDate>Wed, 08 Apr 2026 05:34:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685742</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685742</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>This was true before LLMs though.<p>I mean even going back to Sasser.</p>
]]></description><pubDate>Wed, 08 Apr 2026 05:21:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685631</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685631</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685631</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>With the right prompting (mostly creating a narrative that justifies the subject matter as okay to perform) other models have already been doing this for me though. That’s another confusing bit for me about how this is portrayed and I refuse to believe I’m a revolutionary user right?<p>I mean I’m sitting on $10k worth of bug payouts right now partially because that was already a thing.</p>
]]></description><pubDate>Wed, 08 Apr 2026 05:02:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685505</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685505</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Are you saying that LLMs can use fuzzers, or are you saying that they work like fuzzers? Because one of those is less…deterministic than the other.<p>Regardless and in the spirit of my original response my answer would be to give the LLM access to a fuzzer (plus other tools etc) but also have fuzzers in the pipeline. Partially because that increases the determinism in the mix and partially because why not? Layering is almost always better than not.<p>But again more than anything I’m focusing on the accusations of cope. People SHOULD have measured reactions to claims about any product. People SHOULD be asking questions like this. I know that the LLM debate is often “spicy” but man let’s just try to lower the temperature a bit yeah?</p>
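The layering idea above can be illustrated with a toy harness: a deterministic layer (here a tiny byte-flip fuzzer) finds crashing inputs first, then each crash is handed off for model triage. `triage_with_model` is a hypothetical LLM call, injected by the caller; neither layer replaces the other.

```python
# Toy illustration of layering a deterministic fuzzer with model triage.
# `triage_with_model` is a hypothetical stand-in for an LLM call.

from typing import Callable

def byte_flip_fuzz(target: Callable[[bytes], None],
                   seed: bytes) -> list[bytes]:
    """Deterministically flip each bit of the seed and record inputs
    that make the target raise."""
    crashes = []
    for i in range(len(seed)):
        for bit in range(8):
            mutated = bytearray(seed)
            mutated[i] ^= 1 << bit
            try:
                target(bytes(mutated))
            except Exception:
                crashes.append(bytes(mutated))
    return crashes

def pipeline(target: Callable[[bytes], None], seed: bytes,
             triage_with_model: Callable[[bytes], str]):
    """Layer 1: deterministic fuzzing. Layer 2: triage each crash."""
    return [(c, triage_with_model(c)) for c in byte_flip_fuzz(target, seed)]
```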
]]></description><pubDate>Wed, 08 Apr 2026 04:41:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685339</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685339</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>I’m sure the new model is a step above the old one but I can’t be the only person who’s getting tired of hearing about how every new iteration is going to spell doom/be a paradigm shift/change the entire tech industry etc.<p>I would honestly go so far as to say the overhype is detrimental to actual measured adoption.</p>
]]></description><pubDate>Wed, 08 Apr 2026 04:33:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685285</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685285</guid></item><item><title><![CDATA[New comment by ofjcihen in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Good lord, why such a virulent response to something it seems like we should be considering?<p>As someone in cybersecurity for 10+ years my immediate assumption is why not both? I don’t think considering that they could both have their uses is “cope”.</p>
]]></description><pubDate>Wed, 08 Apr 2026 04:30:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685260</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=47685260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685260</guid></item><item><title><![CDATA[New comment by ofjcihen in "OpenAI eats jobs, then offers to help you find a new one at Walmart"]]></title><description><![CDATA[
<p>Salesforce’s CEO needs to consider replacing them with security product architects so they can figure out a way to send me logs that aren’t crap</p>
]]></description><pubDate>Fri, 05 Sep 2025 13:10:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45138129</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=45138129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45138129</guid></item><item><title><![CDATA[New comment by ofjcihen in "A high schooler writes about AI tools in the classroom"]]></title><description><![CDATA[
<p>I think your assumption is where this falls apart. To be clear, your assumption about where time is spent and how there can only be 2 outcomes.</p>
]]></description><pubDate>Fri, 05 Sep 2025 13:06:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45138089</link><dc:creator>ofjcihen</dc:creator><comments>https://news.ycombinator.com/item?id=45138089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45138089</guid></item></channel></rss>