<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: pinfloyd</title><link>https://news.ycombinator.com/user?id=pinfloyd</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 10 May 2026 08:45:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=pinfloyd" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: External admission gate for GitHub Actions before execution]]></title><description><![CDATA[
<p>Built this around one simple idea:<p>the workflow that wants to execute should not also be the place that decides whether execution may continue.<p>This project puts an external allow/deny boundary in front of execution.<p>Public entry points:<p>* live pilot
* commercial request
* private deployment<p>There is also a GitHub Marketplace install surface for the action, but the main point is the boundary model itself: the decision stays outside the workflow that is asking to proceed.<p>Looking for feedback from people working on CI/CD, security controls, approval boundaries, and automated execution.</p>
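<p>To make the boundary model concrete, here is a minimal sketch of what an early workflow step could look like on the workflow side. Everything here is hypothetical (the function name, the response shape with an "allow" field); the actual project's API may differ. The point is only the shape of the control: the step parses a decision produced outside the workflow, and a non-zero exit stops the job before any later step runs.

```python
# Hedged sketch: a workflow step that defers allow/deny to an external
# gate. The response shape ({"allow": true/false}) is an assumption for
# illustration, not the project's documented API.
import json


def admission_step(decision_json: str) -> int:
    """Parse an external gate's response; return the step's exit code.

    A non-zero exit fails the job, so later steps never execute.
    """
    decision = json.loads(decision_json)
    if decision.get("allow") is True:
        return 0  # gate allowed: the job continues
    return 1      # gate denied, or the allow flag is missing: the job stops


if __name__ == "__main__":
    # In a workflow this would run as an early step whose failure
    # blocks the rest of the job.
    raise SystemExit(admission_step('{"allow": false}'))
```

Note the deny-by-default stance: anything other than an explicit allow fails the step, so a missing or ambiguous decision cannot let execution proceed.</p>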
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47811161">https://news.ycombinator.com/item?id=47811161</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 17 Apr 2026 22:14:02 +0000</pubDate><link>https://ai-admissibility.com/</link><dc:creator>pinfloyd</dc:creator><comments>https://news.ycombinator.com/item?id=47811161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47811161</guid></item><item><title><![CDATA[Scanners are too late for AI-driven actions]]></title><description><![CDATA[
<p>Most AI and automation stacks still rely on monitoring, logs, and post-event review.<p>That is not control when the action is irreversible.<p>A scanner can tell you what happened.
A boundary can decide whether it may happen.<p>The core problem I keep running into is this:
once agents can call tools, trigger workflows, move data, spend money, or change state, “observe after the fact” stops being enough.<p>What seems missing is a practical pre-execution decision layer:
an external allow/deny boundary between intent and execution.<p>Questions I’m interested in:<p>* How are people handling this today for agentic workflows in production?
* Are monitoring + approvals actually enough once execution becomes fast and autonomous?
* Where do existing policy engines break down for AI-driven actions?
* What would a real pre-execution control layer need to verify before allowing action?<p>I’m less interested in theory here and more in what people have actually seen fail in production.</p>
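<p>For the fourth question, a toy sketch of what "between intent and execution" means mechanically. The request fields and the blocklist policy are made up for illustration; a real boundary would verify far more (identity, target, blast radius). The structural point is that the gate object lives outside the code path that wants to act, and execution cannot start without its decision.

```python
# Hedged sketch of a pre-execution allow/deny boundary. Field names and
# the blocklist policy are hypothetical illustrations, not a real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class ActionRequest:
    actor: str   # who is asking (e.g. an agent identity)
    action: str  # what it wants to do
    target: str  # what it wants to do it to


class AdmissionGate:
    """Decides allow/deny before an action executes.

    Deliberately separate from the caller: the thing that wants to
    execute is not the thing that decides whether it may.
    """

    def __init__(self, denied_actions: set[str]):
        self.denied_actions = denied_actions

    def decide(self, req: ActionRequest) -> bool:
        # Toy policy: deny anything on the blocklist, allow the rest.
        return req.action not in self.denied_actions


def execute(req: ActionRequest, gate: AdmissionGate) -> str:
    # The boundary sits between intent (req) and execution.
    if not gate.decide(req):
        raise PermissionError(f"denied: {req.action} on {req.target}")
    return f"executed {req.action} on {req.target}"
```

This is where post-event scanning differs in kind: the scanner sees the `execute` call after it ran; the gate's `decide` runs before it, so an irreversible action that is denied simply never happens.</p>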
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47723275">https://news.ycombinator.com/item?id=47723275</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 10 Apr 2026 20:27:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47723275</link><dc:creator>pinfloyd</dc:creator><comments>https://news.ycombinator.com/item?id=47723275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47723275</guid></item></channel></rss>