<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: marcfisc</title><link>https://news.ycombinator.com/user?id=marcfisc</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 06:37:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=marcfisc" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by marcfisc in "I'm helping my dog vibe code games"]]></title><description><![CDATA[
<p>Amazing story and work!<p>You mentioned Claude not being able to see the games. What I really like for this is the Claude Code Chrome extension: you can easily have Godot export a web build, then let Claude debug the game interactively in the browser.</p>
]]></description><pubDate>Wed, 25 Feb 2026 07:27:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47148483</link><dc:creator>marcfisc</dc:creator><comments>https://news.ycombinator.com/item?id=47148483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47148483</guid></item><item><title><![CDATA[New comment by marcfisc in "GitHub MCP exploited: Accessing private repositories via MCP"]]></title><description><![CDATA[
<p>Sadly, these ideas have been explored before, e.g.: <a href="https://simonwillison.net/2022/Sep/17/prompt-injection-more-ai/" rel="nofollow">https://simonwillison.net/2022/Sep/17/prompt-injection-more-...</a><p>Also, OpenAI has proposed ways of training LLMs to trust tool outputs less than User instructions (<a href="https://arxiv.org/pdf/2404.13208" rel="nofollow">https://arxiv.org/pdf/2404.13208</a>). That also doesn't work against these attacks.</p>
]]></description><pubDate>Mon, 26 May 2025 20:37:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44101432</link><dc:creator>marcfisc</dc:creator><comments>https://news.ycombinator.com/item?id=44101432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44101432</guid></item><item><title><![CDATA[New comment by marcfisc in "Show HN: MCP-Shield – Detect security issues in MCP servers"]]></title><description><![CDATA[
<p>Cool work! Thanks for citing our (InvariantLabs) blog posts!
I really like the identify-as feature!<p>We recently launched a similar tool ourselves, called mcp-scan: <a href="https://github.com/invariantlabs-ai/mcp-scan">https://github.com/invariantlabs-ai/mcp-scan</a></p>
]]></description><pubDate>Tue, 15 Apr 2025 08:26:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=43690315</link><dc:creator>marcfisc</dc:creator><comments>https://news.ycombinator.com/item?id=43690315</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43690315</guid></item><item><title><![CDATA[MCP Security Notification: Tool Poisoning Attacks]]></title><description><![CDATA[
<p>Article URL: <a href="https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks">https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43547127">https://news.ycombinator.com/item?id=43547127</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 01 Apr 2025 14:23:22 +0000</pubDate><link>https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks</link><dc:creator>marcfisc</dc:creator><comments>https://news.ycombinator.com/item?id=43547127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43547127</guid></item><item><title><![CDATA[Show HN: Agent Benchmark Repository and Viewer]]></title><description><![CDATA[
<p>We have created a public registry of AI agent benchmarks and agent runtime traces to help everyone better understand how AI agents work and fail these days.<p>Our team and many agent builders we talked to wanted a better way of viewing what agents in these benchmarks do, e.g., how a particular coding agent approaches SWE-bench (<a href="https://www.swebench.com/" rel="nofollow">https://www.swebench.com/</a>). Right now, this is difficult for two reasons: benchmark traces are scattered across many different websites, and they are often huge raw JSON dumps of the agent's activity in formats that are hard to read.
To alleviate this, we built this repository, where it is easy to see what individual agents do and how they solve tasks (or fail to).<p>We hope that putting these agent traces in one place makes it easier to understand and advance AI agent development in both industry and academia. We are happy to add more benchmarks and agents – let us know if you have something in mind.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42246135">https://news.ycombinator.com/item?id=42246135</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 26 Nov 2024 14:37:47 +0000</pubDate><link>https://explorer.invariantlabs.ai/benchmarks/</link><dc:creator>marcfisc</dc:creator><comments>https://news.ycombinator.com/item?id=42246135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42246135</guid></item><item><title><![CDATA[New comment by marcfisc in "LMQL is a programming language for language model interaction"]]></title><description><![CDATA[
<p>Author here. Happy to answer any questions you have.</p>
]]></description><pubDate>Sat, 08 Apr 2023 07:11:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=35491250</link><dc:creator>marcfisc</dc:creator><comments>https://news.ycombinator.com/item?id=35491250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35491250</guid></item></channel></rss>