<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: deathmonger5000</title><link>https://news.ycombinator.com/user?id=deathmonger5000</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 10:33:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=deathmonger5000" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2"]]></title><description><![CDATA[
<p>I see what you're saying. Yes, what you described sounds like a much better approach in terms of being terminal agnostic. It would be awesome to have a tool like iterm-mcp that supports any terminal, any OS, etc. iterm-mcp is limited specifically to iTerm.</p>
]]></description><pubDate>Thu, 30 Jan 2025 23:50:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=42883434</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42883434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42883434</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2"]]></title><description><![CDATA[
<p>It sounds like iterm-mcp isn’t a tool that would fit in your organization. I’m totally not trying to change that or sell you anything.<p>I’m curious about your thoughts on Cursor, Windsurf, etc. Those are IDEs that give the model limited access to the terminal. Where do those tools and their AI features (terminal access specifically) fall in an org like yours? Are they disallowed because of the terminal access, or are the limitations of those tools safe enough?</p>
]]></description><pubDate>Thu, 30 Jan 2025 23:08:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=42883152</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42883152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42883152</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2"]]></title><description><![CDATA[
<p>I think you might mean using the tail command? See my other comments about not wanting to change the user’s workflow. I don’t want to get between the user and their commands in any way. That’s what drove my design decisions.<p>Would you mind elaborating if I misunderstood what you meant?</p>
]]></description><pubDate>Thu, 30 Jan 2025 23:02:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=42883111</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42883111</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42883111</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2"]]></title><description><![CDATA[
<p>> I just can’t imagine giving AI any rights to actually run commands without oversight<p>We’re 100% on the same page here. No one should ask Claude (or any model) to do something using their terminal and then just walk away. I hope that’s clear from the safety section of what I posted (and in the project README).<p>Claude REALLY wants to help, and it will go on a journey to the end of the earth to accomplish your task. If you delegate tasks to this tool then you’re going to have to babysit it.</p>
]]></description><pubDate>Thu, 30 Jan 2025 22:59:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=42883087</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42883087</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42883087</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2"]]></title><description><![CDATA[
<p>I wonder how much of what iTerm knows about the current terminal state is exposed. I got it working "good enough for me" and moved on to other pieces of the puzzle. I'm sure what I did could be improved upon quite a bit. I bet you're right that there's additional juice to squeeze out of whatever iTerm exposes.</p>
]]></description><pubDate>Thu, 30 Jan 2025 20:48:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42882013</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42882013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42882013</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2"]]></title><description><![CDATA[
<p>> Why would you favor this approach over say, a command line tool that can pipe input into and out of a configurable AI backend, fork subprocesses [...]<p>I think what you're describing is something built to perform agent-based tasks. iterm-mcp isn't intended to be that. It's intended to be a bridge from something like Claude Desktop to iTerm. The REPL use case is key to understand here.<p>What you're describing is great if you want to delegate "install python on my system", for example, but it doesn't support the REPL use case, where you want to work with the REPL through something like Claude Desktop.<p>The other key use case iterm-mcp addresses is asking questions about what's sitting in the terminal right now. For example, you ran `brew install ffmpeg` and something didn't work: you can ask Claude about it using iterm-mcp.<p>> This tool seems like it locks you into iTerm2.<p>This tool is intended for use with iTerm2. It's not that it "locks you into iTerm2": iterm-mcp is something you'd choose to use if you already use iTerm2.</p>
]]></description><pubDate>Thu, 30 Jan 2025 20:17:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42881753</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42881753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42881753</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2"]]></title><description><![CDATA[
<p>Hi, thanks for your comment! I haven't explored that approach. Can you say more? Will using the approach that you suggested support interactive CLI utilities like a REPL? Those are use cases that I definitely want to support with this project.</p>
]]></description><pubDate>Thu, 30 Jan 2025 19:24:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=42881199</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42881199</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42881199</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2"]]></title><description><![CDATA[
<p>Hi, thanks for the comment! wcgw looks really cool - nice job with it!<p>> I wonder if there's really a need for separate write to terminal and read output functions? I was hoping that write command itself would execute and return the output of the command, saving back and forth latency.<p>I traded back-and-forth latency for lower token use. I didn't want to return gobs of output from `brew install ffmpeg` when the model really only needs the last line of output to know what to do next.<p>> The way I solved it is by setting a special PS1 prompt. So as soon as I get that prompt I know the task is done. I wonder if a similar thing can be done in your mcp?<p>What you suggested with changing the prompt is a good idea, but it breaks down in certain scenarios, particularly if the user is in a REPL. Part of my goal is to avoid modifying the shell prompt or introducing visual indicators for the AI, because I don't want the user to have to work around the AI. I want the AI to help as requested, as if it's sitting at your keyboard. I don't want to introduce friction or really any unwanted change to the user's workflow at all.<p>It's important to me that this works with REPLs and other interactive CLI utilities. If that weren't a design concern, then I'd definitely explore the approach you suggested.</p>
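<p>To make the trade-off concrete, here's a minimal sketch of the pull-based split (the class and method names are made up for illustration; this isn't iterm-mcp's actual API):

```typescript
// Illustrative sketch of separating "write" from "read": the write call
// reports only a line count, and the model pulls just the tail it wants.
class TerminalBuffer {
  private lines: string[] = [];

  // Append command output; report only how many lines were produced.
  write(output: string): number {
    const newLines = output.split("\n").filter((l) => l.length > 0);
    this.lines.push(...newLines);
    return newLines.length;
  }

  // Let the model retrieve just the last `count` lines it finds relevant.
  readTail(count: number): string[] {
    return this.lines.slice(-count);
  }
}

const buf = new TerminalBuffer();
const produced = buf.write("Fetching ffmpeg...\nCompiling...\nffmpeg installed");
console.log(produced);        // → 3
console.log(buf.readTail(1)); // → [ 'ffmpeg installed' ]
```

<p>The extra round trip buys a large reduction in tokens for chatty commands, since the model usually only needs the final status line.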
]]></description><pubDate>Thu, 30 Jan 2025 19:18:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=42881128</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42881128</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42881128</guid></item><item><title><![CDATA[Show HN: Iterm-Mcp – AI Terminal/REPL Control for iTerm2]]></title><description><![CDATA[
<p>Hi HN! Ever wish you could just point your AI assistant at your terminal and say "what's wrong with this output?" That's why I built iterm-mcp. It lets MCP clients like Claude Desktop interact directly with your iTerm2 terminal: reading logs, running commands, using REPLs, and helping debug issues. Want to explore data or debug using a REPL? The AI can start the REPL, run commands, and help interpret the results.<p>This is an MCP server that integrates with Claude Desktop, LibreChat, and other Model Context Protocol compatible clients.<p><a href="https://github.com/ferrislucas/iterm-mcp">https://github.com/ferrislucas/iterm-mcp</a><p><i>Note: Independent project, not officially affiliated with iTerm2</i><p>## Features<p>*Efficient Token Use:* iterm-mcp gives the model the ability to inspect only the output it's interested in. The model typically only wants to see the last few lines of output, even for long-running commands.<p>*Natural Integration:* You share iTerm with the model. You can ask questions about what's on the screen, or delegate a task to the model and watch as it performs each step.<p>*Full Terminal Control and REPL Support:* The model can start and interact with REPLs, and it can send control characters like ctrl-c, ctrl-z, etc.<p>*Easy on the Dependencies:* iterm-mcp is built with minimal dependencies and is runnable via npx. It's designed to be easy to add to Claude Desktop and other MCP clients. It should just work.<p>## Real-World Example: Debugging Sidekiq Jobs<p>I needed to debug a Sidekiq job with complex arguments that were partially obfuscated in the logs. I asked Claude: "open rails console, show me arguments for the latest XYZ job". The model:<p>1. Launched Rails console
2. Retrieved job details
3. Displayed the arguments that I was looking for<p>## Architectural Journey<p>This project had a couple of interesting constraints around command execution:<p>### 1. Token Efficiency Challenge<p>I wanted to constrain token use as much as possible. I didn't want to send the entire output of a long-running command to the model, but there's not a great way to know which parts of the output matter to what the model is doing. Sampling could be used here, but it's not well supported yet.<p>*Solution:* I arrived at a pull-based design. The command from the model is sent to the terminal, and the model is told how many lines of output were generated. The model can then choose to retrieve as many lines of the buffer as it thinks are relevant.<p>### 2. Long-Running Process Support<p>I wanted to support long-running processes. It turns out that when you run `brew install ffmpeg`, it takes a while, and it's not always clear when the job is done. In early proofs of concept, the model would assume the command had completed successfully and begin sending additional commands to the terminal before the first command had finished.<p>*Solution:* iTerm provides a way to ask if the terminal is waiting for user input, but I found that it tended to show false positives in certain situations. For example, a long-running command would result in iTerm reporting that the terminal was waiting for input when the command was in fact still running. I found that inspecting the processes associated with the terminal and waiting until the most interesting of those processes settles to low resource usage is a fair indicator that a long-running command is ready for input.<p>## Requirements<p>* iTerm2 must be running<p>* Node version 18 or greater<p>## Safety Considerations<p>* The user is responsible for using the tool safely.<p>* No built-in restrictions: iterm-mcp makes no attempt to evaluate the safety of the commands that are executed.<p>* Models can behave in unexpected ways.
The user is expected to monitor activity and abort when appropriate.<p>* For multi-step tasks, you may need to interrupt the model if it goes off track. Start with smaller, focused tasks until you're familiar with how the model behaves.</p>
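<p>As a rough illustration of the process-settling heuristic described in the architecture section (the function name, threshold, and window size here are hypothetical, not the project's actual implementation), the decision could be sketched like this:

```typescript
// Hypothetical sketch: treat a long-running command as "ready for input"
// once the foreground process's recent CPU samples all settle below a
// threshold. Names and numbers are illustrative only.
function hasSettled(cpuSamples: number[], threshold = 1.0, window = 3): boolean {
  // Not enough samples yet to make a call.
  if (cpuSamples.length < window) return false;
  // Look only at the most recent `window` samples.
  const recent = cpuSamples.slice(-window);
  return recent.every((pct) => pct < threshold);
}

// While `brew install ffmpeg` is still working, CPU stays high: not settled.
console.log(hasSettled([85.2, 60.1, 45.0])); // → false
// After the install finishes, samples drop near zero: settled.
console.log(hasSettled([85.2, 3.0, 0.4, 0.2, 0.1])); // → true
```

<p>In practice the samples would come from polling the processes attached to the terminal session; the point is simply that sustained low resource usage, rather than iTerm's wants-input flag alone, signals that the command has likely finished.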
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42880449">https://news.ycombinator.com/item?id=42880449</a></p>
<p>Points: 43</p>
<p># Comments: 21</p>
]]></description><pubDate>Thu, 30 Jan 2025 18:14:46 +0000</pubDate><link>https://github.com/ferrislucas/iterm-mcp</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42880449</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42880449</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Draw.Audio – A musical sketchpad using the Web Audio API"]]></title><description><![CDATA[
<p>Really nice work! Thanks for sharing this. Odd time signatures would be a nice addition. Thanks again!</p>
]]></description><pubDate>Fri, 08 Nov 2024 14:46:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=42087224</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=42087224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42087224</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Show HN: Agent.exe, a cross-platform app to let 3.5 Sonnet control your machine"]]></title><description><![CDATA[
<p>Can you point out where telemetry or other spying can be found in this codebase?</p>
]]></description><pubDate>Wed, 23 Oct 2024 17:42:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=41927416</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=41927416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41927416</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Ask HN: What are you working on (August 2024)?"]]></title><description><![CDATA[
<p>I’m working on a tool called Together Gift It, which makes managing group gift events like Christmas easier. I’m making it because my family was sending an insane amount of group texts during the holidays, and it was getting ridiculous. My favorite one was when someone included the gift recipient in the group text about what gift we were getting for them.<p>Together Gift It solves the problem just the way you’d think: with AI.<p>Just kidding. It solves the problem by keeping everything in one place. No more group texts! You can have private or shared gift lists, and there are some AI features like gift idea collaboration and product search. But the AI stuff is still a work in progress.<p>I’m grateful for any constructive feedback.<p><a href="https://www.togethergiftit.com/" rel="nofollow">https://www.togethergiftit.com/</a></p>
]]></description><pubDate>Sun, 25 Aug 2024 00:00:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=41343001</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=41343001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41343001</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Language models on the command line"]]></title><description><![CDATA[
<p>I created a CLI tool called Promptr, an open-source developer tool that lets you modify a codebase using plain language. The tool sends the user’s query along with the relevant source code to an LLM, and the changes from the LLM are applied directly to the user’s filesystem, eliminating the need for copy-pasting. Promptr is implemented in JavaScript, and it incorporates liquidjs templating so users can build a library of reusable prompt templates for common tasks and contexts.<p>You can find out more here: <a href="https://github.com/ferrislucas/promptr">https://github.com/ferrislucas/promptr</a></p>
]]></description><pubDate>Tue, 25 Jun 2024 13:25:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=40788334</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=40788334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40788334</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Ask HN: What is the most useless project you have worked on?"]]></title><description><![CDATA[
<p>I created a tool called Together Gift It because my family was sending an insane amount of gift-related group texts during the holidays. My favorite one was when someone included the gift recipient in the group text about what gift we were getting for them.<p>Together Gift It solves the problem the way you’d think: with AI. Just kidding. It solves the problem by keeping everything in one place. No more group texts. There are wish lists and everything you’d want around that type of thing. There is also AI.<p><a href="https://www.togethergiftit.com/" rel="nofollow">https://www.togethergiftit.com/</a></p>
]]></description><pubDate>Sun, 07 Apr 2024 14:50:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=39961150</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=39961150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39961150</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?"]]></title><description><![CDATA[
<p>Sorry wrong thread :(</p>
]]></description><pubDate>Sun, 07 Apr 2024 03:03:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=39957809</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=39957809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39957809</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?"]]></title><description><![CDATA[
<p>I created a tool called Together Gift It because my family was sending an insane amount of gift-related group texts during the holidays. My favorite one was when someone included the gift recipient in the group text about what gift we were getting for them.<p>Together Gift It solves the problem the way you’d think: with AI. Just kidding. It solves the problem by keeping everything in one place. No more group texts. There are wish lists and everything you’d want around that type of thing. There is also AI.<p><a href="https://www.togethergiftit.com/" rel="nofollow">https://www.togethergiftit.com/</a></p>
]]></description><pubDate>Sat, 06 Apr 2024 23:44:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=39956750</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=39956750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39956750</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Ask HN: What are some actual use cases of AI Agents right now?"]]></title><description><![CDATA[
<p>Here's the fork of Open Interpreter that I was experimenting with: <a href="https://github.com/ferrislucas/open-interpreter/pull/1/files">https://github.com/ferrislucas/open-interpreter/pull/1/files</a><p>The system prompt that adds the Promptr CLI tool is here: <a href="https://github.com/ferrislucas/open-interpreter/pull/1/files#diff-84e60b3b4939012c48b4d4a4fe07cf56fe76abe8edddc92063565097c75957cbR2">https://github.com/ferrislucas/open-interpreter/pull/1/files...</a></p>
]]></description><pubDate>Fri, 16 Feb 2024 03:55:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=39393008</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=39393008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39393008</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Ask HN: What are some actual use cases of AI Agents right now?"]]></title><description><![CDATA[
<p>OI fixed the factories and config by attempting to run the tests. The test run would fail because there was no test suite configured, so OI inspected the Gemfile using `cat`. Then it used Promptr with a prompt like "add the rspec gem to the Gemfile". Then OI tried again and again, addressing each error as it was encountered, until the test suite was up and running.<p>For generating unit tests with Promptr, I have an "include" file that I include from every prompt. The "include" file is specific to the project that I'm using Promptr in. It says something like "This is a rails 7 app that serves as an API for an SPA front end. Use rspec for tests. etc. etc."<p>Somewhere in that "include" file there's a summary of the main entities of the codebase, so that every request has a general understanding of the main concepts the codebase deals with. For the rspec tests it generated, I pulled the relevant files into the prompt by mentioning their paths in the prompt I give to Promptr.<p>For example, if a test is for the Book model, then I mention book.rb in the prompt. Perhaps Book uses some services in app/services; if that's relevant for the task, then I'll include a glob of files using a command line argument, something like `promptr -p prompt.liquid app/services/book*.rb`, where prompt.liquid has my prompt mentioning book.rb.<p>You have to know what to include in the prompts, and don't be shy about stuffing them full of files. It works until it doesn't, but I've been surprised at how well it works in a lot of cases.</p>
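<p>As a hedged sketch of the pattern above (the file names and contents are hypothetical, not from my actual project), a prompt.liquid might look something like:

```liquid
{% comment %} prompt.liquid: task prompt that pulls in shared project context {% endcomment %}
{% include 'project-context' %}

Write rspec tests for the Book model in app/models/book.rb.
Cover the happy path for creating and updating a Book.
```

<p>Here project-context.liquid would hold the reusable blurb ("This is a rails 7 app that serves as an API for an SPA front end. Use rspec for tests...") plus the entity summary, and any relevant service files get passed on the command line.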
]]></description><pubDate>Wed, 14 Feb 2024 21:58:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=39376129</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=39376129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39376129</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Ask HN: What are some actual use cases of AI Agents right now?"]]></title><description><![CDATA[
<p>I think I still have the prompts, but not on my work machine. I'll look tonight and edit this comment with whatever I can find.<p>I actually forked OI and baked in a prompt that was something like "Promptr is a CLI etc. etc., give Promptr conceptual instructions to make codebase and configuration changes". I think I put this in the system message that OI uses on every request to the OpenAI API.<p>Once I had OI using Promptr, I worked on a prompt for OI that was something like "create a test suite for the Rails app in ~/rails-app - use rspec, use this or that dependency, etc.".<p>Thanks for your interest! I'll try to add more details later.</p>
]]></description><pubDate>Wed, 14 Feb 2024 21:03:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=39375297</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=39375297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39375297</guid></item><item><title><![CDATA[New comment by deathmonger5000 in "Ask HN: What are some actual use cases of AI Agents right now?"]]></title><description><![CDATA[
<p>I taught <a href="https://github.com/KillianLucas/open-interpreter">https://github.com/KillianLucas/open-interpreter</a> how to use <a href="https://github.com/ferrislucas/promptr">https://github.com/ferrislucas/promptr</a><p>Then I asked it to add a test suite to a Rails side project. It created missing factories, corrected a broken test database configuration, and wrote tests for the classes and controllers I asked it to cover.<p>I didn't have to get involved in the mundane details. I did have to intervene here and there, but not much. The tests aren't the best in the world, but IMO they add value by at least covering the happy path. They're not as good as what an experienced person would write.<p>I did spend a non-trivial amount of time fiddling with the prompts I used to teach OI about Promptr, as well as the prompts I used to get it to successfully create the test suite.<p>The total cost was around $11 using GPT-4 Turbo.<p>It was a fun experiment, and I think in the future this type of tooling will be ubiquitous.</p>
]]></description><pubDate>Wed, 14 Feb 2024 20:10:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=39374615</link><dc:creator>deathmonger5000</dc:creator><comments>https://news.ycombinator.com/item?id=39374615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39374615</guid></item></channel></rss>