<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ghm2199</title><link>https://news.ycombinator.com/user?id=ghm2199</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 09:23:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ghm2199" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ghm2199 in "Google broke its promise to me – now ICE has my data"]]></title><description><![CDATA[
<p>Nice. I want to do the same. What process/workflow did you use to migrate all the websites you had given your email address to over to your Proton email? I'm guessing it will take several years, but I would like to start moving off my Gmail.</p>
]]></description><pubDate>Wed, 15 Apr 2026 21:36:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47785602</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47785602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47785602</guid></item><item><title><![CDATA[New comment by ghm2199 in "Thomson Reuters Fired Worker for Speaking Out About ICE, Former Employee Says"]]></title><description><![CDATA[
<p>I would rank it quite close to the worst tech company to work for. I worked for a while as a contract worker in their Eikon/Elektron product division (the Bloomberg equivalent).<p>They have "everything": silos, fiefdoms, a systems team that operated as if the DevOps model never happened, a CTO hired around 2015/2016 with no experience in tech whatsoever, and a board that instituted a cost-cutting policy by, for example (but not limited to), outsourcing every job possible with little to no regard for engineering quality. And this was well before AI was a thing.<p>This left staff engineers with no bargaining power to hire engineers to build good systems.<p>If you are considering a job there, I would recommend doing some due diligence.</p>
]]></description><pubDate>Wed, 15 Apr 2026 19:05:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47783690</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47783690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47783690</guid></item><item><title><![CDATA[New comment by ghm2199 in "We have a 99% email reputation, but Gmail disagrees"]]></title><description><![CDATA[
<p>If you did that, then you are better off never being targeted by that company's emails/messages at all. It is in the company's interest to know that. Right now it's a tedious unsubscribe process that I have to keep repeating, and a company that doesn't know just blasts everyone who signed up. It's a ridiculous state of affairs.</p>
]]></description><pubDate>Mon, 13 Apr 2026 12:04:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750809</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47750809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750809</guid></item><item><title><![CDATA[New comment by ghm2199 in "Show HN: We built a camera only robot vacuum for less than $300 (well almost)"]]></title><description><![CDATA[
<p>To get an idea of it, I would find something similar to <a href="https://docs.nvidia.com/learning/physical-ai/getting-started-with-isaac-lab/latest/an-introduction-to-robot-learning-and-isaac-lab/03-available-robots-and-environments/02-available-environments.html" rel="nofollow">https://docs.nvidia.com/learning/physical-ai/getting-started...</a><p>The caveat here is that you may not be able to use their environments, and you may or may not have their kind of robots to train your Roomba on. But at least you could get an idea of how RL training is done for robots like yours.</p>
]]></description><pubDate>Sun, 12 Apr 2026 19:15:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743300</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47743300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743300</guid></item><item><title><![CDATA[New comment by ghm2199 in "We have a 99% email reputation. Gmail disagrees"]]></title><description><![CDATA[
<p>This would require an inversion of dynamics, based on quantifying and collectively realizing a couple of things:<p>0. Emails suffer from a "misclassification of intent" issue on a time×attention scale. Imagine time of day/week/year on one axis and the user's attention on their inbox on the other. An email has to arrive at the right (x, y) point for a user to act on it. But they rarely do.<p>1. The well-being of a user is proportional to their current state of mind for receiving a message from X, which in turn is proportional to how likely they are to listen to what you have to say.<p>Both of these suggest a negotiation of messages between the two parties, much like when a bartender asks whether you want a refill and you can say yes/no.</p>
]]></description><pubDate>Sun, 12 Apr 2026 19:07:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743210</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47743210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743210</guid></item><item><title><![CDATA[New comment by ghm2199 in "We have a 99% email reputation, but Gmail disagrees"]]></title><description><![CDATA[
<p>It seems one needs to rethink email from first principles here. One idea is to use the notion of a "theory of mind" (ToM). E.g., the ToM between me and a sender would be for <i>both</i> of us to know: "I am not as excited as you are about your product launch, so sending it is 'spam' from my PoV."<p>We could use two negotiating agents: my agent knows what I care about now/today/a week ago and negotiates with an aspiring sender's agent <i>before</i> they send me any messages. E.g., I could set a policy (my ToM) for my agent like "Between 1:00 and 1:15 PM every day I want to read all the product announcements I subscribed to for XYZ product type". My agent would go talk to the aspiring sender's agent and fetch messages right then.<p>An alternative policy could be "I have some free time now; create a summary/gist of all announcements for products I might be interested in." The agents would negotiate with the sender to do the same.<p>Signup emails would be replaced by an agent that "creates" a ToM with the sender, with hard-stop dates. I would tell my agent: "I am interested in this logging service to compare different ones; I will not be interested once ENG-123 is closed", and my agent would tell the sender I am no longer interested when the time comes (i.e., when ENG-123 is closed).<p>Longer-term policies would just age out any message negotiations once I no longer like/care about those products.</p>
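The policy/negotiation idea sketched above could look roughly like this (a toy illustration only; `Policy`, `ReceiverAgent`, and the ticket-expiry rule are invented names, not a real protocol — ENG-123 is the hypothetical ticket from the comment):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    topic: str                           # e.g. "product-announcements"
    window: tuple                        # (start_hour, end_hour) when I read this topic
    expires_after: Optional[str] = None  # interest ends when this ticket closes

class ReceiverAgent:
    """My agent: holds my policies and answers senders' delivery requests."""
    def __init__(self):
        self.policies = []
        self.closed_tickets = set()

    def add_policy(self, policy):
        self.policies.append(policy)

    def close_ticket(self, ticket):
        self.closed_tickets.add(ticket)

    def negotiate(self, topic, hour):
        """A sender's agent asks: may I deliver a `topic` message at `hour`?"""
        for p in self.policies:
            if p.expires_after is not None and p.expires_after in self.closed_tickets:
                continue                 # this interest has aged out
            if p.topic == topic and p.window[0] <= hour < p.window[1]:
                return True
        return False

# Usage: announcements are accepted in the 1 PM window, refused once ENG-123 closes.
me = ReceiverAgent()
me.add_policy(Policy("product-announcements", (13, 14), expires_after="ENG-123"))
ok_before = me.negotiate("product-announcements", 13)   # True
me.close_ticket("ENG-123")
ok_after = me.negotiate("product-announcements", 13)    # False
```

The point of the sketch is that delivery becomes pull-shaped: the sender never gets a "yes" outside the receiver's declared windows or after the declared interest expires.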
]]></description><pubDate>Sun, 12 Apr 2026 19:02:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743160</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47743160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743160</guid></item><item><title><![CDATA[New comment by ghm2199 in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>For indie developers like myself: I often use the ChatGPT desktop and Claude desktop apps for arbitrary tasks, though my main workhorse is a customized coding harness with CC daemons on my NAS. With the apps, I missed having access to my NAS server where my dev environment is. So I wrote a filesystem MCP and hosted it behind a reverse proxy on my TrueNAS with Auth0. I wanted access to it from all platforms: ChatGPT mobile and desktop, and the same for CC.<p>With the ChatGPT and Claude desktop apps, my experience with MCPs connected to my home NAS is pretty poor. The app often times out fetching data (even though the logs show no latency serving the request), and often the existing connection gets invalidated between two chat turns, and ChatGPT just moves on, answering without the file in hand.<p>I am not using it for writing code; it's mostly read-only access to the filesystem. Has anyone surmounted these problems for this access pattern and written about how to build MCPs to be reliable?</p>
]]></description><pubDate>Fri, 10 Apr 2026 02:51:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47713042</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47713042</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47713042</guid></item><item><title><![CDATA[New comment by ghm2199 in "Instant 1.0, a backend for AI-coded apps"]]></title><description><![CDATA[
<p>One thing I have always wanted to do is cancel a remotely executing AI agent that I kicked off, while it streams its response part by part (a part could be words, a list of URLs, or whatever you want the FE to display). A good example is a web-researcher agent that searches and fetches web pages remotely and sends them back to the local sub-agent to summarize the results. This is something claude-code in the terminal does not quite provide. Would this be trivial to build in Instant?<p>Here is how I built it in a WUI: I sent SSE events from server to client streaming web-search progress, and the client could hit an `x` (cancel) box on the "parent" widget, sending the `id` from an SSE event back via a simple REST call. The `id` could belong to the parent web search or to specific URLs being fetched. Then whatever was yielding the SSE lines would check the db and cancel the send (assuming it had not already sent all the words).</p>
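A minimal sketch of that cancellation pattern (all names are made up; a real app would put `request_cancel` behind a REST endpoint and use a real database instead of a dict):

```python
# Shared "db" mapping stream id -> cancelled flag; stands in for a real table.
cancel_db = {}

def request_cancel(stream_id):
    """What the REST endpoint would do when the user clicks the `x` box."""
    cancel_db[stream_id] = True

def sse_stream(stream_id, parts):
    """Yield SSE-formatted lines, checking the db before each send so a
    cancel arriving mid-stream stops the remaining parts."""
    cancel_db.setdefault(stream_id, False)
    for part in parts:
        if cancel_db[stream_id]:
            yield f"event: cancelled\ndata: {stream_id}\n\n"
            return
        yield f"data: {part}\n\n"

# Usage: consume one part, cancel, and the stream ends with a cancel event.
gen = sse_stream("search-42", ["fetching url 1", "fetching url 2", "summary"])
first = next(gen)            # "data: fetching url 1\n\n"
request_cancel("search-42")  # user clicked the x box
rest = list(gen)             # only the cancellation event, no further parts
```

The design choice here is polling the flag at each yield rather than interrupting the producer, which keeps the generator simple and makes the cancellation point well defined.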
]]></description><pubDate>Thu, 09 Apr 2026 23:06:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711463</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47711463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711463</guid></item><item><title><![CDATA[New comment by ghm2199 in "Instant 1.0, a backend for AI-coded apps"]]></title><description><![CDATA[
<p>For people like me, who are somewhat familiar with how React/Jetpack Compose/Flutter-style frameworks work: I recall using React widgets/composables that seamlessly update once they register to receive updates to the underlying data model. The persistence boundary in these apps was the app/device where it was running; the data model was local. You still had to worry about pushing data updates to servers and back out to other devices/apps.<p>Instant crosses that persistence boundary: your app can propagate updates to anyone who has subscribed to the abstract datastore, which lives on a server somewhere, so you the engineer don't have to write that code. Right?<p>But how is this different from/better than things like, I want to say, Vercel/Next.js or the like that host similar infra?</p>
]]></description><pubDate>Thu, 09 Apr 2026 22:33:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711150</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47711150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711150</guid></item><item><title><![CDATA[New comment by ghm2199 in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>I read the TCP patch they submitted for BSD/Linux. Maybe I don't understand it well enough, but optimizing the use of a fuzzer to discover vulnerabilities (while releasing such a model is a threat, for sure) sounds like something reducible/generalizable to maze-solving abilities, as in ARC, except here the problem's boundaries are well defined.<p>It's quite hard to believe it took this much inference power ($20K, I believe) to find the TCP and H.264 classes of exploit. I feel like the training data / harness-based traces for security might be the innovation here, not the model.</p>
]]></description><pubDate>Wed, 08 Apr 2026 16:10:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47692190</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47692190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47692190</guid></item><item><title><![CDATA[New comment by ghm2199 in "Show HN: We built a camera only robot vacuum for less than $300 (well almost)"]]></title><description><![CDATA[
<p>Here is a thought: this is a fixed 3D environment and you lack training data, or at least an algorithm to train. Why not use RL to learn good trajectories?<p>Build a 3D model of your home/room in a game engine and generate images and trajectories to pretrain/train on; then, for each run, hand-label only the promising trajectories, i.e. the ones where the robot actually cleaned better. That might make it a good RL exercise. You could also place physical flags in the room, so that the robot gets rewarded whenever the camera gets close enough to one, automating those trajectory rewards.<p>I would begin in one room to practice this.</p>
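The physical-flag reward could be scored per trajectory roughly like this (a toy sketch; the coordinates, radius, and function names are all invented for illustration):

```python
import math

FLAG_RADIUS = 0.5  # metres within which the camera counts as "seeing" a flag

def trajectory_reward(trajectory, flags, radius=FLAG_RADIUS):
    """Sum a unit reward for each flag first reached along the trajectory.

    trajectory: list of (x, y) robot positions, in order.
    flags: list of (x, y) flag positions placed in the room.
    Each flag pays out once, on the first position that comes within radius.
    """
    remaining = set(flags)
    reward = 0.0
    for pos in trajectory:
        reached = {f for f in remaining if math.dist(pos, f) <= radius}
        reward += len(reached)
        remaining -= reached
    return reward

# Usage: a straight sweep along the wall passes two of the three flags.
flags = [(1.0, 0.0), (3.0, 0.0), (5.0, 5.0)]
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
r = trajectory_reward(path, flags)  # 2.0: the far flag is never reached
```

A dense, automatically computable reward like this is what lets you skip hand-labeling every run; the hand labels would then only be needed for tie-breaking trajectories with equal flag coverage.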
]]></description><pubDate>Wed, 08 Apr 2026 13:33:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690009</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47690009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690009</guid></item><item><title><![CDATA[New comment by ghm2199 in "Anthropic's Project Glasswing sounds necessary to me"]]></title><description><![CDATA[
<p>So: my home router, all the IoT devices attached to it from printers to projectors, not to mention custom stacks like Lutron, BLE-based locks, car key fobs.<p>All of these could technically have zero-day vulnerabilities, and the people/companies who made them don't have the resources to buy $20,000 worth of tokens to go debug them... Maybe they don't care, but if they do, what if they can't afford such models or can't get access in time?<p>I would like to know: how can someone like me defend against this?</p>
]]></description><pubDate>Wed, 08 Apr 2026 00:55:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47683346</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47683346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47683346</guid></item><item><title><![CDATA[New comment by ghm2199 in "Show HN: Brutalist Concrete Laptop Stand (2024)"]]></title><description><![CDATA[
<p>If you want to get a feel for what brutalist architecture is like up close, go to the Barbican in London if you can.<p>It's quite surreal: very much in-your-face concrete exposure. Yet to walk through and experience it with your own eyes is a study in contrasts. A giant, comparatively modern greenhouse has a glass roof open to the sky, and yet many floors have no light or windows at all. And in the outdoor spaces, like the fountain/canal running through the complex, the concrete sort of recedes into the background and lets you focus on everything else: the water, the swans, and the people around.<p>Juxtapose that with the low-hanging exposed concrete roofs and walls in the enclosed passages, which can make one feel constrained, claustrophobic, yearning for light.</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:44:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47676243</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47676243</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47676243</guid></item><item><title><![CDATA[New comment by ghm2199 in "Show HN: Ghost Pepper – Local hold-to-talk speech-to-text for macOS"]]></title><description><![CDATA[
<p>I've been using Handy for a month and it's awesome. I mainly use it with coding agents or when I don't want to type into text boxes. How is this different?<p>Part of the reason Handy is awesome is that it uses some of the same Rust infra for integrating with the model, which actually makes it possible to use the code as a library on Android or iOS. I have an Android app that runs a local model on the phone too, using this.</p>
]]></description><pubDate>Tue, 07 Apr 2026 02:50:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670185</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47670185</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670185</guid></item><item><title><![CDATA[New comment by ghm2199 in "Launch HN: Freestyle – Sandboxes for Coding Agents"]]></title><description><![CDATA[
<p>O(1)?! What might bring it down to, say, tens of ms? It looks like 500 is some kind of optimizable wall for everything.<p>With 10ms, online replication/backup (analogous to Litestream for SQLite, but for in-memory processes) becomes feasible, no?</p>
]]></description><pubDate>Tue, 07 Apr 2026 02:45:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670157</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47670157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670157</guid></item><item><title><![CDATA[New comment by ghm2199 in "Launch HN: Freestyle – Sandboxes for Coding Agents"]]></title><description><![CDATA[
<p>For Postgres there are pg containers; we use them in pytest fixtures for thousands of unit tests running concurrently. I imagine you could run them for integration-test purposes too. What kind of testing would you run with these that can't be run with pg containers, or isn't covered by conventional testing?<p>I'll say this is still quite a useful win for browser-control use cases, and also for debugging their crashes.</p>
]]></description><pubDate>Tue, 07 Apr 2026 02:14:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47669947</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47669947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47669947</guid></item><item><title><![CDATA[Ask HN: How do you use AI coding harnesses for individual development?]]></title><description><![CDATA[
<p>I'd like to limit the scope to indie development of products and services.<p>1. Cost: I use HumanLayer, which uses the Claude Code Go binary with the Max/Pro plan, so my costs are capped. Is this popular for medium(ish) use? (I find the Max plan sufficient for mine.) But I fear API use might be too expensive. Does someone have a way to compare the cost vs. benefit of these two cases based on usage?<p>2. The other main thing I value is visibility, e.g. displaying thought tokens and sub-agent input and output in the WUI. HumanLayer and Pi provide this. Which one do you use, and what do you like about it? I find that HumanLayer, using the Claude Code SDK binary, is quite good at this, though SDK-based harnesses might be easier to code and customize.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47641994">https://news.ycombinator.com/item?id=47641994</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 04 Apr 2026 18:45:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47641994</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47641994</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47641994</guid></item><item><title><![CDATA[New comment by ghm2199 in "DRAM pricing is killing the hobbyist SBC market"]]></title><description><![CDATA[
<p>You are people. You are also in the minority.</p>
]]></description><pubDate>Sat, 04 Apr 2026 18:34:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47641902</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47641902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47641902</guid></item><item><title><![CDATA[New comment by ghm2199 in "DRAM pricing is killing the hobbyist SBC market"]]></title><description><![CDATA[
<p>People hate AI. This will make them hate it even more. Yet somehow the market has convinced/forced us to use these products even though we might not want to.<p>A recent nationwide poll[1] shows AI has a poorer approval rating than ICE (ICE!), probably due to its overlords being "those" SV types. Every day, AI features are shoved down our throats. I can't even choose not to install Gemini-related apps on my Android when I select "which apps to install" while setting up a new phone.<p>But people are a weird bunch. They largely don't buy products that align with their values. No one is jumping up and down for Graphene phones, even if they have amazing privacy-first software. People buy 6 mi/gal Hummers and iPhones for fashion, brand, money, and convenience/function. The pain threshold of all the bad effects is still not high enough for people to quit these products in any meaningful way. Values and privacy are way down the list. I wish people would not buy/install AI features from big tech and would be more discerning, but that is likely a pipe dream.<p>[1] <a href="https://pos.org/wp-content/uploads/2026/03/260072-NBC-March-2026-Poll-03-08-2026-Release.pdf" rel="nofollow">https://pos.org/wp-content/uploads/2026/03/260072-NBC-March-...</a></p>
]]></description><pubDate>Thu, 02 Apr 2026 03:02:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47609510</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47609510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47609510</guid></item><item><title><![CDATA[New comment by ghm2199 in "Iran war sparks renewables boom as Europeans rush to buy solar, heat pumps, EVs"]]></title><description><![CDATA[
<p>If it were so, I would exclaim: "What a poor vessel we have found to do this work." Sigh...</p>
]]></description><pubDate>Wed, 01 Apr 2026 16:26:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47603018</link><dc:creator>ghm2199</dc:creator><comments>https://news.ycombinator.com/item?id=47603018</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47603018</guid></item></channel></rss>