<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: pgroves</title><link>https://news.ycombinator.com/user?id=pgroves</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 16:23:11 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=pgroves" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by pgroves in "Vibe coding kills open source"]]></title><description><![CDATA[
<p>I was expecting this to be the point of the article when I saw the title. Popular projects appear to be drowning in PRs that are almost certainly AI-generated. OpencodeCli has 1200 open at the moment [1]. Aider, which is sort of abandoned, has 200 [2]. AFAIK, both projects are mostly maintained by a single person.<p>[1] <a href="https://github.com/anomalyco/opencode/pulls" rel="nofollow">https://github.com/anomalyco/opencode/pulls</a>
[2] <a href="https://github.com/Aider-AI/aider/pulls" rel="nofollow">https://github.com/Aider-AI/aider/pulls</a></p>
]]></description><pubDate>Mon, 26 Jan 2026 15:34:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46766898</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=46766898</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46766898</guid></item><item><title><![CDATA[New comment by pgroves in "Show HN: Terminal UI for AWS"]]></title><description><![CDATA[
<p>All the use cases that popped into my head when I saw this were around how nice it would be to quickly see what was really happening without trying to flip between logs and the AWS console. That's really how I use k9s, and I wouldn't be able to stand k8s without it. I almost never make any changes from inside k9s. But yeah... I could see using this with a role that only has read permissions on everything.</p>
]]></description><pubDate>Mon, 05 Jan 2026 04:52:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46495450</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=46495450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46495450</guid></item><item><title><![CDATA[New comment by pgroves in "AI is forcing us to write good code"]]></title><description><![CDATA[
<p>Thinking about this some more, maybe I wasn't considering simulators (aka digital twins), which are supposed to be able to create fairly reliable feedback loops without building things in reality. E.g., will this plane design be able to take off? Still, I feel fortunate that I only have to write unit tests to get a bit of contact with reality.</p>
]]></description><pubDate>Tue, 30 Dec 2025 06:50:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46430302</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=46430302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46430302</guid></item><item><title><![CDATA[New comment by pgroves in "AI is forcing us to write good code"]]></title><description><![CDATA[
<p>This is sort of why I think software development might be the only real application of LLMs outside of entertainment. We can build ourselves tight little feedback loops that other domains can't. I somewhat frequently agree on a plan with an LLM, and a few minutes or hours later find out it doesn't work, and then the LLM is like "that's why we shouldn't have done it like that!". Imagine building a house from scratch and finding out that it used some American websites to spec out your electrical system, and not noticing the problem until you're installing your Canadian dishwasher.</p>
]]></description><pubDate>Mon, 29 Dec 2025 23:27:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46427349</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=46427349</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46427349</guid></item><item><title><![CDATA[New comment by pgroves in "Gemini 3"]]></title><description><![CDATA[
<p>I was hoping Bash would go away or get replaced at some point. It's starting to look like it's going to be another 20 years of Bash but with AI doodads.</p>
]]></description><pubDate>Tue, 18 Nov 2025 17:38:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45969435</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=45969435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45969435</guid></item><item><title><![CDATA[New comment by pgroves in "Show HN: Semantic Grep – A Word2Vec-powered search tool"]]></title><description><![CDATA[
<p>How fast is it?</p>
]]></description><pubDate>Sat, 27 Jul 2024 21:09:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=41089355</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=41089355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41089355</guid></item><item><title><![CDATA[New comment by pgroves in "Ask HN: How do you deal with slight memory impairment?"]]></title><description><![CDATA[
<p>I've never had that great of a memory. The upside is that you can have a bad memory and good note-taking skills and be more effective than the 'good memory' people. Really it's just that I forget in a day what other people forget in a week, so it's not that big of a gap. But some considerations:<p>1. Put everything in the issue tracker that you can. This includes notes on what actually happened when you did the work. Include technical details.<p>2. Try to push everyone else to use the issue tracker. It also makes you sound like the professional in the room.<p>3. Have a very lightweight note-taking mechanism and use it as much as possible. I am gud at vim so I use the Voom plugin (which just treats markdown headings as an outline, but it's enough to store a ton of notes in a single .md file). Don't try to make these notes good enough to share, as that adds too much overhead.<p>4. Always take your own notes in a meeting.<p>5. I will revisit my notes on a project from time to time, and sometimes walk through all of them, but I'm not really treating them like flashcards to memorize. I'm just looking for things that might need some renewed attention. Same with the backlog.<p>6. In general, I don't try to improve my memory because I don't know what I'll need to know within a week vs. what I won't look at again for a year. So I focus on being systematic about having good-enough notes on everything and don't really expect to remember anything. (I do remember some things, but it's random.)</p>
]]></description><pubDate>Tue, 07 Jun 2022 23:30:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=31661325</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=31661325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31661325</guid></item><item><title><![CDATA[New comment by pgroves in "We analyzed 100K technical interviews to see where the best performers work"]]></title><description><![CDATA[
<p>And the implication is that the 'quality' of engineers at the companies is actually reversed - the top performers at Dropbox are struggling and leaving while the underperformers at FANG are struggling and leaving.</p>
]]></description><pubDate>Thu, 31 Mar 2022 18:42:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=30870821</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=30870821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30870821</guid></item><item><title><![CDATA[New comment by pgroves in "Should you use Let's Encrypt for internal hostnames?"]]></title><description><![CDATA[
<p>This looks kind of interesting. I might try this. Thanks.</p>
]]></description><pubDate>Wed, 05 Jan 2022 17:24:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=29811903</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=29811903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29811903</guid></item><item><title><![CDATA[New comment by pgroves in "Should you use Let's Encrypt for internal hostnames?"]]></title><description><![CDATA[
<p>Fair enough, although that seems rather complicated for those of us just trying to get a quick cert for an internal host. The LetsEncrypt forums are full of this discussion:<p>[1] <a href="https://community.letsencrypt.org/t/whitelisting-le-ip-addresses-ranges-in-firewall/45190/6" rel="nofollow">https://community.letsencrypt.org/t/whitelisting-le-ip-addre...</a>
[2] <a href="https://community.letsencrypt.org/t/whitelist-hostnames-for-certbot-validation/115842/2" rel="nofollow">https://community.letsencrypt.org/t/whitelist-hostnames-for-...</a>
[3] <a href="https://community.letsencrypt.org/t/letsencrypt-ip-addresses-its-actually-pretty-important/129760/2" rel="nofollow">https://community.letsencrypt.org/t/letsencrypt-ip-addresses...</a></p>
]]></description><pubDate>Wed, 05 Jan 2022 17:21:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=29811863</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=29811863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29811863</guid></item><item><title><![CDATA[New comment by pgroves in "Should you use Let's Encrypt for internal hostnames?"]]></title><description><![CDATA[
<p>Another nuisance is that unencrypted port 80 must be open to the outside world to do the ACME negotiation (LE servers must be able to talk to your ACME client running at the subdomain that wants a cert). They also intentionally don't publish a list of IPs that LetsEncrypt might be coming from [1]. So opening firewall ports on machines that are specifically internal hosts has to be part of any renewal scripts that run every X days. Kinda sucks IMO.<p>[1] <a href="https://letsencrypt.org/docs/faq/#what-ip-addresses-does-let-s-encrypt-use-to-validate-my-web-server" rel="nofollow">https://letsencrypt.org/docs/faq/#what-ip-addresses-does-let...</a><p>UPDATE: Apparently there is a DNS-based solution that I wasn't aware of.</p>
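The DNS-based alternative sidesteps the port-80 problem entirely: with the DNS-01 challenge the CA checks a TXT record instead of connecting inbound, so no firewall hole is needed. A minimal sketch of how the record is derived per RFC 8555 (the domain and key-authorization string below are made-up placeholders):

```python
# Sketch of ACME DNS-01 (RFC 8555): instead of serving a file over port 80,
# the client publishes a TXT record at _acme-challenge.<domain> whose value
# is the unpadded base64url SHA-256 of the key authorization string.
# "internal.example.com" and the key authorization are placeholder values.
import base64
import hashlib

def dns01_txt_record(domain: str, key_authorization: str) -> tuple[str, str]:
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return f"_acme-challenge.{domain}", value

name, value = dns01_txt_record("internal.example.com", "sometoken.someKeyThumbprint")
print(name)  # _acme-challenge.internal.example.com
```

In practice a client like certbot automates publishing and removing the record through a DNS provider plugin, so the internal host never has to accept any inbound connection.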
]]></description><pubDate>Wed, 05 Jan 2022 17:10:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=29811701</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=29811701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29811701</guid></item><item><title><![CDATA[New comment by pgroves in "Realtime Postgres Row Level Security"]]></title><description><![CDATA[
<p>There are lots of simple things that are normally easier to do in the web framework that are suddenly easier to do in the database (with the side effect that you can do DB optimizations much more easily).<p>But the other consideration is that you likely need to do a lot with a reverse proxy like traefik to have much control over what you are really exposing to the outside world. PostgREST is not Spring; it doesn't have explicit control over every little thing, so you're likely to need something in front of it. Anyway, the point is that having a simple Flask server with a few endpoints running wouldn't complicate the architecture very much b/c you are better off with something in front of it doing routing already (and SSL termination, etc.).</p>
]]></description><pubDate>Wed, 01 Dec 2021 18:39:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=29407421</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=29407421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29407421</guid></item><item><title><![CDATA[New comment by pgroves in "Realtime Postgres Row Level Security"]]></title><description><![CDATA[
<p>I'm on a POC project that's using PostgREST and it's been extremely fast to get a big complicated data model working with an API in front of it. But I guess I don't get how to really use this thing in production. What does devops look like? Do you have sophisticated DB migrations with every deploy? Is all the SQL in version control?<p>I also don't really get where the users with all the row-level permissions get created in Postgres. The docs are all about auth for users that are already in there.</p>
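On the user-creation question, the usual pattern (a sketch assumed from PostgREST's documented conventions, not from this thread; the table and role names are made up) is that application users are ordinary rows, not Postgres roles: every API request runs under one shared role, and RLS policies read the caller's identity out of the JWT claims PostgREST exposes as a per-request setting.

```python
# Sketch of a typical PostgREST row-level-security setup (assumed pattern
# based on PostgREST's documented conventions; names are hypothetical).
# All API requests run as the shared "web_user" role; the policy filters
# rows on the JWT "sub" claim, which PostgREST publishes per request via
# the request.jwt.claims setting.
SETUP_SQL = """
CREATE ROLE web_user NOLOGIN;
CREATE TABLE todos (id serial PRIMARY KEY, owner text NOT NULL, task text);
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;
CREATE POLICY todos_owner ON todos TO web_user
  USING (owner = current_setting('request.jwt.claims', true)::json ->> 'sub');
GRANT SELECT, INSERT, UPDATE, DELETE ON todos TO web_user;
"""
print(SETUP_SQL)
```

Under this pattern, "creating a user" is just inserting a row from a signup endpoint, with no per-user DDL; and this SQL lives in version-controlled migration files like any other schema change, which is also roughly the answer to the devops question.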
]]></description><pubDate>Wed, 01 Dec 2021 16:45:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=29405795</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=29405795</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29405795</guid></item><item><title><![CDATA[New comment by pgroves in "Minus"]]></title><description><![CDATA[
<p>That's what I want... this would force me to make a different account for every topic I might comment/post on, and they can have their own local networks. If it's a topic that I know a lot about (eg what I do at my day job), it would force a fresh start every few years.<p>This is in contrast to my twitter account, which is such a mess that I don't like posting b/c "most" people who will see it followed me for some other topic.</p>
]]></description><pubDate>Mon, 06 Sep 2021 19:44:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=28437223</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=28437223</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28437223</guid></item><item><title><![CDATA[New comment by pgroves in "Mozilla VPN Completes Independent Security Audit by Cure53"]]></title><description><![CDATA[
<p>Ok but how do I know I should trust Cure53?</p>
]]></description><pubDate>Wed, 01 Sep 2021 18:36:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=28383887</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=28383887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28383887</guid></item><item><title><![CDATA[New comment by pgroves in "Ask HN: What problem are you close to solving and how can we help?"]]></title><description><![CDATA[
<p>This is a research university that moves very slowly, so waiting two years for something better is actually a possibility (and prerendering to S3 works OK for now). I'll keep this bookmarked.</p>
]]></description><pubDate>Sun, 29 Aug 2021 19:13:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=28349582</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=28349582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28349582</guid></item><item><title><![CDATA[New comment by pgroves in "Ask HN: What problem are you close to solving and how can we help?"]]></title><description><![CDATA[
<p>Thanks, I hadn't heard of that and I will look into it. This is a research setting with plenty of hardware we can request and not a huge number of users so that part doesn't worry me.</p>
]]></description><pubDate>Sun, 29 Aug 2021 19:12:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=28349571</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=28349571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28349571</guid></item><item><title><![CDATA[New comment by pgroves in "Ask HN: What problem are you close to solving and how can we help?"]]></title><description><![CDATA[
<p>This is on our list of possibilities. It would take a little more time than I'd like to spend on this problem but it would work.</p>
]]></description><pubDate>Sun, 29 Aug 2021 19:10:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=28349562</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=28349562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28349562</guid></item><item><title><![CDATA[New comment by pgroves in "Ask HN: What problem are you close to solving and how can we help?"]]></title><description><![CDATA[
<p>How to make PNG encoding much faster? I'm working with large medical images, and after a bit of work we can do all the needed processing in under a second (numpy/scipy methods). But then the encoding to PNG takes 9-15 seconds. As a result we have to pre-render all possible configurations and put them on S3 b/c we can't do the processing on demand in a web request.<p>Is there a way to use multiple threads or a GPU to encode PNGs? I haven't been able to find anything. The images are 3500x3500px and compress from roughly 50 MB to 15 MB with maximum compression (so don't say to use lower compression).</p>
]]></description><pubDate>Sun, 29 Aug 2021 16:46:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=28348262</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=28348262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28348262</guid></item><item><title><![CDATA[New comment by pgroves in "Why are hyperlinks blue?"]]></title><description><![CDATA[
<p>My boss did UI/UX on Mosaic (we are both at NCSA today). I will ask her on Tuesday when I see her. She has lots of wild stories about why things are the way they are.</p>
]]></description><pubDate>Thu, 26 Aug 2021 18:32:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=28318736</link><dc:creator>pgroves</dc:creator><comments>https://news.ycombinator.com/item?id=28318736</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28318736</guid></item></channel></rss>