<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: inder1</title><link>https://news.ycombinator.com/user?id=inder1</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 21:39:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=inder1" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by inder1 in "Debian decides not to decide on AI-generated contributions"]]></title><description><![CDATA[
<p>debian's stance actually makes more sense than a ban. you can't reliably detect the origin of a contribution. what you can do is hold the contributor accountable for what they submit. the problem is when people use that accountability structure without the skill to back it up. the asymmetric-warfare framing in the comments is right: the cost to submit low-quality PRs is near zero now, but the cost to review them hasn't changed. the maintainer burden goes up, not down. that's the pattern worth paying attention to: AI compresses the cost of producing output, but the responsibility for output quality doesn't compress with it.</p>
]]></description><pubDate>Thu, 12 Mar 2026 17:08:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47354013</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47354013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47354013</guid></item><item><title><![CDATA[New comment by inder1 in "Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy"]]></title><description><![CDATA[
<p>the skills that protect against displacement long-term are exactly the ones vibe coding erodes. an engineer who built with AI but never developed the instincts to spot its mistakes has a gap they don't know they have. this maintainer problem is a preview: when you can't tell the difference between a PR from someone who understood the code and one from someone who just prompted their way into it, the verification burden doesn't disappear. it shifts to whoever has enough skill to catch the errors.</p>
]]></description><pubDate>Tue, 10 Mar 2026 16:30:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47325482</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47325482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47325482</guid></item><item><title><![CDATA[New comment by inder1 in "AI doesn't replace white collar work"]]></title><description><![CDATA[
<p>The capability-vs-adoption gap is the real story. Anthropic's data shows LLMs can theoretically handle 94% of computer and math tasks, but actual usage is around 33%. Entry-level hiring has slowed most in exposed roles, not because AI is already doing those jobs, but because companies have stopped hiring while they wait and see.</p>
]]></description><pubDate>Mon, 09 Mar 2026 18:45:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47313518</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47313518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47313518</guid></item><item><title><![CDATA[Show HN: Salvobase – MongoDB-compatible DB in Go maintained by AI agents]]></title><description><![CDATA[
<p>MongoDB is great until you read the SSPL. Then you're either paying Atlas prices, running an old 4.x build, or pretending FerretDB is production-ready. We built a third option.<p>Salvobase is a MongoDB wire-protocol-compatible document database written in Go. Point any Mongo driver at it and it works: no driver changes, no config changes. It's Apache 2.0, so you can embed it in a commercial product without a legal conversation.<p>What it does:<p>- Full CRUD, indexes (single, compound, unique, text, TTL, partial, wildcard), and most of the aggregation pipeline ($match, $group, $lookup, $unwind, $facet, etc.)<p>- SCRAM-SHA-256 auth<p>- bbolt storage engine: one .db file per database, Snappy-compressed BSON<p>- Built-in Prometheus metrics at :27080/metrics (no exporter needed)<p>- Built-in REST/JSON API at :27080/api/v1/ (MongoDB's equivalent is paid Atlas)<p>- Per-tenant rate limiting, audit logging, 1-second TTL precision, SIGHUP hot reload<p>- "make build && make dev" and you're running<p>What it doesn't do:<p>No replication. No sharding. No change streams. No multi-document transactions (stubbed). No $where or mapReduce (intentional: security + complexity). Single-node only. If you need a distributed MongoDB replacement, this isn't it yet. But we hope it will become one, built by agents.<p>The weird part:<p>The codebase is maintained by AI agents. Not "AI-assisted": the agents pick issues from the backlog, write code, submit PRs, review each other's PRs, and merge. There's a formal protocol (https://github.com/inder/salvobase/blob/master/AGENT_PROTOCOL.md) covering identity, trust tiers, anti-collusion rules for reviews, claim timeouts, and a kill switch. Humans set direction; agents do the execution.<p>We're curious whether autonomous agent maintenance can sustain a real open source project over time, not just generate initial code.
<p>* If you want to donate an agent, drop this prompt into Claude Code, Cursor, Aider, Devin, whatever: "Fork/clone github.com/inder/salvobase, read QUICKSTART.md, and start contributing."<p>GitHub: <a href="https://github.com/inder/salvobase" rel="nofollow">https://github.com/inder/salvobase</a><p>Thank you.</p>
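<p>To give a concrete sense of what the listed aggregation stages do, here is a pure-Python sketch of $match and $group semantics. This is a teaching illustration only, not Salvobase's actual implementation (which is in Go):</p>

```python
# Illustrative semantics of two aggregation stages from the feature list:
# $match filters documents; $group buckets them and accumulates (a {"$sum": 1}
# counter here). A teaching sketch, not Salvobase's code.

def match(docs, predicate):
    """$match: keep documents whose fields equal the predicate's values."""
    return [d for d in docs if all(d.get(k) == v for k, v in predicate.items())]

def group_sum(docs, key, count_field):
    """$group with a {"$sum": 1} accumulator: count documents per key value."""
    counts = {}
    for d in docs:
        k = d.get(key)
        counts[k] = counts.get(k, 0) + 1
    return [{"_id": k, count_field: n} for k, n in counts.items()]

docs = [
    {"kind": "signup", "user": "alice"},
    {"kind": "signup", "user": "bob"},
    {"kind": "login", "user": "alice"},
]
signups = match(docs, {"kind": "signup"})
print(group_sum(signups, "kind", "n"))  # [{'_id': 'signup', 'n': 2}]
```

<p>A real Mongo driver would send the equivalent pipeline, [{"$match": {"kind": "signup"}}, {"$group": {"_id": "$kind", "n": {"$sum": 1}}}], over the wire protocol and let the server do this work.</p>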
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47304607">https://news.ycombinator.com/item?id=47304607</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 09 Mar 2026 03:35:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304607</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47304607</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304607</guid></item><item><title><![CDATA[New comment by inder1 in "We might all be AI engineers now"]]></title><description><![CDATA[
<p>The "K-shaped workforce" framing is real and probably underappreciated. Senior engineers get more out of AI because they can evaluate the output, catch the architectural mistakes, and debug the edge cases. Juniors using AI to write code they can't read aren't building the debugging instinct that makes senior engineers valuable. That gap compounds over 2-3 years. The question isn't whether to use AI. It's whether you're actually understanding what it produces.</p>
]]></description><pubDate>Sun, 08 Mar 2026 06:16:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47295008</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47295008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47295008</guid></item><item><title><![CDATA[New comment by inder1 in "Building an Elite AI Engineering Culture in 2026"]]></title><description><![CDATA[
<p>The "senior gets better results" dynamic is real and probably the most underappreciated fact in the AI-jobs debate. The question isn't whether AI writes code. It's whether the person steering it has enough context to catch what's wrong. Juniors learning on AI-generated output may end up with surface fluency but no debugging instincts. That gap will matter a lot in 2-3 years.</p>
]]></description><pubDate>Thu, 05 Mar 2026 16:57:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47264075</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47264075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47264075</guid></item><item><title><![CDATA[New comment by inder1 in "India's top court angry after junior judge cites fake AI-generated orders"]]></title><description><![CDATA[
<p>The failure mode here is predictable. Junior practitioners in any domain are being asked to use AI tools before they've developed the professional judgment to validate the outputs. You can't spot a hallucinated court order if you don't know what real court orders look like. The tool isn't the problem. The training pipeline that skips fundamentals is.</p>
]]></description><pubDate>Tue, 03 Mar 2026 19:01:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47237078</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47237078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47237078</guid></item><item><title><![CDATA[New comment by inder1 in "Ask HN: My YC company is hiring one engineer/day but there's not enough work"]]></title><description><![CDATA[
<p>the correction you're describing is real. the timing is unknowable but the direction isn't.<p>two things tend to protect people in this scenario: domain knowledge that's hard to hand off, and a visible productivity multiplier with AI tools. engineers who can demonstrate 5-10x throughput are the last to go. the ones doing standard work at a standard pace are not, regardless of seniority.<p>the actionable move is to close that gap now, while you still have access to good tooling and time to build the habit. the market can stay irrational longer than your job security can hold out, but the productivity gap is something you can actually control.</p>
]]></description><pubDate>Mon, 02 Mar 2026 00:27:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47212354</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47212354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47212354</guid></item><item><title><![CDATA[New comment by inder1 in "OpenAI raises $110B on $730B pre-money valuation"]]></title><description><![CDATA[
<p>$730B on roughly $11B ARR is ~66x. Microsoft trades at 12x. The market is pricing OpenAI as infrastructure, not software. That's probably right. The implication is that the productivity gains accrue to whoever controls the infrastructure layer. The workers and companies building on top of it are taking the displacement risk without the upside.</p>
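<p>The multiple in the first sentence is just the quoted figures divided through, for what it's worth:</p>

```python
# Back-of-envelope check on the quoted figures (both in $B).
valuation = 730   # pre-money valuation, as quoted
arr = 11          # rough annual recurring revenue, as quoted
print(round(valuation / arr))  # 66
```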
]]></description><pubDate>Sat, 28 Feb 2026 07:31:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191705</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47191705</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191705</guid></item><item><title><![CDATA[New comment by inder1 in "Layoffs at Block"]]></title><description><![CDATA[
<p>The $2M+ gross profit per person target is the number that actually explains this. That's 4x their pre-covid baseline. Every company that over-hired during ZIRP is running this same math correction. The AI framing helps the stock. The underlying correction was always coming.</p>
]]></description><pubDate>Fri, 27 Feb 2026 18:05:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47183487</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47183487</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47183487</guid></item><item><title><![CDATA[New comment by inder1 in "Ask HN: In the Age of AI, How Do I Grow as a Software Engineer?"]]></title><description><![CDATA[
<p>The "AI kills software jobs" framing and the "AI is just a tool" framing are both wrong in the same way — they treat it as uniform.<p>What actually seems to be happening is that AI compresses the value of generic output and amplifies the value of domain judgment. A developer who knows how a specific industry works, what the edge cases are, why the legacy system was built the way it was — that context isn't in the training data. It compounds.<p>The engineers I've seen struggle most aren't the ones in AI-exposed roles. They're the ones in AI-exposed roles who've optimized for output volume rather than judgment depth. Those two things used to correlate. They no longer do.</p>
]]></description><pubDate>Wed, 25 Feb 2026 06:06:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47147923</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47147923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47147923</guid></item><item><title><![CDATA[New comment by inder1 in "Global Intelligence Crisis"]]></title><description><![CDATA[
<p>The feedback loop described here is what stuck with me — AI improves, companies cut headcount, savings go back into AI, AI improves. No natural brake.<p>The article puts a specific number on it: a $180K PM replaced by a $200/mo AI agent. I've been building a tool that lets you run this kind of scenario on your own career — it scores your AI exposure and simulates paths that reduce it.<p>One thing I've found from running hundreds of simulations: augmenting your current career with AI consistently leads to better financial outcomes over 5-10 years than pivoting to a new field entirely.<p>The best move isn't to run — it's to adapt in place.<p>Free to try: parallaxapp.world</p>
]]></description><pubDate>Mon, 23 Feb 2026 01:09:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47116752</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47116752</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47116752</guid></item><item><title><![CDATA[New comment by inder1 in "Show HN: Parallax – See how exposed you are to AI disruption and make a plan"]]></title><description><![CDATA[
<p>Use code PROMO for a 100% discount at Stripe checkout. It makes paying for the app optional.</p>
]]></description><pubDate>Thu, 19 Feb 2026 18:59:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47077620</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47077620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47077620</guid></item><item><title><![CDATA[New comment by inder1 in "Show HN: Parallax – See how exposed you are to AI disruption and make a plan"]]></title><description><![CDATA[
<p>My son had questions about his career and couldn't answer them. So I built Parallax — a tool that helps you explore career options and arrive at your own answers.<p>Upload your resume, get an AI exposure score (0-100), then run simulations to see what happens if you stay and upskill, pivot to adjacent work, or reset entirely. Year-by-year timelines. Trade-offs made visceral.<p>* The problem<p>People know AI is disrupting careers. What they don't know is:<p>- How exposed they specifically are, given their actual career history<p>- What their realistic options are — not generic advice, but paths modeled on their timeline<p>- What each option costs them in money, stress, and life satisfaction<p>* What Parallax does<p>It's a flight simulator for your career. Not advice — consequence exploration.<p>The flow (takes ~5 minutes):<p>1) Upload resume → AI calculates your AI exposure score<p>2) Five-question interview (finances, learning appetite, stability preference)<p>3) LLM generates up to 9 futures: 3 strategies × 3 timing scenarios<p>4) Pick which paths to compare<p>5) Compare dashboard shows net worth, stress, fulfillment, AI exposure, drift<p>* Three recovery strategies:<p>1) Augment (blue): stay, upskill with AI tools<p>2) Pivot (amber): lateral move, lower AI exposure<p>3) Reset (purple): start something fundamentally different<p>* Tech<p>Next.js 14 + TypeScript, LLM-powered<p>No database — fully client-side (localStorage)<p>Zero tracking. Your career data never leaves your browser.<p>Privacy-first by design<p>* Pricing<p>Free: AI exposure score + interview + briefing<p>Pro ($10/mo): full compare dashboard + timeline exploration<p>This app is compute-intensive, so I added a paywall to cover my compute costs.<p>Built solo over 2 months. Would love your feedback.</p>
]]></description><pubDate>Thu, 19 Feb 2026 18:51:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47077517</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47077517</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47077517</guid></item><item><title><![CDATA[Show HN: Parallax – See how exposed you are to AI disruption and make a plan]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.parallaxapp.world/">https://www.parallaxapp.world/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47077405">https://news.ycombinator.com/item?id=47077405</a></p>
<p>Points: 1</p>
<p># Comments: 2</p>
]]></description><pubDate>Thu, 19 Feb 2026 18:44:41 +0000</pubDate><link>https://www.parallaxapp.world/</link><dc:creator>inder1</dc:creator><comments>https://news.ycombinator.com/item?id=47077405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47077405</guid></item></channel></rss>