<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: grandimam</title><link>https://news.ycombinator.com/user?id=grandimam</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 06:21:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=grandimam" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: Building your first ASGI framework – step-by-step lessons]]></title><description><![CDATA[
<p>I am writing a series of lessons on building an ASGI framework from scratch. The goal is to develop a deeper understanding of how frameworks like FastAPI and Starlette work.<p>A strong motivation for this: I have been using AI to write code lately. I prompt, I get code, it works. But somewhere along the way I noticed I had stopped caring about what is actually happening. So this is an attempt to think beyond prompts and build deeper mental models of the tools we use every day. I am not sure how useful this will be, but I believe there are good lessons to be learned by doing it.<p>The series works as a follow-along where each lesson builds on the previous one. By the end, you will have built something similar to Starlette, and you will actually understand how it works.<p>- <a href="https://github.com/grandimam/papuli/blob/main/docs/01-foundation.md" rel="nofollow">https://github.com/grandimam/papuli/blob/main/docs/01-founda...</a><p>- <a href="https://github.com/grandimam/papuli/blob/main/docs/02-helloworld.md" rel="nofollow">https://github.com/grandimam/papuli/blob/main/docs/02-hellow...</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47547791">https://news.ycombinator.com/item?id=47547791</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 27 Mar 2026 20:26:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47547791</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=47547791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47547791</guid></item><item><title><![CDATA[Show HN: Pure Python web framework using free-threaded Python]]></title><description><![CDATA[
<p>Barq is an experimental HTTP framework built entirely in pure Python, designed for free-threaded Python 3.13 (PEP 703). No async/await, no C extensions - just threads with true parallelism. The question I wanted to answer: now that Python has a no-GIL mode, can a simple threaded server beat async frameworks?<p>Results against FastAPI (100 concurrent clients):<p>- JSON: 8,400 req/s vs 4,500 req/s (+87%)<p>- CPU-bound: 1,425 req/s vs 266 req/s (+435%)<p>The CPU-bound result is the interesting one. Async can't parallelize CPU work - it's fundamentally single-threaded. With free-threaded Python, adding more threads actually scales:<p>- 4 threads: 608 req/s<p>- 8 threads: 1,172 req/s (1.9x)<p>- 16 threads: 1,297 req/s (2.1x)<p>The framework is ~500 lines across 5 files. Key implementation choices:<p>- ThreadPoolExecutor for workers<p>- HTTP/1.1 keep-alive connections<p>- Radix tree router, so lookup cost scales with path length rather than route count<p>- Pydantic for validation<p>- Optional orjson for faster serialization<p>This is experimental and not production-ready, but it's an interesting data point for what's possible when Python drops the GIL.<p>Code: <a href="https://github.com/grandimam/barq" rel="nofollow">https://github.com/grandimam/barq</a></p>
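The worker-pool design listed above can be sketched in a few lines of stdlib Python: a blocking accept loop that hands each connection to a ThreadPoolExecutor. This is a hypothetical simplification (no keep-alive, no routing), not Barq's actual code:

```python
# Minimal threaded HTTP server sketch: each accepted connection is
# handled on a pool thread. On a free-threaded build these handlers
# can run CPU-bound work in true parallel; on a GIL build they cannot.
import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn: socket.socket) -> None:
    with conn:
        conn.recv(65536)  # read the request (headers ignored in this sketch)
        body = b"hello"
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"Connection: close\r\n\r\n" + body
        )

def serve(host: str = "127.0.0.1", port: int = 8001, workers: int = 8) -> None:
    pool = ThreadPoolExecutor(max_workers=workers)
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _ = srv.accept()
            pool.submit(handle, conn)  # each request runs on its own thread
```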
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47200761">https://news.ycombinator.com/item?id=47200761</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 28 Feb 2026 22:01:28 +0000</pubDate><link>https://github.com/grandimam/barq</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=47200761</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47200761</guid></item><item><title><![CDATA[Show HN: A pure Python HTTP Library built on free-threaded Python]]></title><description><![CDATA[
<p>Hey HN,<p>I built a small HTTP framework to experiment with free-threaded Python and wanted to share some observations. Barq is ~500 lines of pure Python: no C extensions, no Rust, no Cython. It uses only the standard library plus Pydantic.<p>Benchmarks (Barq 4 threads vs FastAPI 4 worker processes):<p>- JSON: Barq 10,114 req/s vs FastAPI 5,665 req/s → Barq +79%<p>- DB query: Barq 9,962 req/s vs FastAPI 1,015 req/s → Barq +881%<p>- CPU bound: Barq 879 req/s vs FastAPI 1,231 req/s → FastAPI +29%</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47185550">https://news.ycombinator.com/item?id=47185550</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 27 Feb 2026 21:01:16 +0000</pubDate><link>https://github.com/grandimam/barq</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=47185550</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47185550</guid></item><item><title><![CDATA[Tell HN: Thoughts on the Future]]></title><description><![CDATA[
<p>Purpose first. Everything else is optional.<p>People who worry about AI taking their jobs often lack ambition, or worse, a sense of mission. Too many engineers tie their identity to a title or a role, mistaking employment for purpose.<p>Building in the age of AI means stepping away from that mindset. It means pursuing things that matter deeply to you, to the people around you, or simply ideas you want to see exist.<p>If your current role supports that, it’s a bonus. If it doesn’t, it’s just income fuel to invest in what you actually care about.<p>The same goes for technology. Engineers cling to stacks the way they cling to roles. If a tool solves your problem, great. If not, use it as learning fuel for something you want to build.<p>Note: these thoughts are personal, rephrased using an LLM.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46942651">https://news.ycombinator.com/item?id=46942651</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 09 Feb 2026 07:45:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46942651</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46942651</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46942651</guid></item><item><title><![CDATA[New comment by grandimam in "Build a Deep Learning Library"]]></title><description><![CDATA[
<p>This is good. It's well positioned for software engineers to understand DL beyond the frameworks.</p>
]]></description><pubDate>Thu, 01 Jan 2026 19:52:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46457405</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46457405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46457405</guid></item><item><title><![CDATA[Ask HN: Would you hire someone who codes only using agents?]]></title><description><![CDATA[

<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46400990">https://news.ycombinator.com/item?id=46400990</a></p>
<p>Points: 1</p>
<p># Comments: 3</p>
]]></description><pubDate>Sat, 27 Dec 2025 11:16:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46400990</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46400990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46400990</guid></item><item><title><![CDATA[ChatGPT Apps]]></title><description><![CDATA[
<p>Article URL: <a href="https://techcrunch.com/2025/12/18/chatgpt-launches-an-app-store-lets-developers-know-its-open-for-business/">https://techcrunch.com/2025/12/18/chatgpt-launches-an-app-store-lets-developers-know-its-open-for-business/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46323375">https://news.ycombinator.com/item?id=46323375</a></p>
<p>Points: 1</p>
<p># Comments: 2</p>
]]></description><pubDate>Fri, 19 Dec 2025 07:59:30 +0000</pubDate><link>https://techcrunch.com/2025/12/18/chatgpt-launches-an-app-store-lets-developers-know-its-open-for-business/</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46323375</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46323375</guid></item><item><title><![CDATA[Journaling and Prompting]]></title><description><![CDATA[
<p>I used Notion for several years for journaling, but I found the cognitive cost of switching into its DSL wasn’t worth it for me. Notion is built on blocks, with things like databases layered on top. Even when I exported my notes to Markdown, the export still reflected Notion’s internal data structure instead of giving me something clean and portable.<p>For example, an inline database ends up as a table with href links to other parts of the document: nice, but not very useful when I want plain text I can actually work with.<p>Meanwhile, I have been doing a lot of prompting, and Markdown makes more sense for my workflow. It is not a journaling tool, but it is simple and widely supported (GitHub, VSCode, etc.), and it eliminated a lot of the context switching that came with dedicated note-taking apps.<p>What I would probably miss is the inline database and other rich content, which I have learned to stop using. Instead, I apply a lot of my prompting techniques to journaling: I use regular tables (flatter and unlinked), split documents more deliberately, and reference them across journals when needed, kind of like having dedicated prompts for each part of a workflow.<p>I also sometimes put YAML frontmatter at the top for metadata and descriptions. That way, if I ever want to run an LLM over my journals, whether for summarizing the year or building semantic search, I am already set up. (Might even turn that into a feature for https://gpt.qalam.dev)<p>I have realized the tool matters less than how I structure my thoughts.</p>
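The frontmatter idea can be sketched with a few lines of Python that split metadata from the entry body before handing it to an LLM. The field names and the deliberately naive key-value parsing here are my own illustration, not a real tool:

```python
# Split YAML-style frontmatter from a Markdown journal entry.
# (Illustrative sketch; a real pipeline would use a YAML parser.)

ENTRY = """\
---
date: 2025-12-14
tags: [journaling, prompting]
summary: Moving from Notion to plain Markdown.
---
Today I flattened my inline databases into regular tables...
"""

def split_frontmatter(text: str) -> tuple[dict, str]:
    # Entries without a leading "---" block have no metadata.
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()  # values kept as raw strings
    return meta, body
```

With the metadata separated out, the body is plain text ready for summarization, and the metadata dict can drive filtering or indexing.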
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46261443">https://news.ycombinator.com/item?id=46261443</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 14 Dec 2025 07:44:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46261443</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46261443</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46261443</guid></item><item><title><![CDATA[Ask HN: What is Fullstack Engineering?]]></title><description><![CDATA[
<p>I recently came across an argument related to this, and it made me rethink my own definition.<p>The way I define full-stack is that it is not limited to frontend and backend work. It’s closer to the hardware-engineer argument: can one person actually build an entire computer? Maybe not every part from scratch, but a capable engineer can assemble the pieces into a working system.<p>By that logic, a full-stack engineer is someone who can pull together everything needed to turn an idea into a product. I measure their skill by how fast and effectively they are able to deliver: design, engineering, requirements, and even a bit of SEO when the product calls for it.<p>Where I separate a full-stack engineer from a product engineer is focus.<p>A full-stack engineer’s focus is almost entirely technical - think optimizing page speed, bundle sizes, etc. A product engineer is maybe 70% technical, but adds an extra 30% of domain thinking - competitor analysis, customer empathy, and product sense. A product engineer is the kind of person you would actually put in front of customers.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46243111">https://news.ycombinator.com/item?id=46243111</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 12 Dec 2025 11:29:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46243111</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46243111</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46243111</guid></item><item><title><![CDATA[Thoughts on Cursor]]></title><description><![CDATA[
<p>I believe Cursor just rolled out two major new features: Debug and Design.<p>I had an understanding of what I wanted from IDEs, but I could not fully articulate it before the launch. Now that it’s here, it makes complete sense.<p>The way I see the future of programming, everything is going to be live: debugging, coding, designing, etc. The idea is not new, but the difference is that now it will be fully autonomous.<p>Recently, I worked on a feature that required redesigning part of our legacy flow, built with Django templates and plain JavaScript for interactivity. In theory, this should not be a difficult task for current models. But they struggled to produce the right output, and I think there are two reasons for that:<p>Design is inherently hard to express purely in text.<p>Models are great at generating new code, but not so great at modifying large, existing codebases.<p>Honestly, the best workflow I found for updating the legacy UI was to operate directly off screenshots. I take screenshots of the existing UI and the expected change, and ask the model to write code that matches that design, given the context of the existing design. Models understand the context much faster this way.<p>With the new Design feature, I imagine this whole process becoming faster, because I can make edits directly in the browser and the model simply codes the expected outcome. It’s what I always wanted: a custom headless Puppeteer running in the background, watching what I am doing, and helping with the design in real time.<p>And then there’s debugging. I have always preferred logs over a traditional debugger. What I have really wanted is something like an ELK parser at runtime: something that just understands my logs as the system runs, and can point out when things drift off the expected path.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46241492">https://news.ycombinator.com/item?id=46241492</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 12 Dec 2025 06:47:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46241492</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46241492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46241492</guid></item><item><title><![CDATA[AI-Native vs. Anti-AI Engineers]]></title><description><![CDATA[
<p>One of the key differences I am seeing between AI-native engineers and anti-AI ones: the idea of "fully understanding" what you ship.<p>Before LLMs, we did not fully understand the libraries we used, the kernels we touched, or the networks we only grasped conceptually. We have been outsourcing intelligence to other engineers, teams, and systems for decades.<p>One possible reason: when we use a library, we can tell ourselves we could read it. With an LLM, that fiction of potential understanding collapses. The real shift I am seeing isn't from "understanding" to "not understanding."<p>It is towards "I understand the boundaries, guarantees, and failure modes of what I'm responsible for." If agentic coding is the future, mastery becomes the ability to steer, constrain, test, and catch failures - not the ability to manually type every line.<p>Full piece:<p>https://open.substack.com/pub/grandimam/p/ai-native-and-anti-ai-engineers?utm_campaign=post-expanded-share&utm_medium=post%20viewer</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46153251">https://news.ycombinator.com/item?id=46153251</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 04 Dec 2025 21:20:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46153251</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46153251</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46153251</guid></item><item><title><![CDATA[Ask HN: What has been your experience with Agentic Coding?]]></title><description><![CDATA[
<p>I have been experimenting more deeply with agentic coding, and it’s made me rethink how I approach building software.<p>One key difference I have noticed is the upfront cost. With agentic coding, I feel a higher upfront cost: I have to think through architecture, constraints, and success criteria before the model even starts generating code. I have to externalize the mental model I normally keep in my head so the AI can operate with it.<p>In “precision coding,” that upfront cost is minimal, but only because I carry most of the complexity mentally. All the design decisions, edge cases, and contextual assumptions live in my head as I write. Tests become more of a final validation step.<p>What I have realized is that agentic coding shifts my cognitive load from on-demand execution to pre-planned execution (I am behaving more like a researcher than a hacker). My role is less about precisely implementing every piece of logic and more about defining the problem space clearly enough that the agent can assemble the solution reliably.<p>Another observation: since the cost of writing code is minimal when agents are delegated to write it, I need to shift context and take up a QA role to evaluate the agent's output.<p>Would love to hear your thoughts.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46125341">https://news.ycombinator.com/item?id=46125341</a></p>
<p>Points: 7</p>
<p># Comments: 7</p>
]]></description><pubDate>Tue, 02 Dec 2025 19:15:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46125341</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46125341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46125341</guid></item><item><title><![CDATA[Show HN: Qalam - a CLI that remembers commands]]></title><description><![CDATA[
<p>I kept running into the same problem as a developer: I forget commands I’ve already figured out.<p>Docker cleanup sequences. Deployments with 15 flags. Test commands that finally worked. Every time, I ended up digging through bash history or Googling. It was wasting mental energy.<p>So I built Qalam, a CLI that actually remembers commands.<p>What it does:<p>- Ask in natural language: “How do I kill the process on port 3000?”<p>- Save commands with meaningful names: “deploy” instead of cryptic abbreviations<p>- Automate workflows: my 5-command morning setup is now one command<p>- Keep everything local: no cloud, no privacy worries<p>- Zero configuration: works immediately<p>I’ve been using it for weeks. Deployments are foolproof. Morning setup is one command. When something breaks, I ask my terminal instead of Googling.<p><a href="https://docs.qalam.dev" rel="nofollow">https://docs.qalam.dev</a><p>I’d be curious what others do to remember complex commands or automate repeated workflows.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46033700">https://news.ycombinator.com/item?id=46033700</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 24 Nov 2025 13:08:02 +0000</pubDate><link>http://docs.qalam.dev/</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=46033700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46033700</guid></item><item><title><![CDATA[Ask HN: Builders vs. Mercenaries – does this distinction make sense?]]></title><description><![CDATA[
<p>I have been noticing two types of engineers on teams I have worked with, and I'm trying to figure out if this is a real pattern or just confirmation bias.<p>- Builders are focused on users and the domain problem. Code is just a means to an end. They'll ship something imperfect if it unblocks a real user need. Ask them to spend time on optimizations that don't affect the user experience? Hard pass.<p>- Mercenaries are focused on the craft itself. They care about clean architecture, performance, elegant abstractions. They'll go deep on technical problems whether or not the business or users actually need it solved right now. The quality of the work matters independent of impact.<p>But I'm not confident I have this framed correctly. A few questions:<p>- Does this distinction resonate with your experience?<p>- Which type are you, and has that changed over your career?<p>- How do you balance these mindsets on a team?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45989643">https://news.ycombinator.com/item?id=45989643</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 20 Nov 2025 06:39:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45989643</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=45989643</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45989643</guid></item><item><title><![CDATA[Ask HN: How to orchestrate multi-agent workflows (beyond one-shot prompts)?]]></title><description><![CDATA[
<p>I am exploring patterns for orchestrating multi-agent systems with LLMs and wondering how others are approaching this.<p>Most examples today rely on in-prompt chaining — e.g., a single call where “Agent A does X, then Agent B uses A’s output,” all within one synchronous prompt. This works, but it doesn’t scale well and mixes orchestration logic with prompt logic.<p>I’m more interested in asynchronous, decoupled orchestration, where:<p>- Agent A runs independently, produces an artifact/state,<p>- and Agent B is invoked later (event- or task-driven) to pick up that output.<p>Curious how people are handling this in practice:<p>- Are you using message queues, event buses, CRON/temporal workflows, serverless functions, or custom schedulers?<p>- How are you persisting and passing state between agents?<p>- Any patterns emerging for error handling, retries, or versioning agent behaviors?<p>- Are you treating LLM “agents” like microservices, or is there a better abstraction?<p>- Would appreciate hearing what architectures or frameworks have worked (or not worked) for you.</p>
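One way to sketch the decoupled pattern described above with nothing but the stdlib: agent A runs independently and publishes an artifact to a task queue, and a worker loop later dispatches it to agent B. The agent functions and handler registry here are invented stand-ins, not real LLM calls; production systems would swap the queue for a broker and persist state between steps:

```python
# Event-driven handoff between two "agents" via a task queue,
# instead of chaining them inside one synchronous prompt.
# (Hypothetical sketch; agent bodies stand in for LLM calls.)
import queue

tasks: queue.Queue = queue.Queue()  # stand-in for a message bus

def agent_a(topic: str) -> None:
    # Agent A produces an artifact and publishes it; it never calls B directly.
    artifact = {"topic": topic, "outline": ["intro", "body", "close"]}
    tasks.put(("agent_b", artifact))

def agent_b(artifact: dict) -> str:
    # Agent B is invoked later, with only the artifact as its input.
    return f"draft of {artifact['topic']} with {len(artifact['outline'])} sections"

HANDLERS = {"agent_b": agent_b}

def run_worker() -> list:
    # Drain the bus; in production this loop runs continuously,
    # with retries and dead-lettering around each handler call.
    results = []
    while not tasks.empty():
        name, payload = tasks.get()
        results.append(HANDLERS[name](payload))
    return results
```

The point of the shape is that orchestration logic (the queue and worker) stays separate from prompt logic (the agent bodies), so either can change independently.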
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45955997">https://news.ycombinator.com/item?id=45955997</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 17 Nov 2025 17:49:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=45955997</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=45955997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45955997</guid></item><item><title><![CDATA[Ask HN: What would an ideal matchmaking platform look like today?]]></title><description><![CDATA[
<p>Most dating and matchmaking apps still use the same model (profiles, photos, swipes, filters), but human relationships and expectations have evolved a lot in the last decade.<p>If you were to design a matchmaking platform from scratch today, what would it look like?<p>How would you handle:<p>- Trust, authenticity, and privacy in an age of AI and deepfakes?
- Cultural and regional diversity without stereotyping?
- Real compatibility beyond surface-level traits?
- Balancing data-driven matching with human intuition?
- Building something that encourages long-term relationships, not just short-term engagement?<p>Curious to hear from people who think about product design, social systems, ethics, and human connection.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45630405">https://news.ycombinator.com/item?id=45630405</a></p>
<p>Points: 7</p>
<p># Comments: 14</p>
]]></description><pubDate>Sat, 18 Oct 2025 21:17:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45630405</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=45630405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45630405</guid></item><item><title><![CDATA[Ask HN: Is it okay to stop chasing expertise?]]></title><description><![CDATA[
<p>I’ve been thinking a lot about the difference between expertise and value. I'm someone who just wants to build products. I learn things like TypeScript or React only to the extent that I need to get something working. I don’t dive deep unless my product demands it.<p>But most of the industry seems to reward broad or deep expertise, knowledge of systems, protocols, or architectures, even when it’s not directly tied to delivering user value. This makes me wonder: am I doing it wrong?<p>It feels like we often judge engineers by how much they know, not by what they’ve shipped or how much impact they’ve had. It creates this pressure to keep learning things that might not ever help with what I’m actually trying to build. Has anyone else struggled with this? Is optimizing exclusively for value a valid path long term?<p>Would love to hear how others think about this.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44586331">https://news.ycombinator.com/item?id=44586331</a></p>
<p>Points: 5</p>
<p># Comments: 7</p>
]]></description><pubDate>Wed, 16 Jul 2025 20:13:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44586331</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=44586331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44586331</guid></item><item><title><![CDATA[New comment by grandimam in "Show HN: I made a JSFiddle-style playground to test and share prompts fast"]]></title><description><![CDATA[
<p>> Then came the pricing. The last quote I got for one of the tools on the market was $6,000/year for a team of 16 people in a use-it-or-loose-it way. For a tool we use maybe 2–3 times per sprint.<p>What tool was this?</p>
]]></description><pubDate>Sun, 13 Jul 2025 08:26:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44548549</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=44548549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44548549</guid></item><item><title><![CDATA[Ask HN: Using AI daily but not seeing productivity gains – is it just me?]]></title><description><![CDATA[

<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44352762">https://news.ycombinator.com/item?id=44352762</a></p>
<p>Points: 23</p>
<p># Comments: 41</p>
]]></description><pubDate>Mon, 23 Jun 2025 05:44:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44352762</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=44352762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44352762</guid></item><item><title><![CDATA[Ask HN: Anyone else lose interest right after proving an idea works?]]></title><description><![CDATA[
<p>I've noticed a recurring pattern in myself: I get excited about an idea (often AI-related lately), prototype it quickly, and once I’ve built the core functionality or proven it works, I completely lose interest. The initial curiosity and momentum vanish, and I find myself asking, “Do I even want to pursue this long term?”<p>It feels like once the challenge or novelty is gone, so is the motivation — even if the idea has potential. I end up with a graveyard of working demos and half-baked side projects.<p>Is this just dopamine-driven behavior? A multipotentialite thing? Or is this more common among builders, especially with tools like AI making the prototype stage so fast?<p>Curious if others experience this and how you manage it — do you force yourself to push through, hand it off, or just accept that exploration is the goal?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44089411">https://news.ycombinator.com/item?id=44089411</a></p>
<p>Points: 6</p>
<p># Comments: 3</p>
]]></description><pubDate>Sun, 25 May 2025 17:40:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44089411</link><dc:creator>grandimam</dc:creator><comments>https://news.ycombinator.com/item?id=44089411</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44089411</guid></item></channel></rss>