<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sramam</title><link>https://news.ycombinator.com/user?id=sramam</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 26 Apr 2026 06:09:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sramam" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sramam in "AI is killing B2B SaaS"]]></title><description><![CDATA[
<p>These examples are going to be lagging indicators of the underlying sentiment.<p>Just because it cannot be done today doesn't mean there isn't a real appetite in large enterprises to do exactly this.<p>Without naming names, I know of at least one public company with a real hunger for exactly this eventuality.</p>
]]></description><pubDate>Wed, 04 Feb 2026 21:44:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46892245</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=46892245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46892245</guid></item><item><title><![CDATA[New comment by sramam in "Will AIs take all our jobs and end human history, or not? (2023)"]]></title><description><![CDATA[
<p>What fraction of the remaining population would be able to pay for these services?</p>
]]></description><pubDate>Wed, 28 Jan 2026 17:49:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46798906</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=46798906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46798906</guid></item><item><title><![CDATA[New comment by sramam in "Show HN: Why write code if the LLM can just do the thing? (web app experiment)"]]></title><description><![CDATA[
<p>I work in enterprise IT and sometimes wonder if we should add the equivalent energy calculations of human effort - both productive and unproductive - that underlie these "output/cost" comparisons.<p>I realize it sounds inhuman, but so is working in enterprise IT! :)</p>
]]></description><pubDate>Sat, 01 Nov 2025 20:56:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45785266</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=45785266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45785266</guid></item><item><title><![CDATA[New comment by sramam in "DeepSeek OCR"]]></title><description><![CDATA[
<p>Interesting - have you tried sending the image and 'hallucinated' text together to a review LLM to fix mistakes?<p>I don't have a use case where 100s or 1000s of hand-written notes have to be transcribed. I have only done this with whiteboard discussion snapshots, and it has worked really well.</p>
]]></description><pubDate>Mon, 20 Oct 2025 09:01:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45641563</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=45641563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45641563</guid></item><item><title><![CDATA[New comment by sramam in "Survey: a third of senior developers say over half their code is AI-generated"]]></title><description><![CDATA[
<p>I think there is a difference between type-system or Language Server completions and AI-generated completions.<p>When the AI tab completion fills in full functions based on the function definition you have half typed, or completes a full test case the moment you start typing - mock data values and all - that just feels mind-reading magical.</p>
]]></description><pubDate>Mon, 01 Sep 2025 00:53:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45088436</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=45088436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45088436</guid></item><item><title><![CDATA[Things I Learned About Information Retrieval in Two Years at a Vector DB Co]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.leoniemonigatti.com/blog/what_i_learned.html">https://www.leoniemonigatti.com/blog/what_i_learned.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44489546">https://news.ycombinator.com/item?id=44489546</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 07 Jul 2025 12:13:51 +0000</pubDate><link>https://www.leoniemonigatti.com/blog/what_i_learned.html</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=44489546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44489546</guid></item><item><title><![CDATA[New comment by sramam in "A flat pricing subscription for Claude Code"]]></title><description><![CDATA[
<p>Aren't the insufficiencies of the LLMs a temporary condition?<p>And as with any automation, there will be a select few who understand its inner workings, and a vast majority that will enjoy/suffer the benefits.</p>
]]></description><pubDate>Fri, 09 May 2025 09:18:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43935099</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=43935099</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43935099</guid></item><item><title><![CDATA[New comment by sramam in "Claude's system prompt is over 24k tokens with tools"]]></title><description><![CDATA[
<p>Do tools like Cursor get a special pass? Or do they do some magic?<p>I'm always amazed at how well they deal with diffs, especially when the response jank clearly points to a "... + a change", and Cursor maps it back to a proper diff.</p>
]]></description><pubDate>Wed, 07 May 2025 01:18:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43911296</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=43911296</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43911296</guid></item><item><title><![CDATA[New comment by sramam in "Legged Locomotion Meets Skateboarding"]]></title><description><![CDATA[
<p>I know very little in this field, but does this mean the LED color is serving as debug/log messages for the training process? It sure seems so to my naive reading, and it seems so clever.<p><pre><code>    We use different LED lights to indicate transitions between dynamic modes
    in the automata. Similar to segmentation techniques in computer vision, 
    the learned hybrid modes can help us analyze motion patterns more 
    systematically, improve interpretability in decision-making, and refine
    control strategies for enhanced adaptability.</code></pre></p>
]]></description><pubDate>Sat, 22 Mar 2025 12:36:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43445309</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=43445309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43445309</guid></item><item><title><![CDATA[New comment by sramam in "Smallpond – A lightweight data processing framework built on DuckDB and 3FS"]]></title><description><![CDATA[
<p>This is so funny!<p>However, it can't even be called hallucinating. Imagine the incident "postmortem":<p><pre><code>    But the AI was trained on White House press briefings
</code></pre>
Made my day...</p>
]]></description><pubDate>Sun, 02 Mar 2025 20:00:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=43234410</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=43234410</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43234410</guid></item><item><title><![CDATA[New comment by sramam in "Why LLMs still have problems with OCR"]]></title><description><![CDATA[
<p>Have you looked at <a href="https://moondream.ai/" rel="nofollow">https://moondream.ai/</a>?</p>
]]></description><pubDate>Sat, 08 Feb 2025 11:56:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=42982344</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42982344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42982344</guid></item><item><title><![CDATA[New comment by sramam in "Chat is a bad UI pattern for development tools"]]></title><description><![CDATA[
<p>Interesting take. Completely agree that a product requirements document is a good mental model for system description. However, aren't bug reports + PRs approximating a chat interface?</p>
]]></description><pubDate>Tue, 04 Feb 2025 21:21:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=42938917</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42938917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42938917</guid></item><item><title><![CDATA[New comment by sramam in "Ask HN: What is interviewing like now with everyone using AI?"]]></title><description><![CDATA[
<p>That is a fair point.<p>This was the final technical screen, so definitely something worth doing in my case.<p>The reason I posted a reply is that there is a lot of negativity around AI in the hiring process. This was an excellent example of using AI to the benefit of all parties.<p>Instead of nit-picking stylistic things in a smaller code sample, one can nit-pick the implemented complexity. I think that is a higher-quality signal.</p>
]]></description><pubDate>Mon, 03 Feb 2025 07:02:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=42915702</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42915702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42915702</guid></item><item><title><![CDATA[New comment by sramam in "Ask HN: What is interviewing like now with everyone using AI?"]]></title><description><![CDATA[
<p>I recently completed a take-home assignment with the following instructions:<p><instructions><p>This project is designed to evaluate your ability to:<p><pre><code>  - Deconstruct complex problems into actionable steps.
  - Quickly explore and adopt new frameworks.
  - Implement a small but impactful proof of concept (PoC).
  - Demonstrate coding craftsmanship through clean, well-architected code.
</code></pre>
We estimate this project will take approximately 5–7 hours. If you find that it requires more time, let us know so we can adjust the scope.<p>Feel free to use any tools, libraries, frameworks, or LLMs during this exercise. Also, you’re welcome to reach out to us at any time with questions or for clarification.<p></instructions><p>I used LLM-as-a-junior-dev to generate 95+% of the code and documentation. I'm just an average programmer, but I tried to set a bar such that, if I were on the other side of the table, I'd hire anyone who demonstrated the quality of the output submitted.<p><pre><code>  - The 5-7 hour estimate was exceeded (however, I was the first one through this exercise).
  - IMHO the quality of the submission could NOT have been met in less time.
  - They had 3 tasks/projects:
     - a data science project, 
     - a CLI based project and
     - a web app
  - They wanted each to be done in a different language. 
  - I submitted my solution in under 38 hours of receiving the assignment.
  - In any other world, the intensity of this exercise would cause a panic-attack/burn-out. 
  - I slept well (2 nights of sleep), took care of family responsibilities and felt good enough to attack the next work-day.
</code></pre>
I've been on both sides of the table in many interviews.<p>This was by far the most fun, and one to replicate every chance I get.<p>[EDITS]: Formatting and typos.</p>
]]></description><pubDate>Mon, 03 Feb 2025 02:22:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42914242</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42914242</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42914242</guid></item><item><title><![CDATA[New comment by sramam in "Show HN: libmodulor – An opinionated TS library to build multi-platform apps"]]></title><description><![CDATA[
<p>vramework looks really well thought out. Is it in use by anyone yet?</p>
]]></description><pubDate>Thu, 23 Jan 2025 16:50:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=42805605</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42805605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42805605</guid></item><item><title><![CDATA[New comment by sramam in "Solving the first 100 Project Euler problems using 100 languages"]]></title><description><![CDATA[
<p>> using a different--and, ideally, new to me--language for each problem<p>Perhaps this is why?</p>
]]></description><pubDate>Thu, 16 Jan 2025 22:35:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=42731775</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42731775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42731775</guid></item><item><title><![CDATA[New comment by sramam in "Things we learned about LLMs in 2024"]]></title><description><![CDATA[
<p>I concur that asking devs how they use AI is a great idea.<p>Recently, I shared a code base with a junior dev and she was surprised by the speed and sophistication of the code. The LLM did 80+% of the "coding".<p>What was telling was that, as she was grokking the code (to help with the ~20%), she was surprised at the quality of the code - her own use of the LLM did not yield code of similar quality.<p>I find that the more domain awareness one brings to the table, the better the output is. Basically, the clearer one's vision of the end-state, the better the output.<p>One other positive side-effect of using "LLMs as a junior dev" for me has been that my ambitions are greater. I want it all - better code, more sophisticated capabilities even for relatively unimportant projects, documentation, tests, debug-ability. And once the basic structure is in place, many a time it is trivial to get the rest.<p>It's never 100%, but even with 80+%, I am faster than ever before, deliver better-quality code, and can switch domains multiple times a week and never feel drained.<p>Sharing best AI hacks within a team will have the same effect as code reviews do in ensuring consistency. Perhaps an "LLM chat review", especially when something particularly novel was accomplished!</p>
]]></description><pubDate>Tue, 31 Dec 2024 23:26:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=42562636</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42562636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42562636</guid></item><item><title><![CDATA[New comment by sramam in "Show HN: Cut the crap – remove AI bullshit from websites"]]></title><description><![CDATA[
<p>Isn't this shortsighted, in the sense that it removes all incentive for the creators to create?<p>A pre-click quality signal would be more interesting and fairer, I imagine. Though I don't know how one could build a solution that is not game-able.</p>
]]></description><pubDate>Sun, 08 Dec 2024 14:22:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=42357305</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42357305</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42357305</guid></item><item><title><![CDATA[New comment by sramam in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>Very interesting. Thank you for getting into the details.
Do you chunk the text that goes into the BM25 index?
For the hypothetical answer, do you also prompt for "chunk size" responses?</p>
]]></description><pubDate>Mon, 18 Nov 2024 23:09:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42178169</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=42178169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42178169</guid></item><item><title><![CDATA[New comment by sramam in "Founder Mode"]]></title><description><![CDATA[
<p>In the enterprise, the cost of failure to one's career/reputation is unreasonably high. pg's reference to "most skillful liars in the world" stuck out to me.<p>The extreme conservatism employed by managers to prevent failure can only be summed up as "success at any cost". The consequence is decisions that spread the pain far and wide.<p>Unfortunately, these managers are not held accountable for these consequences.<p>It's no wonder that solutions take longer, cost more, and are sub-optimal at almost every level. Furthermore, they are very painful for the people who have to suffer these solutions.
But hey, some unrelated manager-chain can claim success.<p>The worst of it is, these managers rinse-and-repeat at their next gig!</p>
]]></description><pubDate>Mon, 02 Sep 2024 02:58:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=41422256</link><dc:creator>sramam</dc:creator><comments>https://news.ycombinator.com/item?id=41422256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41422256</guid></item></channel></rss>