<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: 2bsinha</title><link>https://news.ycombinator.com/user?id=2bsinha</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 07:24:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=2bsinha" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by 2bsinha in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Absolutely loved the idea of Mockingjay. I was working on a similar kind of project: it let users upload files and photos with end-to-end encryption and anti-sniffing protection, either zipping them before upload or working BitTorrent-style, splitting each file into multiple chunks and storing them on the server in batches.</p>
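A minimal sketch of the BitTorrent-style chunking mentioned above, using only the standard library (chunk size and all helper names are my own inventions; the real encryption and upload steps are omitted):

```python
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real systems use larger pieces

def split_into_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Split a payload into fixed-size pieces, BitTorrent-style."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def piece_hashes(chunks):
    """SHA-256 per piece, so each uploaded batch can be verified independently."""
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def reassemble(chunks, hashes):
    """Verify every piece against its recorded hash before joining."""
    for c, h in zip(chunks, hashes):
        if hashlib.sha256(c).hexdigest() != h:
            raise ValueError("corrupted piece detected")
    return b"".join(chunks)

payload = b"hello chunked upload"
chunks = split_into_chunks(payload)
hashes = piece_hashes(chunks)
assert reassemble(chunks, hashes) == payload
```

Per-piece hashing is what lets batches be stored and fetched out of order while still detecting tampering on reassembly.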
]]></description><pubDate>Tue, 10 Feb 2026 02:23:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46954545</link><dc:creator>2bsinha</dc:creator><comments>https://news.ycombinator.com/item?id=46954545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46954545</guid></item><item><title><![CDATA[New comment by 2bsinha in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Are AI failures really model problems, or governance problems?</p><p>Over the past year, we’ve seen AI systems hallucinate in courts, leak internal prompts, get manipulated by praise, or make decisions they were never meant to make.</p><p>Most discussions focus on:</p><ul><li>better models</li><li>alignment</li><li>prompt design</li></ul><p>But I’m starting to think many of these failures aren’t intelligence issues at all. They’re governance issues.</p><p>In most real systems, we separate:</p><ul><li>capability from permission</li><li>intelligence from authority</li><li>generation from action</li></ul><p>AI systems often skip this separation: they let agents act by default, then try to clean up afterward with filters.</p><p>Curious how others here think about:</p><ul><li>eligibility checks before AI actions</li><li>graduated authority for agents</li><li>limiting influence rather than outputs</li><li>system-level governance outside the model</li></ul><p>Is anyone building or experimenting with this kind of control layer?</p>
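One way to make the "eligibility checks before action" idea concrete is a policy gate that sits between what an agent proposes and what actually executes. This is a minimal sketch under my own assumptions (all class and field names are illustrative, not an existing API):

```python
# Hypothetical control layer: the agent generates proposals freely, but a
# separate policy gate decides whether each proposal may execute.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: int  # 0 = read-only, higher = more consequential

class PolicyGate:
    """Separates capability (what the agent can generate) from
    permission (what it is allowed to do)."""

    def __init__(self, max_risk: int):
        self.max_risk = max_risk  # graduated authority for this agent

    def check(self, action: Action) -> bool:
        # The eligibility check happens *before* the action runs,
        # not as an after-the-fact output filter.
        return action.risk <= self.max_risk

def execute(action: Action, gate: PolicyGate) -> str:
    if not gate.check(action):
        return f"denied: {action.name} exceeds authority"
    return f"executed: {action.name}"

gate = PolicyGate(max_risk=1)
print(execute(Action("read_docs", risk=0), gate))   # allowed
print(execute(Action("send_email", risk=2), gate))  # blocked
```

The key design choice is that the gate lives outside the model: raising an agent's authority is a deliberate configuration change, not something the model can talk its way into.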
]]></description><pubDate>Tue, 10 Feb 2026 01:59:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46954393</link><dc:creator>2bsinha</dc:creator><comments>https://news.ycombinator.com/item?id=46954393</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46954393</guid></item></channel></rss>