<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vrajat</title><link>https://news.ycombinator.com/user?id=vrajat</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 24 Apr 2026 19:53:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vrajat" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vrajat in "Show HN: Honker – Postgres NOTIFY/LISTEN Semantics for SQLite"]]></title><description><![CDATA[
<p>I wrote a simple queue implementation after reading the Turbopuffer blog post on queues on S3. In my implementation, I wrote a complete sqlite file to S3 on every enqueue/dequeue/ack, using the previous ETag for compare-and-set.<p>The experiment and back-of-the-envelope calculations show that it can only support ~5 jobs/sec. The only major lever for increasing throughput is to increase the size of group commits.<p>I don't think shipping CDC instead of whole sqlite files would change the calculations, since the number of writes is what mattered in this experiment.<p>So yes, the minimum of 3 writes per job means it can only support very low throughput.</p>
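The compare-and-set loop above can be sketched as follows. This is a minimal, runnable illustration, not the author's actual code: the FakeS3 class and the JSON body are assumptions standing in for a real S3 bucket (which exposes the same semantics via the If-Match header on PutObject) and the serialized sqlite file.

```python
import json
import uuid

class FakeS3:
    """In-memory stand-in for one S3 object with conditional writes."""
    def __init__(self):
        self.body = b"[]"           # serialized queue state (JSON instead of a sqlite file)
        self.etag = uuid.uuid4().hex

    def get(self):
        return self.body, self.etag

    def put_if_match(self, body, expected_etag):
        # Optimistic CAS: succeed only if nobody wrote since our last read.
        if expected_etag != self.etag:
            return False
        self.body = body
        self.etag = uuid.uuid4().hex
        return True

def enqueue(store, job, max_retries=10):
    """Read the whole state, append the job, write it back with If-Match.

    Each successful operation costs one GET plus one conditional PUT of the
    entire file, which is why per-job throughput stays low unless many jobs
    are batched into a single group commit.
    """
    for _ in range(max_retries):
        body, etag = store.get()
        queue = json.loads(body)
        queue.append(job)
        if store.put_if_match(json.dumps(queue).encode(), etag):
            return True
    return False  # lost the CAS race max_retries times
```

A contended writer that read a stale ETag simply fails the conditional PUT and retries from a fresh read; batching N jobs into one read-modify-write cycle multiplies throughput by N without adding writes, which matches the group-commit observation above.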
]]></description><pubDate>Fri, 24 Apr 2026 09:07:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47887621</link><dc:creator>vrajat</dc:creator><comments>https://news.ycombinator.com/item?id=47887621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47887621</guid></item><item><title><![CDATA[New comment by vrajat in "Ask HN: What Are You Working On? (December 2025)"]]></title><description><![CDATA[
<p>I’m building an open-source project to reduce GitHub Actions CI costs by running jobs on self-hosted runners on owned hardware.
The motivation is to fill the gap between local workflow execution by projects like <a href="https://github.com/nektos/act" rel="nofollow">https://github.com/nektos/act</a> and self-hosted runner setups on the cloud.
My team’s requirements are simple and we don’t need the full feature set, so we hope to keep ops simple and save costs. Any efficiency boost from caching will be a bonus.</p>
]]></description><pubDate>Mon, 15 Dec 2025 06:00:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46270946</link><dc:creator>vrajat</dc:creator><comments>https://news.ycombinator.com/item?id=46270946</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46270946</guid></item><item><title><![CDATA[New comment by vrajat in "Ask HN: What is your blog and why should I read it?"]]></title><description><![CDATA[
<p>I write about the technology behind data governance, privacy and security at <a href="https://dbadminnews.substack.com/" rel="nofollow">https://dbadminnews.substack.com/</a><p>I started the newsletter because information in this space is either published by commercial vendors (and biased) or buried in research papers that are not easily discoverable.</p>
]]></description><pubDate>Tue, 07 Apr 2020 14:44:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=22803633</link><dc:creator>vrajat</dc:creator><comments>https://news.ycombinator.com/item?id=22803633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=22803633</guid></item><item><title><![CDATA[New comment by vrajat in "Ask HN: What interesting problems are you working on?"]]></title><description><![CDATA[
<p>I am creating a couple of open-source tools for data governance. The first is a data catalog (1) with tags for PII data. The second is a data lineage application (2). The goal is to keep these as simple as possible to install and use.<p>IMO the current options are too complicated or too expensive, and suited only to the largest companies. I cannot quickly hack together a simple application for data discovery or usage statistics, so I am building a dead-simple data catalog that I can reuse. The data lineage app is the first application built on it.<p>(1) <a href="https://github.com/tokern/piicatcher" rel="nofollow">https://github.com/tokern/piicatcher</a>
(2) <a href="https://github.com/tokern/lineage" rel="nofollow">https://github.com/tokern/lineage</a></p>
]]></description><pubDate>Wed, 29 Jan 2020 03:50:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=22177185</link><dc:creator>vrajat</dc:creator><comments>https://news.ycombinator.com/item?id=22177185</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=22177185</guid></item></channel></rss>