<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ngalstyan4</title><link>https://news.ycombinator.com/user?id=ngalstyan4</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 06:18:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ngalstyan4" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ngalstyan4 in "$50 PlanetScale Metal Is GA for Postgres"]]></title><description><![CDATA[
<p>Sounds cool!<p>I'd be curious to know what the underlying AWS EC2 instance is.<p>Is each DB on a dedicated instance?<p>If not, are there per-customer IOPS bounds?</p>
]]></description><pubDate>Mon, 15 Dec 2025 16:17:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46276479</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=46276479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46276479</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Select 1 Touches 5,583 Lines of Postgres Source Code"]]></title><description><![CDATA[
<p>They would need to handle all the translation changes as well, no?<p><<a href="https://github.com/search?q=repo%3Apostgres%2Fpostgres+%22major+version+%25s%2C+server+major+version%22&type=code" rel="nofollow">https://github.com/search?q=repo%3Apostgres%2Fpostgres+%22ma...</a>><p>I agree the code change is simple, but I suspect the task is complex for other reasons.</p>
]]></description><pubDate>Tue, 11 Nov 2025 17:14:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45890015</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=45890015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45890015</guid></item><item><title><![CDATA[Select 1 Touches 5,583 Lines of Postgres Source Code]]></title><description><![CDATA[
<p>Article URL: <a href="https://narekg.me/blog/2025/11/pg-select-1-lines/">https://narekg.me/blog/2025/11/pg-select-1-lines/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45889261">https://news.ycombinator.com/item?id=45889261</a></p>
<p>Points: 3</p>
<p># Comments: 6</p>
]]></description><pubDate>Tue, 11 Nov 2025 16:31:11 +0000</pubDate><link>https://narekg.me/blog/2025/11/pg-select-1-lines/</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=45889261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45889261</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "European.cloud: A Curated Directory of EU-Based Cloud Providers"]]></title><description><![CDATA[
<p>Surprised not to see Ubicloud in there; it provides cloud services on top of various (including European) infrastructure providers.<p><a href="https://www.ubicloud.com/">https://www.ubicloud.com/</a></p>
]]></description><pubDate>Thu, 16 Oct 2025 13:29:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45605118</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=45605118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45605118</guid></item><item><title><![CDATA[Debugging Hetzner: Uncovering failures with powerstat, sensors, and dmidecode]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.ubicloud.com/blog/debugging-hetzner-uncovering-failures-with-powerstat-sensors-and-dmidecode">https://www.ubicloud.com/blog/debugging-hetzner-uncovering-failures-with-powerstat-sensors-and-dmidecode</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43101430">https://news.ycombinator.com/item?id=43101430</a></p>
<p>Points: 345</p>
<p># Comments: 108</p>
]]></description><pubDate>Wed, 19 Feb 2025 12:40:58 +0000</pubDate><link>https://www.ubicloud.com/blog/debugging-hetzner-uncovering-failures-with-powerstat-sensors-and-dmidecode</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=43101430</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43101430</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Launch HN: Promptless (YC W25) – Automatic updates for customer-facing docs"]]></title><description><![CDATA[
<p>This is really cool, congrats on the launch!<p>I am curious how you prevent private data from leaking into the auto-generated public docs. I imagine this problem does not exist for open source projects, but it would become an issue when not everything discussed in a company's private messenger should be used as context for generating docs.</p>
]]></description><pubDate>Tue, 18 Feb 2025 19:45:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=43094126</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=43094126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43094126</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Postgres vs. Pinecone"]]></title><description><![CDATA[
<p>Author here.<p>> I don’t think “get moar ram” is a good response to that particular critique.<p>I do not think the blog post suggested "get more ram" as a response, but happy to clarify if you could share more details!<p>> Indexing in Postgres is legitimately painful<p>Lantern is here to make the process seamless and remove most of the pain for people building LLM/AI applications. Examples:<p>1. We build tools to remove the guesswork of HNSW index sizing. E.g. <a href="https://lantern.dev/blog/calculator">https://lantern.dev/blog/calculator</a><p>2. We analyze typical patterns people use when building LLM apps and suggest better practices. E.g. <a href="https://lantern.dev/blog/async-embedding-tables">https://lantern.dev/blog/async-embedding-tables</a><p>3. We build alerts and triggers into our cloud database that automate the discovery of many issues via heuristics.</p>
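On point 1, the sizing intuition can be sketched with a generic back-of-envelope estimate (my own rough numbers using a common HNSW approximation, not Lantern's actual calculator): an HNSW index stores each vector's payload plus roughly 2*M neighbor links at the base layer.

```python
def hnsw_index_size_gib(n_vectors, dim, m=16, bytes_per_elem=4):
    """Rough HNSW memory estimate: vector payload plus ~2*M
    4-byte neighbor links per vector at layer 0. Upper layers
    add only a few percent, so they are ignored here."""
    per_vector = dim * bytes_per_elem + 2 * m * 4
    return n_vectors * per_vector / 2**30

# e.g. 1M 1536-dim (ada-002-sized) embeddings with M=16:
print(round(hnsw_index_size_gib(1_000_000, 1536), 1))  # ~5.8
```

The estimate grows linearly in both vector count and dimension, which is why an exact calculator mostly needs the HNSW parameters and the element width.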
]]></description><pubDate>Sat, 20 Jul 2024 07:14:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=41014729</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=41014729</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41014729</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Postgres vs. Pinecone"]]></title><description><![CDATA[
<p>Author here. We will benchmark this thoroughly in the future for our vector indexes.<p>But at least anecdotally, it made a ton of difference.<p>We met a <200ms latency budget with Ubicloud NVMes but had to wait seconds for an answer to the same query on GCP persistent disks or local SSDs.</p>
]]></description><pubDate>Sat, 20 Jul 2024 07:02:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=41014685</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=41014685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41014685</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Embeddings are a good starting point for the AI curious app developer"]]></title><description><![CDATA[
<p>I have tried CLIP on my personal photo album collection and it worked really well there - I could write detailed scene descriptions of past road trips, and the photos I had in mind would pop up. The model is probably better suited to everyday photos than to icons.</p>
]]></description><pubDate>Wed, 17 Apr 2024 20:28:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=40069731</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=40069731</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40069731</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Embeddings are a good starting point for the AI curious app developer"]]></title><description><![CDATA[
<p>We provide this functionality in Lantern cloud via our Lantern Extras extension: <<a href="https://github.com/lanterndata/lantern_extras">https://github.com/lanterndata/lantern_extras</a>><p>You can generate CLIP embeddings locally on the DB server via:<p><pre><code>  SELECT abstract,
       introduction,
       figure1,
       clip_text(abstract) AS abstract_ai,
       clip_text(introduction) AS introduction_ai,
       clip_image(figure1) AS figure1_ai
  INTO papers_augmented
  FROM papers;
</code></pre>
Then you can search for embeddings via:<p><pre><code>  SELECT abstract, introduction FROM papers_augmented ORDER BY clip_text(query) <=> abstract_ai LIMIT 10;
</code></pre>
The approach significantly decreases search latency and results in cleaner code.
As an added bonus, EXPLAIN ANALYZE can now tell you the percentage of time spent in embedding generation vs. search.<p>The linked library enables embedding generation for a dozen open source models and proprietary APIs (list here: <<a href="https://lantern.dev/docs/develop/generate">https://lantern.dev/docs/develop/generate</a>>), and adding new ones is really easy.</p>
]]></description><pubDate>Wed, 17 Apr 2024 18:10:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=40068218</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=40068218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40068218</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Why CockroachDB doesn't use EvalPlanQual"]]></title><description><![CDATA[
<p>For similar isolation-level anomalies in real-world applications, check out this SIGMOD '17 paper:<p>ACIDRain: Concurrency-Related Attacks on Database-Backed Web Applications: <a href="http://www.bailis.org/papers/acidrain-sigmod2017.pdf" rel="nofollow">http://www.bailis.org/papers/acidrain-sigmod2017.pdf</a></p>
]]></description><pubDate>Sat, 06 Apr 2024 02:47:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=39949585</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=39949585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39949585</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Greenmask: PostgreSQL Dump and Obfuscation Tool"]]></title><description><![CDATA[
<p>Not sure what the approach of this library is, but couldn't you generate a nonce from a larger alphabet, hash the column values with the nonce `hash(nonce || column)`, and crypto-shred the nonce at the end?<p>Then, during hashing, you just need a constant immutable state, which effectively expands the hash space without incurring the mutable-state overhead of a replacement-strings strategy.</p>
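A minimal sketch of that idea in Python (my own illustration, not Greenmask's implementation):

```python
import hashlib
import secrets

# One random nonce per dump run, held only in memory.
nonce = secrets.token_bytes(32)

def pseudonymize(value: str) -> str:
    """Deterministic within the run, so referential integrity
    across tables is preserved (same input -> same output)."""
    return hashlib.sha256(nonce + value.encode()).hexdigest()

a = pseudonymize("alice@example.com")
assert a == pseudonymize("alice@example.com")  # stable within the run

# Crypto-shredding: discarding the nonce destroys the only link
# between original values and their hashes (short of brute-forcing
# the input space).
nonce = None
```

Because the nonce never touches disk, the anonymized dump carries consistent but unlinkable identifiers once the run finishes.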
]]></description><pubDate>Sat, 17 Feb 2024 20:42:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=39413365</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=39413365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39413365</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Show HN: An open source performance monitoring tool"]]></title><description><![CDATA[
<p>This is cool!<p>Does this only collect logs from the frontend?<p>Or can it also collect backend and DB latency data related to a frontend interaction?</p>
]]></description><pubDate>Fri, 02 Feb 2024 22:30:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=39235329</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=39235329</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39235329</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "90x Faster Than Pgvector – Lantern's HNSW Index Creation Time"]]></title><description><![CDATA[
<p>My understanding is <i>Trusted Language Extensions</i> refer to extensions written in PL/Rust - a Postgres extension mechanism to write user-defined functions and use them in SQL queries.<p>PL/Rust is a more performant and more feature-rich alternative to PL/pgSQL, which is the traditional UDF scripting language for Postgres.<p>Building a vector index (or any index, for that matter) inside Postgres is a more involved process and cannot be done via the UDF interface, be it Rust, C or PL/pgSQL.<p>So, I think even if Lantern were written in Rust, it would not be a Trusted Language Extension under this definition.</p>
]]></description><pubDate>Wed, 03 Jan 2024 00:33:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=38849167</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=38849167</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38849167</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "90x Faster Than Pgvector – Lantern's HNSW Index Creation Time"]]></title><description><![CDATA[
<p>cofounder here.<p>You are right that there are many trade-offs between HNSW and IVFFLAT.<p>E.g. IVFFLAT requires a significant amount of data in the table before the index is created, and assumes the data distribution does not change with additional inserts (since it chooses centroids during the initial creation and never updates them).<p>We have also generally had a harder time getting high recall with IVFFLAT on vectors from embedding models such as ada-002.<p>There are trade-offs, some of which we will explore in later blog posts.<p>This post is about one thing - HNSW index creation time across two systems, at a fixed 99% recall.</p>
]]></description><pubDate>Wed, 03 Jan 2024 00:07:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=38848978</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=38848978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38848978</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Armenia's Deep-Tech Dream"]]></title><description><![CDATA[
<p>It is strange that there is no mention of Ruben Vardanyan, even though the FAST foundation is mentioned.<p>But still cool to read about Armenia's tech sector here.</p>
]]></description><pubDate>Mon, 13 Nov 2023 17:22:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=38252645</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=38252645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38252645</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Show HN: Lantern – a PostgreSQL vector database for building AI applications"]]></title><description><![CDATA[
<p>We have not run microbenchmarks to see which dimension ranges perform best, but those are coming soon! Below is an anecdotal answer:<p>We run our CI/CD benchmarks on 128-dim SIFT vectors. We have some demos using CLIP embeddings (512-dim) and BAAI/bge 768-dimensional embeddings.<p>Generally, smaller vectors allow higher throughput and result in smaller indexes, but the effect on performance is small.
Once we merge the PR implementing vector element casts to 1- and 2-byte floats, the effect on throughput should be even smaller.</p>
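For a concrete sense of scale (my own back-of-envelope, not a Lantern benchmark), raw vector storage is linear in both dimension and element width, which is why narrower casts shrink the footprint:

```python
def raw_vector_storage_mib(n_vectors, dim, bytes_per_elem):
    """Raw vector payload only; index links and metadata excluded."""
    return n_vectors * dim * bytes_per_elem / 2**20

# 1M 768-dim embeddings at different element widths:
for label, width in [("fp32", 4), ("fp16", 2), ("1-byte", 1)]:
    print(label, round(raw_vector_storage_mib(1_000_000, 768, width)))
# fp32 2930, fp16 1465, 1-byte 732
```

Halving the element width halves the payload, so the 2-byte cast alone cuts raw storage in half before any index overhead is counted.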
]]></description><pubDate>Thu, 14 Sep 2023 03:27:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=37504627</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=37504627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37504627</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Show HN: Lantern – a PostgreSQL vector database for building AI applications"]]></title><description><![CDATA[
<p>This sounds like a very useful feature, and we will prioritize it.<p>You’re correct that IVFFLAT would be faster for your use case. However, IVFFLAT’s shortcoming is poor recall, which means less relevant results for your application. We believe that our HNSW implementation (or other indexes) can handle use cases like yours.<p>We currently handle a similar use case by rerunning our index searches with exponentially increasing LIMITs and dropping the results that are not needed. Could you send us an email at support@lantern.dev? We can generate the numbers by this weekend and get back to you with concrete results.<p>By the way – not sure if you saw in our blog post, but if you’re using pgvector in production and switch to Lantern, we’ll help you every single step of the way. It’s very quick, and we’ll also send you some free AirPods Pro at the end of it!</p>
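The rerun-with-growing-LIMIT approach can be sketched generically like this (an illustration of the idea, not our actual implementation):

```python
def filtered_topk(ann_search, predicate, k):
    """Fetch the k nearest neighbors satisfying `predicate` by
    rerunning the ANN search with exponentially growing LIMITs."""
    limit = k
    while True:
        candidates = ann_search(limit)      # top-`limit` by distance
        hits = [c for c in candidates if predicate(c)]
        if len(hits) >= k:
            return hits[:k]                 # enough matches: done
        if len(candidates) < limit:
            return hits                     # index exhausted
        limit *= 2                          # widen the search, retry

# Toy usage: pretend `data` is already ordered by distance.
data = list(range(100))
evens = filtered_topk(lambda lim: data[:lim], lambda x: x % 2 == 0, 5)
print(evens)  # [0, 2, 4, 6, 8]
```

When the predicate is selective, each retry doubles the candidate pool, so the total work stays within a small constant factor of the final successful search.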
]]></description><pubDate>Wed, 13 Sep 2023 21:00:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=37501677</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=37501677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37501677</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Show HN: Lantern – a PostgreSQL vector database for building AI applications"]]></title><description><![CDATA[
<p>>When you say "produced locally", do you mean on the client?<p>Sorry for the confusion. By “produced locally” I meant “produced on your DB server” as opposed to being an API call to a third-party service such as OpenAI or HuggingFace.<p>>(But, like... doing it remotely--on the database server as part of the query plan--frankly seems kind of crazy to me, as it is going to be so slow and add a massive CPU load to what should be an I/O workload. Makes for good demos I bet, but otherwise unusable in a database context.)<p>It seems like you’re worried about these workflows being on the Postgres server, which may lead to performance issues.<p>However, if performance becomes an issue, the functions can be executed on another server. In this approach, whether or not the functions run on the Postgres server, the end user gets a better developer experience, as all the functions they need are available within SQL.<p>>...this frankly shouldn't be part of the same extension<p>We agree. These functions are already in another repository and are not part of the same extension. The repository is here: <a href="https://github.com/lanterndata/lantern_extras">https://github.com/lanterndata/lantern_extras</a></p>
]]></description><pubDate>Wed, 13 Sep 2023 20:36:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=37501419</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=37501419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37501419</guid></item><item><title><![CDATA[New comment by ngalstyan4 in "Show HN: Lantern – a PostgreSQL vector database for building AI applications"]]></title><description><![CDATA[
<p>Our index access method will be called lantern_hnsw if pgvector or any other provider has already taken the hnsw access method name.<p>By the way, we did not create our own vector type; we just use size-enforced real[] arrays to represent embeddings. However, you can use our index with pgvector's vector type.
So, if you already have a table with pgvector's vector column type, you can start using Lantern by just creating an index on the same column.</p>
]]></description><pubDate>Wed, 13 Sep 2023 19:45:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=37500863</link><dc:creator>ngalstyan4</dc:creator><comments>https://news.ycombinator.com/item?id=37500863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37500863</guid></item></channel></rss>