<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: estreeper</title><link>https://news.ycombinator.com/user?id=estreeper</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 00:42:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=estreeper" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by estreeper in "Failed Soviet Venus lander Kosmos 482 crashes to Earth after 53 years in orbit"]]></title><description><![CDATA[
<p>It wouldn’t be HN if your joke wasn’t met with pedantry, so I’ll mention that the heat and pressure at the surface mean the atmosphere is a supercritical fluid of 96.5% carbon dioxide and 3.5% nitrogen.<p>Buyers should have all the facts.</p>
]]></description><pubDate>Wed, 14 May 2025 01:54:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43979906</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=43979906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43979906</guid></item><item><title><![CDATA[New comment by estreeper in "Llama.vim – Local LLM-assisted text completion"]]></title><description><![CDATA[
<p>Very exciting - I'm a long-time vim user but most of my coworkers use VSCode, and I've been wanting to try out in-editor completion tools like this.<p>After using it for a couple hours (on Elixir code) with Qwen2.5-Coder-3B and no attempts to customize it, this checks a lot of boxes for me:<p><pre><code>  - I pretty much want fancy autocomplete: filling in obvious things and saving my fingers the work, and these suggestions are pretty good
  - the default keybindings work for me, I like that I can keep current line or multi-line suggestions
  - no concerns around sending code off to a third-party
  - works offline when I'm traveling
  - it's fast!
</code></pre>
So that I don't need to remember how to run the server, I'll probably set up a script that checks whether it's running, starts it in the background if not, and then launches vim, and alias vim to that script. I looked in the help documents but didn't see a way to disable the "stats" text shown after suggestions, though I'm not sure it will bother me that much.</p>
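A minimal sketch of that check-and-start idea (the `llama-server` command and the port 8012 endpoint are assumptions; adjust both to your actual setup):

```python
# Sketch of a "start the server if needed" helper for a vim-launching wrapper.
# The server command and port below are assumptions; adjust to your setup.
import socket
import subprocess

def server_up(host: str, port: int) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=0.5):
            return True
    except OSError:
        return False

def ensure_server(host: str = "127.0.0.1",
                  port: int = 8012,  # assumed completion-endpoint port
                  cmd: tuple = ("llama-server", "--port", "8012")):
    """Start cmd in the background unless the server is already reachable.

    Returns the Popen handle if a server was started, else None.
    """
    if server_up(host, port):
        return None
    return subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
```

A thin wrapper script that calls `ensure_server()` and then `os.execvp("vim", ["vim", *sys.argv[1:]])`, aliased to `vim` in your shell rc, would cover the rest.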
]]></description><pubDate>Fri, 24 Jan 2025 05:56:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=42810842</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=42810842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42810842</guid></item><item><title><![CDATA[New comment by estreeper in "Llama.vim – Local LLM-assisted text completion"]]></title><description><![CDATA[
<p>This is a tough question to answer, because it depends a lot on what you want to do! One way to approach it may be to look at what models you want to run and check the amount of VRAM they need. A back-of-the-napkin method taken from here[0] is:<p><pre><code>    VRAM (GB) = 1.2 * number of parameters (in billions) * bits per parameter / 8
</code></pre>
The 1.2 is just an estimation factor to account for the VRAM needed for things that <i>aren't</i> model parameters.<p>Because quantization is often nearly free in terms of output quality, you should usually look for quantized versions. For example, Llama 3.2 uses 16-bit parameters but has a 4-bit quantized version, and as the formula above shows, that lets you run a model 4x larger in the same amount of VRAM.<p>Having enough VRAM will allow you to <i>run</i> a model, but performance depends on a lot of other factors. For a much deeper dive into how all of this works, along with performance-per-dollar recommendations (though from last year!), Tim Dettmers wrote this excellent article: <a href="https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/" rel="nofollow">https://timdettmers.com/2023/01/30/which-gpu-for-deep-learni...</a><p>Worth mentioning for the benefit of those who don't want to buy a GPU: there are also models which have been converted to run on CPU.<p>[0] <a href="https://blog.runpod.io/understanding-vram-and-how-much-your-llm-needs/" rel="nofollow">https://blog.runpod.io/understanding-vram-and-how-much-your-...</a></p>
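The formula can be sanity-checked in a few lines (a sketch; the 1.2 overhead factor is the linked article's rule of thumb, not a hard number, and the 7B model here is just a hypothetical):

```python
def vram_gb(params_billions: float, bits_per_param: float,
            overhead: float = 1.2) -> float:
    """Back-of-the-napkin estimate: overhead * params (billions) * bits / 8."""
    return overhead * params_billions * bits_per_param / 8

# A hypothetical 7B model, full 16-bit weights vs. a 4-bit quantization:
full_gb = vram_gb(7, 16)   # ~16.8 GB
quant_gb = vram_gb(7, 4)   # ~4.2 GB, i.e. the 4x headroom mentioned above
```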
]]></description><pubDate>Fri, 24 Jan 2025 05:27:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42810753</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=42810753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42810753</guid></item><item><title><![CDATA[New comment by estreeper in "Interview with Jeff Atwood, Co-Founder of Stack Overflow"]]></title><description><![CDATA[
<p>Total healthcare spending per capita includes <i>insurance premiums</i> too though, which average $8,951 for single coverage and $25,572 for family coverage[0], and need to be added to that ~$1,400 to get the total cost. Some amount of this may be paid by employers, but it is still money that could otherwise go to you.<p>[0] <a href="https://www.kff.org/report-section/ehbs-2024-section-1-cost-of-health-insurance/" rel="nofollow">https://www.kff.org/report-section/ehbs-2024-section-1-cost-...</a></p>
]]></description><pubDate>Thu, 23 Jan 2025 02:11:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=42799868</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=42799868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42799868</guid></item><item><title><![CDATA[New comment by estreeper in "PostgreSQL Anonymizer"]]></title><description><![CDATA[
<p>A nice tradeoff in many cases is to have a separate schema rather than a separate database, which allows preserving referential integrity and using the database’s RBAC to restrict access to the schemas. This also means things like cascading deletes can still work.</p>
]]></description><pubDate>Sat, 18 Jan 2025 01:06:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42744848</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=42744848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42744848</guid></item><item><title><![CDATA[New comment by estreeper in "Why Canada Should Join the EU"]]></title><description><![CDATA[
<p>> it’s been a net negative for taxpayers<p>This analysis[1] from the Cato Institute found the opposite: “immigrants pay more in taxes than they consume in benefits”.<p>[1] <a href="https://www.cato.org/blog/fiscal-impact-immigration-united-states" rel="nofollow">https://www.cato.org/blog/fiscal-impact-immigration-united-s...</a></p>
]]></description><pubDate>Sat, 04 Jan 2025 01:10:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42591286</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=42591286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42591286</guid></item><item><title><![CDATA[New comment by estreeper in "Phoenix LiveView 1.0.0 is here"]]></title><description><![CDATA[
<p>We've built many production apps using LiveView. It has some limitations inherent to its design, namely the need for a semi-reliable WebSocket connection to use the app effectively, but with this tradeoff come a number of advantages:<p><pre><code>  - code generation makes for an extremely productive experience; standing up an actually-useful application is very fast
  - Elixir is a great language, especially for the web, and using it to render the frontend feels like having the full power of the language plus the simplicity of HTML (with little/no writing JavaScript)
  - it's extremely efficient since only tiny changes are sent over the WebSocket when data is updated on the server
  - you're already using WebSockets, so adding any kind of real-time functionality is very easy (chat, notifications, game state)
</code></pre>
Because of the separation of concerns by convention (i.e. keeping business logic in Contexts), it's also a very viable pathway to build a webapp using LiveView first, and serve an API once you need other types of clients (native apps, API consumers) with minimal changes. Ecto is also great to use for validations, and having that available for "frontend" code is a pleasure. It's also great to be able to have backend <i>and</i> frontend tests in Elixir.<p>We've hit some bugs and gotchas over the years leading up to this 1.0 release, but it has long felt like a stable, well-built library that keeps our codebases simple and maintainable, which lets you move fast.<p>Congratulations to Chris, Jose, and all the other wonderful contributors!</p>
]]></description><pubDate>Tue, 03 Dec 2024 23:44:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=42312982</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=42312982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42312982</guid></item><item><title><![CDATA[New comment by estreeper in "The average age of U.S. homebuyers jumps to 56"]]></title><description><![CDATA[
<p>In the United States, federal tax applies when the estate is valued over $13.6M for 2024[1], though as you mention spousal transfers and other methods can be used for tax avoidance as well.<p>1. <a href="https://www.irs.gov/businesses/small-businesses-self-employed/frequently-asked-questions-on-estate-taxes" rel="nofollow">https://www.irs.gov/businesses/small-businesses-self-employe...</a></p>
]]></description><pubDate>Tue, 05 Nov 2024 07:31:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=42049397</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=42049397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42049397</guid></item><item><title><![CDATA[New comment by estreeper in "Japan’s Temple-Builder Kongō Gumi, Has Survived Nearly 1,500 Years"]]></title><description><![CDATA[
<p>Not to mention various other wars and random fires, such as the Ōnin War 応仁の乱 in the 1400s, a civil war between many feudal lords, which destroyed much of Kyoto among other areas.<p>The world's oldest extant wooden structure is the Kondō (main hall) of the temple Hōryū-ji 法隆寺 in Ikaruga, in Nara Prefecture, Japan. It was initially built in 607 but burned down completely, reportedly after a lightning strike, in 670. It was subsequently rebuilt, and nearly burned down again by accident in 1949 [1].<p>It's interesting to contemplate how, across these timescales, war, disasters, and accidents make it so difficult for structures to survive.<p>[1] <a href="https://en.wikipedia.org/wiki/H%C5%8Dry%C5%AB-ji" rel="nofollow">https://en.wikipedia.org/wiki/H%C5%8Dry%C5%AB-ji</a></p>
]]></description><pubDate>Mon, 02 Sep 2024 08:35:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=41423651</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=41423651</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41423651</guid></item><item><title><![CDATA[What Is USPS General Delivery?]]></title><description><![CDATA[
<p>Article URL: <a href="https://faq.usps.com/s/article/What-is-General-Delivery">https://faq.usps.com/s/article/What-is-General-Delivery</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40846656">https://news.ycombinator.com/item?id=40846656</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 01 Jul 2024 15:21:45 +0000</pubDate><link>https://faq.usps.com/s/article/What-is-General-Delivery</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=40846656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40846656</guid></item><item><title><![CDATA[New comment by estreeper in "Armor from Mycenaean Greece turns out to have been effective"]]></title><description><![CDATA[
<p>Here’s a short video of this armor in action: <a href="https://youtu.be/rm2ZR25xU8M?si=6HdtRO8cFxB5HO8l" rel="nofollow">https://youtu.be/rm2ZR25xU8M?si=6HdtRO8cFxB5HO8l</a></p>
]]></description><pubDate>Fri, 31 May 2024 23:44:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=40541411</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=40541411</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40541411</guid></item><item><title><![CDATA[Robot dogs with AI-aimed rifles undergo US Marines evaluation]]></title><description><![CDATA[
<p>Article URL: <a href="https://arstechnica.com/gadgets/2024/05/robot-dogs-armed-with-ai-targeting-rifles-undergo-us-marines-special-ops-evaluation/">https://arstechnica.com/gadgets/2024/05/robot-dogs-armed-with-ai-targeting-rifles-undergo-us-marines-special-ops-evaluation/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40324323">https://news.ycombinator.com/item?id=40324323</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 10 May 2024 22:22:47 +0000</pubDate><link>https://arstechnica.com/gadgets/2024/05/robot-dogs-armed-with-ai-targeting-rifles-undergo-us-marines-special-ops-evaluation/</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=40324323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40324323</guid></item><item><title><![CDATA[New comment by estreeper in "SentenceTransformers: Python framework for sentence, text and image embeddings"]]></title><description><![CDATA[
<p>I'm curious how people are handling multi-lingual embeddings.<p>I've found LASER[1], which pioneered the idea of embedding all languages in the same vector space, though it's a bit harder to use than models available through SentenceTransformers. LASER2 stuck with this approach, but LASER3 switched to language-specific models. However, I haven't found benchmarks for these models, and they were released about 2 years ago.<p>Another alternative would be to translate everything before embedding, which would introduce some amount of error, though maybe it wouldn't be significant.<p>1. <a href="https://github.com/facebookresearch/LASER">https://github.com/facebookresearch/LASER</a></p>
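For anyone new to the shared-vector-space idea: it reduces cross-lingual matching to nearest-neighbor search under cosine similarity. A toy sketch (the vectors below are made-up placeholders standing in for real model outputs, not actual embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors: a multilingual model in a shared space would map
# "dog" (en) and "chien" (fr) to nearby points, and "car" far from both.
emb = {
    "dog (en)":   [0.90, 0.10, 0.05],
    "chien (fr)": [0.85, 0.15, 0.05],
    "car (en)":   [0.05, 0.20, 0.95],
}
# cosine(emb["dog (en)"], emb["chien (fr)"]) is high, while
# cosine(emb["dog (en)"], emb["car (en)"]) is low, so retrieval works
# across languages without an explicit translation step.
```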
]]></description><pubDate>Sun, 07 Apr 2024 15:50:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=39961548</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=39961548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39961548</guid></item><item><title><![CDATA[New comment by estreeper in "SentenceTransformers: Python framework for sentence, text and image embeddings"]]></title><description><![CDATA[
<p>Just to add to this, a great resource is the Massive Text Embedding Benchmark (MTEB) leaderboard, which you can use to find good models to evaluate. Many open models that work with SentenceTransformers outperform e.g. OpenAI's text-embedding-ada-002, currently ranked #46 for retrieval.<p><a href="https://huggingface.co/spaces/mteb/leaderboard" rel="nofollow">https://huggingface.co/spaces/mteb/leaderboard</a></p>
]]></description><pubDate>Sun, 07 Apr 2024 15:28:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=39961397</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=39961397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39961397</guid></item><item><title><![CDATA[New comment by estreeper in "The Obscene Energy Demands of A.I"]]></title><description><![CDATA[
<p>Energy use should be a concern, but it’s important to understand the magnitude of the problems and not simply conflate crypto and AI. To compare the technologies against each other:<p>Bitcoin: 145B = 145,000M kWh/year<p>ChatGPT: 0.5M * 365 = 182M kWh/year<p>Based on the numbers from the article, ChatGPT uses three orders of magnitude less electricity for something that provides high utility to vastly more people.<p>There are of course many other uses of AI aside from ChatGPT and other cryptocurrencies aside from BTC, but these figures show very different scales of energy consumption.</p>
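Spelled out, the arithmetic behind that comparison (both figures are the article's, not independent measurements):

```python
import math

bitcoin_kwh_year = 145e9        # 145B kWh/year for Bitcoin
chatgpt_kwh_year = 0.5e6 * 365  # 0.5M kWh/day * 365 days = 182.5M kWh/year

# Bitcoin uses roughly 800x the electricity, i.e. about
# three orders of magnitude more.
ratio = bitcoin_kwh_year / chatgpt_kwh_year  # ~795
orders_of_magnitude = math.log10(ratio)      # ~2.9
```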
]]></description><pubDate>Sat, 09 Mar 2024 16:03:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=39652527</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=39652527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39652527</guid></item><item><title><![CDATA[Court of Chancery Opinion: Richard Tornetta vs. Elon Musk]]></title><description><![CDATA[
<p>Article URL: <a href="https://courts.delaware.gov/Opinions/Download.aspx?id=359340">https://courts.delaware.gov/Opinions/Download.aspx?id=359340</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39197917">https://news.ycombinator.com/item?id=39197917</a></p>
<p>Points: 34</p>
<p># Comments: 15</p>
]]></description><pubDate>Wed, 31 Jan 2024 00:12:01 +0000</pubDate><link>https://courts.delaware.gov/Opinions/Download.aspx?id=359340</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=39197917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39197917</guid></item><item><title><![CDATA[New comment by estreeper in "New models and developer products"]]></title><description><![CDATA[
<p>For embeddings specifically, there are multiple open source models that outperform OpenAI’s best model (text-embedding-ada-002), which you can see on the MTEB Leaderboard [1]<p>> embedding-based approach will be cheaper and faster, but worse result than full text<p>I’m not sure results would be worse; I think it depends on the extent to which the models are able to ignore irrelevant context, which is a known problem [2]. Using retrieval can come closer to providing only relevant context.<p>1. <a href="https://huggingface.co/spaces/mteb/leaderboard" rel="nofollow noreferrer">https://huggingface.co/spaces/mteb/leaderboard</a><p>2. <a href="https://arxiv.org/abs/2302.00093" rel="nofollow noreferrer">https://arxiv.org/abs/2302.00093</a></p>
]]></description><pubDate>Mon, 06 Nov 2023 22:37:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=38170120</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=38170120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38170120</guid></item><item><title><![CDATA[Open-Source Semantic Vector Search with Pgvector and Instructor]]></title><description><![CDATA[
<p>Article URL: <a href="https://revelry.co/insights/open-source-semantic-vector-search-with-instructor-pgvector-and-flask/">https://revelry.co/insights/open-source-semantic-vector-search-with-instructor-pgvector-and-flask/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=36973090">https://news.ycombinator.com/item?id=36973090</a></p>
<p>Points: 18</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 02 Aug 2023 16:25:16 +0000</pubDate><link>https://revelry.co/insights/open-source-semantic-vector-search-with-instructor-pgvector-and-flask/</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=36973090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36973090</guid></item><item><title><![CDATA[New comment by estreeper in "Which vector database should I use? A comparison cheatsheet"]]></title><description><![CDATA[
<p>I recently wrote a tutorial on making a vector driven semantic search app using all open source tools (pgvector, Instructor, and Flask) that might be helpful: <a href="https://revelry.co/insights/open-source-semantic-vector-search-with-instructor-pgvector-and-flask/" rel="nofollow noreferrer">https://revelry.co/insights/open-source-semantic-vector-sear...</a></p>
]]></description><pubDate>Mon, 31 Jul 2023 15:58:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=36944602</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=36944602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36944602</guid></item><item><title><![CDATA[New comment by estreeper in "Meta to release open-source commercial AI model"]]></title><description><![CDATA[
<p>If you're just looking to play with something locally for the first time, this is the simplest project I've found and has a simple web UI: <a href="https://github.com/cocktailpeanut/dalai">https://github.com/cocktailpeanut/dalai</a><p>It works for 7B/13B/30B/65B LLaMA and Alpaca (fine-tuned LLaMA which definitely works better). The smaller models at least should run on pretty much any computer.</p>
]]></description><pubDate>Fri, 14 Jul 2023 17:23:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=36727024</link><dc:creator>estreeper</dc:creator><comments>https://news.ycombinator.com/item?id=36727024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36727024</guid></item></channel></rss>