<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: thelastbender12</title><link>https://news.ycombinator.com/user?id=thelastbender12</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 05:28:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=thelastbender12" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by thelastbender12 in "Pandas 3.0"]]></title><description><![CDATA[
<p>I think that's a fair opinion, but I'd argue against calling it poorly thought out - pandas HAS to stick with older API decisions for backwards compatibility, decisions dating back to before data science was a mature field (and the field has pandas to thank for much of that maturity).</p>
]]></description><pubDate>Wed, 28 Jan 2026 13:35:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46795173</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=46795173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46795173</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Zed's Pricing Has Changed: LLM Usage Is Now Token-Based"]]></title><description><![CDATA[
<p>Sorry, how is this new pricing anything but honest? They provide an editor you can use to:
- optimize the context you send to the LLM services
- interact with the output that comes back<p>Why does that not justify charging a fraction of your spend on the LLM platform? This is pretty much how every service business operates.</p>
]]></description><pubDate>Wed, 24 Sep 2025 17:41:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45363588</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=45363588</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45363588</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Litestar is worth a look"]]></title><description><![CDATA[
<p>Thank you for writing this - there is a very clear split you feel between using fastapi for single-script web servers and trying to organize a larger codebase with it. And I probably share all the mentioned annoyances around writing bigger projects with fastapi.</p>
]]></description><pubDate>Thu, 07 Aug 2025 04:18:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44820528</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=44820528</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44820528</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Mercury: Ultra-Fast Language Models Based on Diffusion"]]></title><description><![CDATA[
<p>The speed here is super impressive! I am curious - are there any qualitative ways in which modeling text with diffusion differs from modeling it with autoregressive models? The kinds of problems it works better on, creativity, and so on.</p>
]]></description><pubDate>Mon, 07 Jul 2025 13:53:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=44490388</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=44490388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44490388</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Zig breaking change – initial Writergate"]]></title><description><![CDATA[
<p>Thank you, my bad - I wasn't aware.<p>I still think what drives languages to continuously make changes is the focus on developer UX, or at least the intent to make it better. So, PLs with more developers will always keep evolving.</p>
]]></description><pubDate>Fri, 04 Jul 2025 07:39:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44462113</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=44462113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44462113</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Zig breaking change – Initial Writergate"]]></title><description><![CDATA[
<p>Sorry, I think this comparison is just unfair. Odin might have "shipped", but are there any projects with significant usage built on it? I can count at least 3 with Zig - Ghostty, TigerBeetle, and Bun.<p>Programming languages which do get used are always in flux, for good reason - python is still undergoing major changes (free-threading, immutability, and others), and I'm grateful for it.</p>
]]></description><pubDate>Fri, 04 Jul 2025 07:13:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44461929</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=44461929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44461929</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Pyrefly vs. Ty: Comparing Python's two new Rust-based type checkers"]]></title><description><![CDATA[
<p>Do you use Jupyter notebooks in VSCode? They go through the same Pylance as regular python files, which actually gets annoying when I want to write throwaway code.</p>
]]></description><pubDate>Tue, 27 May 2025 16:41:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44108552</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=44108552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44108552</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Show HN: US Routing – Python library for fast local routing in the US"]]></title><description><![CDATA[
<p>You need to request a specific python version compatible with this project. Give `uv venv --python 3.11` a try.<p><a href="https://docs.astral.sh/uv/pip/environments/" rel="nofollow">https://docs.astral.sh/uv/pip/environments/</a></p>
]]></description><pubDate>Thu, 08 May 2025 03:26:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43922807</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=43922807</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43922807</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Launch HN: Continue (YC S23) – Create custom AI code assistants"]]></title><description><![CDATA[
<p>I meant Pylance isn't legally available in Cursor (a VSCode license restriction, which is justified). It broke very frequently, so I switched to basedpyright, which works, just not as well.</p>
]]></description><pubDate>Fri, 28 Mar 2025 06:38:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43502223</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=43502223</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43502223</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Launch HN: Continue (YC S23) – Create custom AI code assistants"]]></title><description><![CDATA[
<p>Yep, exactly that. IMO agent workflows, MCP, and tool-usage bits are all promising, but the more common use of LLMs in coding is still chat. AI extensions in editors just make it simple to supply context and apply diffs.<p>An addon makes it seem like an afterthought, which I'm certain you are not going for! But still, making it as seamless as possible would be great. For example, response time for Claude in Cursor is much better than even the Claude web app for me.</p>
]]></description><pubDate>Thu, 27 Mar 2025 17:37:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43495912</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=43495912</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43495912</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Launch HN: Continue (YC S23) – Create custom AI code assistants"]]></title><description><![CDATA[
<p>Congrats on the release! I've been using Cursor but am somewhat annoyed with the regular IDE affordances not working quite right (absence of Pylance), and would love to go back to VSCode.<p>I'd love it if you lean into pooled model usage, rather than it being an addon. IMO it is the biggest win for Cursor - a reasonable number of LLM calls per month, so I never have to do token math or fiddle with API keys. Of course, it is available as a feature already (I'm gonna try Continue), but the difference in response time between Cursor and GitHub Copilot (who don't seem to care) is drastic.</p>
]]></description><pubDate>Thu, 27 Mar 2025 16:34:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43495323</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=43495323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43495323</guid></item><item><title><![CDATA[New comment by thelastbender12 in "The best way to use text embeddings portably is with Parquet and Polars"]]></title><description><![CDATA[
<p>This is pretty neat.<p>IMO a hindrance to this was the lack of built-in fixed-size list array support in the Arrow format, until recently. Some implementations/clients supported it, while others didn't. Otherwise, it could have been used as the default storage format for numpy arrays and torch tensors, too.<p>(You could always store arrays as variable-length list arrays with fixed strides and handle the conversion.)</p>
]]></description><pubDate>Mon, 24 Feb 2025 19:41:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43163978</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=43163978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43163978</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Durable execution should be lightweight"]]></title><description><![CDATA[
<p>I see it as a trade-off between how explicit the state persisted for a workflow execution is (rows in a database for Temporal and DBOS) and how natural it is to write such a workflow (as in your PL/compiler). Given workflows are primarily used for business use-cases, with a lot of non-determinism coming from interaction with third-party services or other deployments, the library implementation feels more appropriate.<p>Though I am assuming building durability at the language level means the whole program state must be serializable, which sounds tricky. Curious if you could share more?</p>
]]></description><pubDate>Mon, 03 Feb 2025 14:15:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=42918402</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=42918402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42918402</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Gemini 2.0: our new AI model for the agentic era"]]></title><description><![CDATA[
<p>I kinda agree with you, but I can also see why it isn't that far from "reasoning" in the sense humans do it.<p>To wit, if I am doing a high school geometry proof, I come up with a sequence of steps. If the proof is correct, each step follows logically from the one before it.<p>However, when I go from step 2 to step 3, there are multiple options for step 3 I could have chosen. Is that so different from the "most-likely prediction" an LLM makes? I suppose the difference is humans can filter out logically incorrect steps, or more quickly prune chains of steps that won't lead to the actual theorem. But an LLM predictor coupled with a verifier doesn't feel that different.</p>
]]></description><pubDate>Wed, 11 Dec 2024 18:09:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=42390786</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=42390786</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42390786</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Running Durable Workflows in Postgres Using DBOS"]]></title><description><![CDATA[
<p>> The succeeded operations would skip because workflow has run completion record for same idempotency key. Is that correct?<p>This sounds about right. But you need to make sure the service being called in that step is indeed idempotent, and will return the same response it earlier failed to return in time.</p>
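As a toy sketch of that skip-on-replay mechanism - an in-memory dict stands in for the completion-record table DBOS keeps in Postgres, and all names here are made up for illustration:

```python
# (workflow_id, step_name) -> stored result; the "completion record" table.
completion_records = {}

def durable_step(workflow_id, step_name, fn, *args):
    key = (workflow_id, step_name)
    if key in completion_records:           # step already succeeded earlier
        return completion_records[key]      # skip re-execution, replay result
    result = fn(*args)                      # the call itself must be idempotent
    completion_records[key] = result        # persist before moving on
    return result

calls = []
def charge_card(amount):
    calls.append(amount)                    # stand-in for a third-party API call
    return f"charged {amount}"

# First run executes the step; retrying the same workflow replays the record.
durable_step("wf-1", "charge", charge_card, 42)
durable_step("wf-1", "charge", charge_card, 42)
print(len(calls))  # 1 - the retry skipped the already-completed step
```

The sketch makes the caveat above concrete: the replay only returns what was recorded, so if the real service charged the card but the response was never recorded, re-running the step calls the service again - which is only safe if that call is idempotent.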
]]></description><pubDate>Wed, 11 Dec 2024 06:33:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=42385371</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=42385371</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42385371</guid></item><item><title><![CDATA[New comment by thelastbender12 in "My NumPy year: Creating a DType for the next generation of scientific computing"]]></title><description><![CDATA[
<p>PyArrow string arrays store the entries (string values) contiguously in memory, so access is quicker, while object arrays hold pointers to scattered memory locations on the heap.<p>I agree - I couldn't really figure out how the new numpy string dtype makes it work, though.</p>
]]></description><pubDate>Thu, 24 Oct 2024 06:11:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=41932436</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=41932436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41932436</guid></item><item><title><![CDATA[New comment by thelastbender12 in "On Impactful AI Research"]]></title><description><![CDATA[
<p>I think that's a little harsh. IMO the reason for the difference in popularity/github-stars is just different user bases - an order of magnitude more people use LLM APIs (and can leverage DSPy) than finetune an LLM.<p>Agree about the abstractions btw. I found DSPy very convoluted for what it does, and couldn't make sense of TextGrad at all.</p>
]]></description><pubDate>Wed, 25 Sep 2024 04:16:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=41643624</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=41643624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41643624</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Resources for Amateur Compiler Writers"]]></title><description><![CDATA[
<p>This looks great!<p>I'd also love to hear from people working on compilers - what are some real/fun/cool problems for amateur compiler writers to work on?<p>I suspect the obvious candidates are deep-learning compilers and SQL engines, but those already get a lot of attention.</p>
]]></description><pubDate>Mon, 19 Aug 2024 17:39:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=41292992</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=41292992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41292992</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Rye: A Hassle-Free Python Experience"]]></title><description><![CDATA[
<p>I think this is a fine opinion - we like tools that do exactly as much as we want them to. But I'd suggest that setting up python (and virtual envs) was actually a big headache for a lot of newer users, and some of the older ones (me, that is).<p>I also don't see why leaning into python being a wrapper around rust/cpp/c is a bad thing. Each language has its own niche, and packaging/bootstrapping is more of a systems-language problem.</p>
]]></description><pubDate>Tue, 09 Jul 2024 06:32:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=40913079</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=40913079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40913079</guid></item><item><title><![CDATA[New comment by thelastbender12 in "Rye: A Hassle-Free Python Experience"]]></title><description><![CDATA[
<p>For that use-case, you can set up a `virtual project` using Rye, and use it just to create python envs and sync dependencies.<p>Honestly, the biggest time-saver for me has been Rye automatically fetching python binaries that work everywhere, and setting up clean venvs.<p>- <a href="https://rye.astral.sh/guide/virtual/" rel="nofollow">https://rye.astral.sh/guide/virtual/</a></p>
]]></description><pubDate>Tue, 09 Jul 2024 06:20:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=40913009</link><dc:creator>thelastbender12</dc:creator><comments>https://news.ycombinator.com/item?id=40913009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40913009</guid></item></channel></rss>