<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: qntty</title><link>https://news.ycombinator.com/user?id=qntty</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 03:53:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=qntty" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by qntty in "Mistral AI Releases Forge"]]></title><description><![CDATA[
<p>Pre-training means exposing an already-trained model to more raw text, like PDF extracts (aka continued pre-training). You wouldn't be starting from scratch, but it's still pre-training because the objective is just next-token prediction on the text you expose it to.<p>Post-training means everything else: SFT, DPO, RL, etc. Anything that involves prompt/response pairs, reward models, or human feedback of any kind.</p>
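As a toy sketch of the pre-training objective described above (the function name and token ids are illustrative, not from any real training library), the signal is just each token paired with the one that follows it in the raw stream — no prompt/response structure at all:

```python
def next_token_pairs(token_ids):
    """Form (input, target) pairs for next-token prediction:
    each position's target is simply the token that follows it."""
    return list(zip(token_ids[:-1], token_ids[1:]))

# e.g. a tokenized snippet of raw text (ids are made up)
pairs = next_token_pairs([5, 9, 2, 7])
print(pairs)  # [(5, 9), (9, 2), (2, 7)]
```

By contrast, in post-training stages like SFT the same loss is typically computed only over the response tokens of a prompt/response pair, which is one concrete way the two stages differ.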
]]></description><pubDate>Wed, 18 Mar 2026 11:58:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47424579</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=47424579</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47424579</guid></item><item><title><![CDATA[New comment by qntty in "Letter from a Birmingham Jail (1963)"]]></title><description><![CDATA[
<p>> Sometimes a law is just on its face and unjust in its application. For instance, I have been arrested on a charge of parading without a permit. Now, there is nothing wrong in having an ordinance which requires a permit for a parade. But such an ordinance becomes unjust when it is used to maintain segregation and to deny citizens the First-Amendment privilege of peaceful assembly and protest.<p>> I hope you are able to see the distinction I am trying to point out. In no sense do I advocate evading or defying the law, as would the rabid segregationist. That would lead to anarchy. One who breaks an unjust law must do so openly, lovingly, and with a willingness to accept the penalty. I submit that an individual who breaks a law that conscience tells him is unjust, and who willingly accepts the penalty of imprisonment in order to arouse the conscience of the community over its injustice, is in reality expressing the highest respect for law.<p>I always have to go back to read this part again because I feel like it's so unexpected. You don't really hear anyone saying quite the same thing today.</p>
]]></description><pubDate>Mon, 19 Jan 2026 19:32:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46683399</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=46683399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46683399</guid></item><item><title><![CDATA[New comment by qntty in "Claude Advanced Tool Use"]]></title><description><![CDATA[
<p>Solved how?</p>
]]></description><pubDate>Mon, 24 Nov 2025 21:04:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46039284</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=46039284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46039284</guid></item><item><title><![CDATA[New comment by qntty in "Calculus for Mathematicians, Computer Scientists, and Physicists [pdf]"]]></title><description><![CDATA[
<p>Writing a calculus book that's more rigorous than typical books is hard because if you go too hard, people will say that you've written a real analysis book — and the point of calculus is to introduce certain concepts without going full analysis. This book seems to have avoided the trap of being too rigorous about convergence, instead spending more time introducing vocabulary for talking about functions and exploring intersections with linear algebra.</p>
]]></description><pubDate>Sun, 23 Nov 2025 17:04:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46025088</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=46025088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46025088</guid></item><item><title><![CDATA[New comment by qntty in "The half-life of tech skills"]]></title><description><![CDATA[
<p>I tried and failed to find some kind of concrete methodology that they used to get to the number 30 months. I'm still waiting for quadratic algebra to make my knowledge of linear algebra obsolete.</p>
]]></description><pubDate>Tue, 29 Jul 2025 19:37:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44727421</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=44727421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44727421</guid></item><item><title><![CDATA[New comment by qntty in "MCP in LM Studio"]]></title><description><![CDATA[
<p>It's confusing but you just have to read the official docs<p><a href="https://modelcontextprotocol.io/specification/2025-03-26/architecture" rel="nofollow">https://modelcontextprotocol.io/specification/2025-03-26/arc...</a></p>
]]></description><pubDate>Thu, 26 Jun 2025 10:37:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44386034</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=44386034</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44386034</guid></item><item><title><![CDATA[New comment by qntty in "Find Your People"]]></title><description><![CDATA[
<p>I like the subway analogy. I'm sure I've heard some version of it before, but maybe because I was younger I didn't really get it. It really is a little strange to tell kids who have never really directed their own lives before to start doing it all of a sudden.</p>
]]></description><pubDate>Fri, 23 May 2025 18:22:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44075168</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=44075168</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44075168</guid></item><item><title><![CDATA[New comment by qntty in "By Default, Signal Doesn't Recall"]]></title><description><![CDATA[
<p>Child-proof caps are easy to take off but difficult to accidentally take off.</p>
]]></description><pubDate>Wed, 21 May 2025 18:29:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44054586</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=44054586</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44054586</guid></item><item><title><![CDATA[New comment by qntty in "llm-d, Kubernetes native distributed inference"]]></title><description><![CDATA[
<p>You're right, I was confusing TensorRT with Dynamo. It looks like the relationship between Dynamo and vLLM is actually the opposite of what I was thinking -- Dynamo can use vLLM as a backend rather than vice versa.</p>
]]></description><pubDate>Tue, 20 May 2025 23:52:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44047054</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=44047054</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44047054</guid></item><item><title><![CDATA[New comment by qntty in "llm-d, Kubernetes native distributed inference"]]></title><description><![CDATA[
<p>It sounds like you might be confusing different parts of the stack. NVIDIA Dynamo, for example, supports vLLM as the inference engine. I think you should think of something like vLLM as more akin to Gunicorn, and llm-d as an application load balancer. And I guess something like NVIDIA Dynamo would be like Django.</p>
]]></description><pubDate>Tue, 20 May 2025 16:08:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44043182</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=44043182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44043182</guid></item><item><title><![CDATA[New comment by qntty in "llm-d, Kubernetes native distributed inference"]]></title><description><![CDATA[
<p>I believe this is a question you should ask about vLLM, not llm-d. It looks like vLLM does support pipeline parallelism via Ray: <a href="https://docs.vllm.ai/en/latest/serving/distributed_serving.html#running-vllm-on-a-single-node" rel="nofollow">https://docs.vllm.ai/en/latest/serving/distributed_serving.h...</a><p>This project appears to make use of both vLLM and Inference Gateway (an official Kubernetes extension to the Gateway resource). The contribution of llm-d itself seems mostly to be a scheduling algorithm for load balancing across vLLM instances.</p>
]]></description><pubDate>Tue, 20 May 2025 16:00:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44043088</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=44043088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44043088</guid></item><item><title><![CDATA[New comment by qntty in "Gandi March 9, 2025 incident postmortem"]]></title><description><![CDATA[
<p>In Hindi, “gandi” means dirty, which I guess is appropriate for marches</p>
]]></description><pubDate>Mon, 05 May 2025 14:25:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43895500</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=43895500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43895500</guid></item><item><title><![CDATA[New comment by qntty in "The Agent2Agent Protocol (A2A)"]]></title><description><![CDATA[
<p>I don't know if it would be right to call MCP pure request/response, since unlike HTTP, it's a stateful protocol.</p>
]]></description><pubDate>Wed, 09 Apr 2025 14:18:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=43632445</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=43632445</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43632445</guid></item><item><title><![CDATA[New comment by qntty in "Math Academy pulled me out of the Valley of Despair"]]></title><description><![CDATA[
<p>I don’t see that graph anywhere on the Wikipedia page<p><a href="https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect" rel="nofollow">https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effec...</a><p>I do see other graphs that tell a different story. Namely, that confidence is a monotonically increasing function of competence. If the data supports the idea that there is a valley of despair where confidence decreases as competence increases, I must be missing it.</p>
]]></description><pubDate>Thu, 06 Mar 2025 05:08:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43276626</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=43276626</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43276626</guid></item><item><title><![CDATA[New comment by qntty in "Undergraduate shows that searches within hash tables can be much faster"]]></title><description><![CDATA[
<p>A cool result, but it seems like it should be called a computer science conjecture</p>
]]></description><pubDate>Mon, 10 Feb 2025 20:12:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43004620</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=43004620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43004620</guid></item><item><title><![CDATA[New comment by qntty in "100 Or so Books that shaped a Century of Science (1999)"]]></title><description><![CDATA[
<p>This just made me realize that we're 10 years past the 100th anniversary of Einstein publishing about general relativity. Which made me realize that we're a quarter of the way through the 21st century...<p>Also, I think a list made today would have to include some of the early work on deep learning that happened in the 20th century. Which goes to show that sometimes you don't know what's important until much later on.</p>
]]></description><pubDate>Tue, 04 Feb 2025 19:07:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=42937090</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=42937090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42937090</guid></item><item><title><![CDATA[New comment by qntty in "Vanguard's average fee is now 0.07% after biggest-ever cut"]]></title><description><![CDATA[
<p>I see several people complaining about this in this thread. Are you talking about their old interface or the new one they released a couple years ago? I find their new one to be decent. It's a little complicated sometimes, but I think it's hard to build a site that allows you to do so many things without being a little complicated.</p>
]]></description><pubDate>Tue, 04 Feb 2025 18:57:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42936933</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=42936933</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42936933</guid></item><item><title><![CDATA[New comment by qntty in "The Strike Is Coming"]]></title><description><![CDATA[
<p>Collecting names without revealing their own, and with only vague demands. Seems like a fed honeypot.</p>
]]></description><pubDate>Sat, 01 Feb 2025 18:31:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=42900793</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=42900793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42900793</guid></item><item><title><![CDATA[New comment by qntty in "The Tyranny of Structurelessness (1970)"]]></title><description><![CDATA[
<p><i>The Tyranny of Structurelessness</i> and <i>The Gervais Principle</i> are the two essays that I think about a lot at work.</p>
]]></description><pubDate>Wed, 22 Jan 2025 16:43:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42794697</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=42794697</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42794697</guid></item><item><title><![CDATA[New comment by qntty in "The Origins of Wokeness"]]></title><description><![CDATA[
<p><i>Leftists may claim that their activism is motivated by compassion or by moral principles, and moral principle does play a role for the leftist of the oversocialized type. But compassion and moral principle cannot be the main motives for leftist activism. Hostility is too prominent a component of leftist behavior; so is the drive for power. Moreover, much leftist behavior is not rationally calculated to be of benefit to the people whom the leftists claim to be trying to help. For example, if one believes that affirmative action is good for black people, does it make sense to demand affirmative action in hostile or dogmatic terms? Obviously it would be more productive to take a diplomatic and conciliatory approach that would make at least verbal and symbolic concessions to white people who think that affirmative action discriminates against them. But leftist activists do not take such an approach because it would not satisfy their emotional needs. Helping black people is not their real goal. Instead, race problems serve as an excuse for them to express their own hostility and frustrated need for power. In doing so they actually harm black people, because the activists’ hostile attitude toward the white majority tends to intensify race hatred.</i></p>
]]></description><pubDate>Mon, 13 Jan 2025 22:27:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=42690348</link><dc:creator>qntty</dc:creator><comments>https://news.ycombinator.com/item?id=42690348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42690348</guid></item></channel></rss>