<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: 6keZbCECT2uB</title><link>https://news.ycombinator.com/user?id=6keZbCECT2uB</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 24 Apr 2026 18:53:08 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=6keZbCECT2uB" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by 6keZbCECT2uB in "An update on recent Claude Code quality reports"]]></title><description><![CDATA[
<p>"On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6"<p>This makes no sense to me. I often leave sessions idle for hours or days and rely on being able to pick them back up with full context and power.<p>The default thinking level seems more forgivable, but given the churn in system prompts, I'll need to figure out how to intentionally choose a refresh cycle.</p>
]]></description><pubDate>Thu, 23 Apr 2026 18:30:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47879561</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=47879561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47879561</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "No one owes you supply-chain security"]]></title><description><![CDATA[
<p>Your reasonable options are:
1. I stop sharing the software I write
2. You take responsibility for the software you use<p>Any software you use with this clause, "THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."<p>already attests that the authors offer no guarantee that the software will have the features you need, supply-chain security or otherwise.</p>
]]></description><pubDate>Sun, 12 Apr 2026 13:31:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47739424</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=47739424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47739424</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Tailslayer: Library for reducing tail latency in RAM reads"]]></title><description><![CDATA[
<p>Once the cache hit ratio for some data structure drops below 0.1%, I'd rather take 75% less tail latency even if it reduces the hit rate further.</p>
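A back-of-envelope sketch of that tradeoff (all latencies here are invented illustrative numbers, not measurements from Tailslayer):

```python
# Expected read latency = cache term + tail term. With a ~0.1% hit ratio,
# the cache contributes almost nothing to the mean, so cutting tail latency
# by 75% dominates even if the hit ratio halves. All numbers are invented.
HIT_NS, MISS_NS = 5, 100

def expected_ns(hit_ratio, tail_prob, tail_ns):
    # mean of the common path plus the rare tail event
    return hit_ratio * HIT_NS + (1 - hit_ratio) * MISS_NS + tail_prob * tail_ns

before = expected_ns(hit_ratio=0.001, tail_prob=0.01, tail_ns=1000)
after = expected_ns(hit_ratio=0.0005, tail_prob=0.01, tail_ns=250)  # 75% less tail

print(f"before: {before:.2f} ns, after: {after:.2f} ns")
```

With these assumed numbers, halving an already-negligible hit ratio moves the mean by fractions of a nanosecond, while the tail cut removes several nanoseconds.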
]]></description><pubDate>Tue, 07 Apr 2026 22:50:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682354</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=47682354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682354</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Tailslayer: Library for reducing tail latency in RAM reads"]]></title><description><![CDATA[
<p>I like the project: it goes from refresh-induced tail latency to racing threads assigned to addresses that are de-correlated by memory channel. Connecting this to a lookup table broadcast across memory channels so the lookup paths can race makes for a nice narrative, but framing it as reducing tail latency confused me, because I was expecting a join where a single reader gets the faster of the two racers.<p>From a narrative standpoint, I agree it makes more sense to focus on a duplicated lookup table where the fastest wins; from an engineering standpoint, however, framing it in terms of channel de-correlated reads has more possibilities. For example, if you need to evaluate multiple parallel ML models to get a result, then by intentionally partitioning your models by channel you could ensure that a model reads only fast data or only slow data. ML models might not be that interesting here, since they are good candidates for being resident in L3.</p>
]]></description><pubDate>Tue, 07 Apr 2026 22:47:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682321</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=47682321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682321</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Waymo seeking about $16B near $110B valuation"]]></title><description><![CDATA[
<p>In Miami, there are several competing companies, like Coco Robotics, which employ human "pilots" to monitor small fleets of robot delivery boxes: the restaurant deposits the food in the box, and the box unlocks via integration with the app.<p>Just figured you'd want to know that "anytime soon" was at least a year ago.</p>
]]></description><pubDate>Mon, 02 Feb 2026 21:07:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46861556</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=46861556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46861556</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "I built a 2x faster lexer, then discovered I/O was the real bottleneck"]]></title><description><![CDATA[
<p>There's prior work: <a href="https://www.brendangregg.com/FlameGraphs/offcpuflamegraphs.html" rel="nofollow">https://www.brendangregg.com/FlameGraphs/offcpuflamegraphs.h...</a><p>There are a few challenges here.
- Off-CPU time has no timer interrupt to hang integrated stack-trace collection on, so you either instrument a full timeline of threads moving on and off CPU or periodically walk every thread for its stack trace
- Applications have many idle threads, and waiting for IO is a common threadpool case, so it's harder to distinguish a thread waiting on a pool doing delegated IO from idle worker-pool threads<p>Some solutions:
- I've used Nsight Systems for non-GPU work to visualize off-CPU time alongside on-CPU time
- gdb's "thread apply all bt" is slow but does full call-stack walking; in Python, we have "py-spy dump" for supported interpreters
- Remember that anything you can represent as call stacks and integers converts easily to a flamegraph, e.g. taking strace durations by tid (and maybe fd) and aggregating them into a flamegraph</p>
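To make the last bullet concrete, here is a minimal sketch that folds per-tid syscall durations into the folded-stack text flamegraph tooling consumes; the strace line shape and the choice of frames (tid, syscall, fd) are illustrative assumptions, not a fixed format:

```python
import re
from collections import defaultdict

# Fold `strace -f -T`-style lines into "frame1;frame2;frame3 value" rows,
# the folded-stack input format for flamegraph generators. The line shape
# assumed here (tid, syscall, fd, trailing <seconds>) is an illustration;
# adapt the regex to your actual strace output.
LINE = re.compile(r"^(?P<tid>\d+)\s+(?P<call>\w+)\((?P<fd>\d+).*<(?P<secs>[\d.]+)>$")

def fold(lines):
    totals = defaultdict(float)
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue
        # Treat tid -> syscall -> fd as a three-frame stack, valued in microseconds.
        totals[f"tid_{m['tid']};{m['call']};fd_{m['fd']}"] += float(m["secs"]) * 1e6
    return [f"{stack} {round(us)}" for stack, us in sorted(totals.items())]

sample = [
    "1234 read(3, ...) = 4096 <0.000120>",
    "1234 read(3, ...) = 4096 <0.000080>",
    "1234 write(5, ...) = 512 <0.000300>",
]
print("\n".join(fold(sample)))
```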
]]></description><pubDate>Sun, 25 Jan 2026 13:25:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46753899</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=46753899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46753899</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Going faster than memcpy"]]></title><description><![CDATA[
<p>I've been meaning to look at Iceoryx as a way to wrap this.<p>PyTorch multiprocessing queues work this way, but it is hard for the sender to ensure the data is already in shared memory, so there is often an extra copy. It is also common for buffers not to be reused, which can become a bottleneck, though in principle it is limited only by the rate of sending fds.</p>
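For contrast, Python's stdlib multiprocessing.shared_memory shows the pattern where the sender guarantees the data is born in shared memory. This is a minimal single-process sketch, not how PyTorch does it; a real consumer would attach from another process by name:

```python
from multiprocessing import shared_memory

# Allocate a named shared-memory block and write into it directly, so
# handing it to another process costs only the block's name, not a copy.
shm = shared_memory.SharedMemory(create=True, size=8)
view = shm.buf
view[:8] = b"hello!!\n"  # writes land in shared memory; no serialization step
# A consumer in another process would attach with:
#   shared_memory.SharedMemory(name=shm.name)
# and read its .buf without any copy.
data = bytes(view[:8])
del view  # drop the local view before closing the mapping
shm.close()
shm.unlink()
print(data)
```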
]]></description><pubDate>Mon, 11 Aug 2025 05:33:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44861004</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=44861004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44861004</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Go 1.24's go tool is one of the best additions to the ecosystem in years"]]></title><description><![CDATA[
<p>Partly in jest: you can often find a Perl or bash available where you can't find a Python, Ruby, or Cargo.</p>
]]></description><pubDate>Tue, 28 Jan 2025 02:30:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=42848269</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=42848269</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42848269</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Go 1.24's go tool is one of the best additions to the ecosystem in years"]]></title><description><![CDATA[
<p>Conda unifies by using a SAT solver to find versions of software that are mutually compatible, regardless of whether they agree on the meaning of semver. So both approaches require unifying versions; linking against C gets pretty broken without this.<p>The issue I was referring to is that in JavaScript you can write code which uses multiple versions of the same library that are mutually incompatible. Since they're mutually incompatible, no SAT solve or unifier is going to help you; you must permit multiple versions of the same library in the same environment. So far, my approach of ignoring some JavaScript libraries has worked for my backend development. :)</p>
]]></description><pubDate>Tue, 28 Jan 2025 02:27:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42848252</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=42848252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42848252</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Go 1.24's go tool is one of the best additions to the ecosystem in years"]]></title><description><![CDATA[
<p>I agree. In my opinion, if you can keep the experience of Bazel limited to build targets, there is a low barrier to entry even if it is tedious. Major issues show up with Bazel once you start having to write rules or toolchains, or if your workspace file talks to the Internet.<p>I think you can fix these issues by using a package manager around Bazel. Conda is my preferred choice because it is in the top tier for adoption and cross-platform support, and it supports more locked-down use cases: going through mirrors, not having root, not controlling file paths, etc. What Bazel gets from this is a generic solution for package management with better version solving for build rules, source dependencies, and binary dependencies. By sourcing binary deps from conda-forge, you get a midpoint between deep investment into Bazel and binaries of unknown provenance, which lets you incrementally move to source as appropriate.<p>Additional notes: some requirements limit utility to the point of being only partial support for a platform. If you require root on Linux or WSL on Windows, have frequent compilation breakage on Darwin, or neglect Windows file paths, your cross-platform support is partial in my book.<p>Use of Java for Bazel and Python for Conda might be regrettable, but not bad enough to warrant moving down the adoption list, and in my experience there is vastly more Bazel out there than Buck or other competitors. Similarly, you want to see some adoption from Haskell, Rust, Julia, Golang, Python, C++, etc.<p>JavaScript is thorny. You really don't want to deal with multiple versions of the same library in compiled languages, but you have to with JavaScript. I haven't seen much demand for JavaScript bindings to C++ wrappers around a Rust core that uses C core libraries, but I do see that for Python bindings.</p>
]]></description><pubDate>Mon, 27 Jan 2025 22:32:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=42846525</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=42846525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42846525</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Programming with ChatGPT"]]></title><description><![CDATA[
<p>Most of my time coding is spent on none of: elegant solutions, complex problems, or precise specifications.<p>In my experience, LLMs are useful primarily as rubber ducks on complex problems and rarely useful for generating code for them.<p>Instead, I spend most of my time between the interesting work doing rote work that keeps me from the essential complexity, and that is where LLM code gen does better. How do I generate a heat map in Python with a different color scheme? How do I parse some logs to understand our locking behavior? What flags do I pass to tshark to get my desired output?<p>So I spend less time coding the above and more time coding how we should redo our data layout for more reuse.</p>
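The heat-map question, for instance, is a few lines of matplotlib. A sketch, assuming matplotlib is available; the data is a random stand-in and "magma" is just one of the standard colormap names:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Stand-in data; replace with your own 2-D array.
rng = np.random.default_rng(0)
data = rng.random((8, 8))

fig, ax = plt.subplots()
im = ax.imshow(data, cmap="magma")  # the cmap argument picks the color scheme
fig.colorbar(im, ax=ax, label="value")
fig.savefig("heatmap.png")
```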
]]></description><pubDate>Wed, 28 Aug 2024 19:28:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=41383268</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=41383268</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41383268</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Aro – Zig's new C compiler"]]></title><description><![CDATA[
<p>How does this work with things which are expressible only in C? For example, Pascal strings with flexible array members?<p>I guess since you said header, you keep everything opaque and create a header for that which gets translated to Zig.</p>
]]></description><pubDate>Sat, 20 Jul 2024 20:28:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=41019548</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=41019548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41019548</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "How to choose a textbook that is optimal for oneself?"]]></title><description><![CDATA[
<p>In all seriousness, this seems to carry the risk of never doing anything deep or hard. In particular, I've been programming long enough that I can be casual about many programming languages until I hit something which is actually new, such as in Rust or Prolog.<p>Promiscuous doesn't have to mean having a low tolerance for difficulty, but everything else you wrote seems to suggest that. So, are you saying that enduring difficulty is unnecessary, or did you mean something different?</p>
]]></description><pubDate>Sat, 20 Jul 2024 20:19:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=41019463</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=41019463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41019463</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "C++ patterns for low-latency applications including high-frequency trading"]]></title><description><![CDATA[
<p>Fowler's implementation is written in Java, which has a different memory model from C++. For another example of the Java memory model versus another language's, see Jon Gjengset's port of ConcurrentHashMap to Rust.</p>
]]></description><pubDate>Tue, 09 Jul 2024 09:29:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=40914075</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=40914075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40914075</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Leantime: Open-Source Jira Alternative"]]></title><description><![CDATA[
<p>How does the tool ingest task sentiment? As a developer, I would never put in writing that I'm less than enthusiastic about any task.</p>
]]></description><pubDate>Mon, 09 Oct 2023 22:46:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=37826399</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=37826399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37826399</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Clang-expand: Expand function invocations into current scope"]]></title><description><![CDATA[
<p>The choice between replacing the call and making it an overlay is up to your editor; I think it would be pretty easy to handle either choice as an editor plugin given the returned JSON.<p>I was surprised it wasn't combined with clangd.</p>
]]></description><pubDate>Wed, 20 Sep 2023 19:12:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=37588664</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=37588664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37588664</guid></item><item><title><![CDATA[Clang-expand: Expand function invocations into current scope]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/goldsborough/clang-expand">https://github.com/goldsborough/clang-expand</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=37586518">https://news.ycombinator.com/item?id=37586518</a></p>
<p>Points: 58</p>
<p># Comments: 16</p>
]]></description><pubDate>Wed, 20 Sep 2023 16:48:01 +0000</pubDate><link>https://github.com/goldsborough/clang-expand</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=37586518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37586518</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Names should be as short as possible while still being clear"]]></title><description><![CDATA[
<p>Spending 8 months when the business value at the end was mostly improved velocity doesn't sound like a good tradeoff, especially if it's done as a big-bang effort that either succeeds holistically or fails. You might have better success in the future by finding ways to integrate maintenance improvements incrementally.</p>
]]></description><pubDate>Mon, 03 Jul 2023 14:21:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=36573283</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=36573283</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36573283</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "Names should be as short as possible while still being clear"]]></title><description><![CDATA[
<p>This comes into tension with making names as clear as possible while still being short. Frankly, I don't find coming up with short names for things easy, let alone good names, even before taking it to the extreme.<p>Whether something is too short depends on your context. requests.get might be ambiguous to someone who has never seen the code base before, but it quickly becomes obvious with a little exposure, so: does the code base have dedicated maintainers?<p>The skill of good naming isn't distributed evenly, and OP's names are pretty good, so I'd be happy for them to come along to my code base and rename things, as long as it stayed within other constraints (not consuming too much time, not breaking the API, etc.).</p>
]]></description><pubDate>Mon, 03 Jul 2023 14:13:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=36573154</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=36573154</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36573154</guid></item><item><title><![CDATA[New comment by 6keZbCECT2uB in "GPU Programming: When, Why and How?"]]></title><description><![CDATA[
<p>The tensor core mostly accelerates matrix operations and is the big block in the diagram, of which there are 4 per SM. "CUDA core" refers to the per-thread FP32 or INT32 units, so there are 32*4 = 128 per SM in that diagram.<p>Like you said, a tensor core is similar to a special-purpose ALU and sits at a lower level of abstraction than something with an instruction pointer.</p>
]]></description><pubDate>Tue, 20 Jun 2023 13:58:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=36404057</link><dc:creator>6keZbCECT2uB</dc:creator><comments>https://news.ycombinator.com/item?id=36404057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36404057</guid></item></channel></rss>