<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: NovaX</title><link>https://news.ycombinator.com/user?id=NovaX</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 08:40:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=NovaX" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by NovaX in "Dissecting the CPU-memory relationship in garbage collection (OpenJDK 26)"]]></title><description><![CDATA[
<p>Go tried that [1]; it was a failed experiment, a complex NIH take on the generational hypothesis. They currently use a CMS-style collector.<p>[1] <a href="https://docs.google.com/document/d/1gCsFxXamW8RRvOe5hECz98Ftk-tcRRJcDFANj2VwCB0" rel="nofollow">https://docs.google.com/document/d/1gCsFxXamW8RRvOe5hECz98Ft...</a></p>
]]></description><pubDate>Thu, 26 Feb 2026 08:06:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47163281</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=47163281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47163281</guid></item><item><title><![CDATA[New comment by NovaX in "2026: The Year of Java in the Terminal?"]]></title><description><![CDATA[
<p>Thanks for the corrected evaluation. Just for your awareness, the HotSpot team is working on offering a spectrum of AOT/JIT options under the Leyden project [1]. Currently one has to choose between a fully open world (JIT) and a closed world (AOT). The mixed world adds a tunable knob, e.g. pre-warming via AOT while retaining dynamic class loading and fast compile times for the application. This will soften the hard edges so developers can choose the constraints that best fit their application's deployment/usage model.<p>[1] <a href="https://openjdk.org/projects/leyden" rel="nofollow">https://openjdk.org/projects/leyden</a></p>
]]></description><pubDate>Thu, 01 Jan 2026 16:58:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46455641</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=46455641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46455641</guid></item><item><title><![CDATA[New comment by NovaX in "2026: The Year of Java in the Terminal?"]]></title><description><![CDATA[
<p>That is just a normal JVM with optional Graal components available if enabled, but not in use. The default memory allocation is based on a percentage of available memory and is left uncommitted (meaning it's available to other programs). When people mention Graal they usually mean an AOT-compiled executable that can be run without a JVM installed. Sometimes they may refer to the Graal JIT as a replacement for C1/C2, which is also available in VM mode. You are using a plain HotSpot VM in server mode, as the optimized client mode was removed when desktop use-cases were deprioritized (e.g. JWS discontinued).</p>
]]></description><pubDate>Wed, 31 Dec 2025 21:29:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46448556</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=46448556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46448556</guid></item><item><title><![CDATA[New comment by NovaX in "Microbenchmarks: HashMap, ConcurrentHashMap, and Guava Caches"]]></title><description><![CDATA[
<p>In both, the asMap() view unwraps the opinionated Cache facade to the underlying bounded map.<p>There are a lot of little gotchas, so it is best not to believe your own results until proven otherwise. A Rust influencer wrote EVMap, eagerly giving a talk about its performance equaling concurrent maps while being much simpler. However, since he generated the random numbers as part of the test loop, he did not realize that the generator uses a lock to compute the next seed. This throttled the benchmark so that the faster maps appeared slower, as they created more contention, fully invalidating and inverting the results. Sadly, since this was presented as part of a PhD, such truths are of little importance and the false claims continue to be shown. That's just to share that innocent mistakes are the norm, that the incentives not to correct them are doubly so, and that you should never trust your own results until you've exhausted all attempts to disprove them.<p>Sounds like you have the right perspective and a great attitude. Feel welcome to ping me if you run into any troubles or questions, as I collab'd on Guava's cache and wrote Caffeine.</p>
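To make that pitfall concrete, here is a small sketch in plain Java (my illustration, not EVMap's actual harness; `java.util.Random` is not the generator from that story, but it has the same shape: shared state advanced atomically on every call becomes the serialization point under contention). The fix is to pre-generate keys outside the timed loop, or to use `ThreadLocalRandom` inside it:

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of the benchmark pitfall described above. A shared,
// seeded java.util.Random makes every caller contend on the internal state
// that computes the next seed, so any map measured with it inherits that
// bottleneck. Both names below are invented for this illustration.
public class KeyGeneration {
    // Fix 1: pre-generate the keys once, outside the measured section.
    static int[] precomputedKeys(int count, long seed) {
        java.util.Random r = new java.util.Random(seed);
        int[] keys = new int[count];
        for (int i = 0; i < count; i++) {
            keys[i] = r.nextInt(1024); // deterministic for a given seed
        }
        return keys;
    }

    // Fix 2: inside a timed loop, ThreadLocalRandom has no shared state.
    static int randomKey() {
        return ThreadLocalRandom.current().nextInt(1024);
    }

    public static void main(String[] args) {
        int[] keys = precomputedKeys(1_000, 42L);
        System.out.println(keys.length + " keys, first=" + keys[0]);
    }
}
```

Either variant keeps the measured section free of shared RNG state, so the map under test is what actually gets stressed.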
]]></description><pubDate>Thu, 06 Mar 2025 03:39:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43276131</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=43276131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43276131</guid></item><item><title><![CDATA[New comment by NovaX in "Microbenchmarks: HashMap, ConcurrentHashMap, and Guava Caches"]]></title><description><![CDATA[
<p>This was a good attempt but flawed.<p>1. You should use JMH to handle warmup, JIT, timings, averaging of runs, etc. You might enjoy the following talks on the subject, though I am sure there are other gems out there.<p>- Performance Anxiety: <a href="https://wiki.jvmlangsummit.com/images/1/1d/PerformanceAnxiety2010.pdf" rel="nofollow">https://wiki.jvmlangsummit.com/images/1/1d/PerformanceAnxiet...</a><p>- Benchmarking for Good: <a href="https://www.youtube.com/watch?v=SKPdqgD1I2U" rel="nofollow">https://www.youtube.com/watch?v=SKPdqgD1I2U</a><p>- (The Art of) (Java) Benchmarking: <a href="https://shipilev.net/talks/j1-Oct2011-21682-benchmarking.pdf" rel="nofollow">https://shipilev.net/talks/j1-Oct2011-21682-benchmarking.pdf</a><p>- A Crash Course in Modern Hardware: <a href="https://www.youtube.com/watch?v=OFgxAFdxYAQ" rel="nofollow">https://www.youtube.com/watch?v=OFgxAFdxYAQ</a><p>- How NOT to Write a Microbenchmark:
<a href="https://www.slideshare.net/slideshow/2002-microbenchmarks/28737615" rel="nofollow">https://www.slideshare.net/slideshow/2002-microbenchmarks/28...</a><p>- Anatomy of a flawed microbenchmark
<a href="https://web.archive.org/web/20110513090823/http://www.ibm.com/developerworks/java/library/j-jtp02225/index.html" rel="nofollow">https://web.archive.org/web/20110513090823/http://www.ibm.co...</a><p>2. An uncontended lock is basically free, as you noticed.<p>3. Guava does implement the Map interface via asMap(). It also has a concurrencyLevel to adjust its performance. It is based on Java 5's CHM.<p>4. Caffeine is a Guava Cache rewrite based on Java 8's CHM rewrite, plus many learnings. You might look at its benchmarks for ideas:
    <a href="https://github.com/ben-manes/caffeine/wiki/Benchmarks">https://github.com/ben-manes/caffeine/wiki/Benchmarks</a><p>5. Data structures are surprisingly tricky. For example, see this analysis showing how an accidental misunderstanding degraded an LRU to O(n) eviction:
    <a href="https://gist.github.com/ben-manes/6312727adfa2235cb7c5e25cae523ad0" rel="nofollow">https://gist.github.com/ben-manes/6312727adfa2235cb7c5e25cae...</a><p>6. It is important to remember that the goal of a benchmark is never to determine which is faster or by how much. It is to ask (a) is it fast enough? (b) might I reach a point where it will not be? and (c) does it degrade unexpectedly? This is to say the winner is of little interest; once all the choices are good enough, it is about usability, features, documentation, friendliness, and so on.</p>
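Regarding point 5, the baseline that the linked analysis compares against is the textbook O(1) LRU, which the JDK can express directly. A minimal sketch using `LinkedHashMap` in access order (my illustration, not the code from the gist):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal O(1) LRU cache built on the JDK's LinkedHashMap with
// accessOrder=true: every get/put is a hash lookup plus a constant-time
// relink in the internal doubly-linked list, and eviction removes the
// eldest (least recently used) entry. Not thread-safe; illustration only.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCache(int maxSize) {
        super(16, 0.75f, /* accessOrder= */ true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict once over capacity
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);       // touch 1, making 2 the least recently used
        cache.put(3, "c");  // evicts 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```

The point of the gist is that a fork which loses the constant-time relink silently turns each of these operations into an O(n) scan.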
]]></description><pubDate>Thu, 06 Mar 2025 01:57:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43275392</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=43275392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43275392</guid></item><item><title><![CDATA[New comment by NovaX in "3,200% CPU Utilization"]]></title><description><![CDATA[
<p>How about the senior-engineer-level error of forking Doug Lea's concurrent data structures only to make them operate at worst-case time complexity? Found this doozy recently.<p>[1] <a href="https://gist.github.com/ben-manes/6312727adfa2235cb7c5e25cae523ad0" rel="nofollow">https://gist.github.com/ben-manes/6312727adfa2235cb7c5e25cae...</a></p>
]]></description><pubDate>Sun, 02 Mar 2025 18:27:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43233368</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=43233368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43233368</guid></item><item><title><![CDATA[New comment by NovaX in "Analyzing the codebase of Caffeine, a high performance caching library"]]></title><description><![CDATA[
<p>I think the idea is that the cache is so large that hot data won't be forced out by one-hit wonders. In this 2017 talk [1], the speaker says that Twitter's SLA depends on having a 99.9% hit rate. It is very common for popular sites to have an extremely over-provisioned remote caching tier. That makes eviction less important, and reducing their operational costs comes from purging expired data more proactively. Hence, memcached switched away from relying on its LRU to discard expired entries to using a sweeper. Caffeine's approach, a timing wheel, was considered, but dormando felt it was too much of an internal change for memcached and that the sweeper could serve multiple purposes.<p>[1] <a href="https://www.youtube.com/watch?v=kxMKnx__uso" rel="nofollow">https://www.youtube.com/watch?v=kxMKnx__uso</a></p>
]]></description><pubDate>Tue, 04 Feb 2025 04:34:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=42928007</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42928007</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42928007</guid></item><item><title><![CDATA[New comment by NovaX in "Analyzing the codebase of Caffeine, a high performance caching library"]]></title><description><![CDATA[
<p>You might be interested in this thread [1] where I described an idea for how to incorporate the latency penalty into the eviction decision. A developer even hacked a prototype that showed promise. The problem is that there is not enough variety in the available trace data to be confident that a design isn't overfit to a particular workload and actually generalizes. As more data sets become available, it will become possible to experiment with ideas and fix unexpected issues until a correct, simple, elegant design emerges.<p>[1] <a href="https://github.com/ben-manes/caffeine/discussions/1744">https://github.com/ben-manes/caffeine/discussions/1744</a></p>
]]></description><pubDate>Mon, 03 Feb 2025 02:16:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=42914212</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42914212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42914212</guid></item><item><title><![CDATA[New comment by NovaX in "Analyzing the codebase of Caffeine, a high performance caching library"]]></title><description><![CDATA[
<p>As the article mentions, Caffeine's approach is to monitor the workload and adapt to these phase changes. This stress test [1] demonstrates shifting back and forth between LRU and MRU request patterns, and the cache reconfiguring itself to maximize the hit rate. Unfortunately, most policies are either not adaptive or adapt poorly.<p>Thankfully, most workloads follow a relatively consistent pattern, so it is an atypical worry. The algorithm designers usually have a target scenario, like a CDN or a database, so they generally skip reporting the low-performing workloads. That may work for a research paper, but when providing a library we cannot know what our users' workloads are, nor should we expect engineers to invest in selecting the optimal algorithm. Caffeine's adaptivity removes this burden and broadens its applicability, and other language ecosystems have been slowly adopting similar ideas in their caching libraries.<p>[1] <a href="https://github.com/ben-manes/caffeine/wiki/Efficiency#adaptivity">https://github.com/ben-manes/caffeine/wiki/Efficiency#adapti...</a></p>
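To sketch the adaptivity idea only (a toy hill climber; the class name, step size, and "window fraction" knob here are invented for illustration and are not Caffeine's implementation): sample the hit rate periodically, keep moving a tuning knob in the same direction while the hit rate improves, and reverse direction when it degrades.

```java
// Toy hill climber illustrating hit-rate-driven adaptation. Each call to
// adapt() receives one sample period's hit rate; if it got worse than the
// previous sample, the signed step is reversed, then the knob is nudged
// and clamped to [0, 1]. All names and constants are hypothetical.
public class HillClimber {
    private double windowFraction = 0.01; // the knob being tuned
    private double previousHitRate = 0.0;
    private double step = 0.05;           // signed step size

    public double adapt(double hitRate) {
        if (hitRate < previousHitRate) {
            step = -step; // hit rate degraded: reverse direction
        }
        windowFraction = Math.min(1.0, Math.max(0.0, windowFraction + step));
        previousHitRate = hitRate;
        return windowFraction;
    }

    public static void main(String[] args) {
        HillClimber climber = new HillClimber();
        double w1 = climber.adapt(0.50); // improving: keep growing
        double w2 = climber.adapt(0.40); // degraded: reverse and shrink
        System.out.println(w1 + " -> " + w2);
    }
}
```

A real implementation samples over large request windows and decays the step size so the configuration converges, but the feedback loop has this shape.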
]]></description><pubDate>Sun, 02 Feb 2025 21:07:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=42911840</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42911840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42911840</guid></item><item><title><![CDATA[New comment by NovaX in "Analyzing the codebase of Caffeine, a high performance caching library"]]></title><description><![CDATA[
<p>Yep, no bots. A real bug not only means that I wasted someone else's time; the report is also a gift pointing to an improvement. If it's a misunderstanding, then it's motivating evidence that my project is used, and it deserves a generous reply. This perspective, and treating it strictly as a hobby rather than as criticism or a demand for work, makes OSS feel more sustainable.</p>
]]></description><pubDate>Sun, 02 Feb 2025 19:41:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=42911131</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42911131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42911131</guid></item><item><title><![CDATA[New comment by NovaX in "Analyzing the codebase of Caffeine, a high performance caching library"]]></title><description><![CDATA[
<p>Thanks Jonathan!</p>
]]></description><pubDate>Sun, 02 Feb 2025 19:14:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42910926</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42910926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42910926</guid></item><item><title><![CDATA[New comment by NovaX in "Analyzing the codebase of Caffeine, a high performance caching library"]]></title><description><![CDATA[
<p>I tried to reimplement Linux’s algorithm in [1], but I cannot be sure about its correctness. They adjust the fixed sizes at construction based on the device’s total memory, so it varies between, say, a phone and a server. The fast trace simulation in the CI [2] may be informative (see DClock). Segmentation is very common; algorithms differ in how they promote entries and how (or if) they adapt the segment sizes.<p>[1] <a href="https://github.com/ben-manes/caffeine/blob/master/simulator/src/main/java/com/github/benmanes/caffeine/cache/simulator/policy/irr/DClockPolicy.java">https://github.com/ben-manes/caffeine/blob/master/simulator/...</a><p>[2] <a href="https://github.com/ben-manes/caffeine/actions/runs/13086596566#summary-36518459069">https://github.com/ben-manes/caffeine/actions/runs/130865965...</a></p>
]]></description><pubDate>Sun, 02 Feb 2025 19:07:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=42910876</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42910876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42910876</guid></item><item><title><![CDATA[New comment by NovaX in "Analyzing the codebase of Caffeine, a high performance caching library"]]></title><description><![CDATA[
<p>Doesn’t reddit use Cassandra, Solr, and Kafka, which use Caffeine?</p>
]]></description><pubDate>Sun, 02 Feb 2025 17:27:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42910095</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42910095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42910095</guid></item><item><title><![CDATA[New comment by NovaX in "iTerm2 critical security release"]]></title><description><![CDATA[
<p>I've resorted to using Cmd-Shift-J (scrollback buffer) and grepping that, but it's flaky about whether it will honor the command and emit a history file.</p>
]]></description><pubDate>Fri, 03 Jan 2025 00:34:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=42580739</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42580739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42580739</guid></item><item><title><![CDATA[Analyzing the codebase of a high performance caching library]]></title><description><![CDATA[
<p>Article URL: <a href="https://adriacabeza.github.io/2024/07/12/caffeine-cache.html">https://adriacabeza.github.io/2024/07/12/caffeine-cache.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42104791">https://news.ycombinator.com/item?id=42104791</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 11 Nov 2024 05:39:58 +0000</pubDate><link>https://adriacabeza.github.io/2024/07/12/caffeine-cache.html</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42104791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42104791</guid></item><item><title><![CDATA[New comment by NovaX in "JVM Anatomy Quarks"]]></title><description><![CDATA[
<p>That’s in the works: it adapts from 16 MB to terabyte heaps. The current GCs have a max, with lazy allocation and the ability to release memory back to the system periodically, but they are not as system-aware.<p>1. <a href="https://openjdk.org/jeps/8329758" rel="nofollow">https://openjdk.org/jeps/8329758</a><p>2. <a href="https://m.youtube.com/watch?v=wcENUyuzMNM&embeds_referring_euri=https%3A%2F%2Fwww.reddit.com%2F" rel="nofollow">https://m.youtube.com/watch?v=wcENUyuzMNM&embeds_referring_e...</a></p>
]]></description><pubDate>Sun, 10 Nov 2024 22:21:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42102993</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=42102993</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42102993</guid></item><item><title><![CDATA[New comment by NovaX in "Smartphone buyers meh on AI, care more about battery life"]]></title><description><![CDATA[
<p>It will likely become available for application developers to use. At work, we use it to assist warehouse check-ins by allowing the guard to take photos of the truck, paperwork, seal, etc., and fill out the forms going in and out. If built in, it can run on-device, so over time a lot more workflows can become seamless.</p>
]]></description><pubDate>Fri, 25 Oct 2024 20:32:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=41949379</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=41949379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41949379</guid></item><item><title><![CDATA[New comment by NovaX in "Lessons learned from profiling an algorithm in Rust"]]></title><description><![CDATA[
<p>Endianness indicates the byte order (MSB->LSB or LSB->MSB), but does not change the representation of a bit. The conditional logic says that an even result is 32 and an odd one is 0 after a modulus of two. In binary, that (x & 1) is 0 for even and 1 for odd. When you shift that left by 5, it is 0 for even and 32 for odd, which is the opposite of the conditional logic. This is why I suggested using a bitwise NOT to flip the bits so that you get the same result as the original.</p>
]]></description><pubDate>Mon, 14 Oct 2024 20:39:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=41841726</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=41841726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41841726</guid></item><item><title><![CDATA[New comment by NovaX in "Lessons learned from profiling an algorithm in Rust"]]></title><description><![CDATA[
<p>I believe it is (!i & 32) because the bitwise version is an incorrect rewrite.<p>[1] <a href="https://news.ycombinator.com/item?id=41830016">https://news.ycombinator.com/item?id=41830016</a></p>
]]></description><pubDate>Mon, 14 Oct 2024 16:46:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=41839263</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=41839263</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41839263</guid></item><item><title><![CDATA[New comment by NovaX in "Lessons learned from profiling an algorithm in Rust"]]></title><description><![CDATA[
<p>I'm confused, because isn't the bitwise version the inverted logic? If the LSB is 1 then it is an odd value, which should yield zero, yet that is shifted to become 32. The original modulus maps an even value to 32. Shouldn't the original code or the compiler invert it first? I'd expect<p><pre><code>    let shift = ((!(i >> 5) & 1) << 5);
</code></pre>
EDIT:
The compiler uses "vpandn" with the conditional version and "vpand" with the bitwise version. The difference is that vpandn performs a bitwise NOT on the first source operand. It looks like the compiler and I are correct: the author's bitwise version is inverted, and the incorrect code was merged in the author's commit. Also, I think this could be reduced to just (!i & 32).</p>
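The equivalence can be checked mechanically. A small Java sketch (Java's `~` plays the role of Rust's integer `!`; the conditional form below is my reading of the thread, not the author's exact code):

```java
// Cross-check of the claim above: for the conditional pattern described
// (32 when i >> 5 is even, 0 when odd), the longhand bitwise rewrite
// ((~(i >> 5) & 1) << 5) and the reduced (~i & 32) agree everywhere in
// range. The conditional's exact shape is an assumption for illustration.
public class ShiftCheck {
    static int conditional(int i)  { return ((i >> 5) % 2 == 0) ? 32 : 0; }
    static int bitwiseLong(int i)  { return (~(i >> 5) & 1) << 5; }
    static int bitwiseShort(int i) { return ~i & 32; }

    public static void main(String[] args) {
        for (int i = 0; i < 128; i++) {
            if (conditional(i) != bitwiseLong(i)
                    || conditional(i) != bitwiseShort(i)) {
                throw new AssertionError("mismatch at " + i);
            }
        }
        System.out.println("all three agree for 0..127");
    }
}
```

All three agree because (i >> 5) % 2 is exactly bit 5 of a non-negative i, so the whole expression reduces to testing whether bit 5 is clear.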
]]></description><pubDate>Sun, 13 Oct 2024 17:53:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=41830016</link><dc:creator>NovaX</dc:creator><comments>https://news.ycombinator.com/item?id=41830016</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41830016</guid></item></channel></rss>