<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: _rlh</title><link>https://news.ycombinator.com/user?id=_rlh</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 08:42:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=_rlh" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by _rlh in "Understanding the Go Runtime: The Scheduler"]]></title><description><![CDATA[
<p>“It’s a problem that only go can solve”<p>I had this discussion a decade ago and concluded that a reasonably fair scheduler could be built on top of the Go runtime scheduler by gating the work presented to it. The case can be made that the application is the proper, if not the only, place to do this. Performance aside, if you encounter a runtime limitation, filing an issue is how the Go community moves forward.</p>
]]></description><pubDate>Sun, 15 Mar 2026 14:30:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47387739</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=47387739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47387739</guid></item><item><title><![CDATA[New comment by _rlh in "Nature vs Golang: Performance Benchmarking"]]></title><description><![CDATA[
<p>Go's allocator draws on the Hoard work, as do most modern alloc/free implementations. Similar C/C++/Rust-flavored implementations do not seem to "inevitably lead to memory fragmentation issues". Perhaps this fragmentation concern is a myth carried over from earlier malloc/free or GC algorithms.</p>
]]></description><pubDate>Sun, 18 Jan 2026 00:42:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46663689</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=46663689</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46663689</guid></item><item><title><![CDATA[New comment by _rlh in "Hyrum’s Law in Golang"]]></title><description><![CDATA[
<p>Fred Brooks discussed this in the unfortunately punning "The Mythical Man-Month". Most of the graybeards have read it; ask to borrow it, it will make their day. The punchline was that on the IBM 360 they stopped fixing bugs when a fix caused as many or more bugs than the unfixed bug, which soon became all bugs.<p>Well aware of Brooks, the Go team, when the loop variable semantics were changed, did an analysis showing that many more bugs were fixed than created by the change.</p>
]]></description><pubDate>Tue, 26 Nov 2024 12:08:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42244975</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=42244975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42244975</guid></item><item><title><![CDATA[New comment by _rlh in "Conservative GC can be faster than precise GC"]]></title><description><![CDATA[
<p>It was a memory model / two-word atomicity problem. The mutator uses two writes, one for the type and one for the value, to create the interface. The GC concurrently reads the two words of the interface to see whether the value is a pointer or not. This is a race that was considered too expensive / complicated to fix.</p>
]]></description><pubDate>Wed, 11 Sep 2024 04:31:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=41508037</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=41508037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41508037</guid></item><item><title><![CDATA[New comment by _rlh in "Techniques for safe garbage collection in Rust"]]></title><description><![CDATA[
<p>Go's defragmentation techniques, and why they work, are discussed in the Hoard papers and have proven their value not only in Go but in most malloc implementations.<p>There is a relationship between cache locality, moving colocated objects to different cache lines to control fragmentation, value types, and interior pointers. Perhaps it is subtle, but cache optimization is really important for performance and is not ignored by Go in the language spec or the runtime implementation.</p>
]]></description><pubDate>Wed, 28 Aug 2024 11:53:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=41378498</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=41378498</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41378498</guid></item><item><title><![CDATA[New comment by _rlh in "Why is D's garbage collection slower than Go's?"]]></title><description><![CDATA[
<p>Still one of the best ideas in the field in recent years. I will note that it also works for non-moving collectors, and if they are precise, like Go's, they can also update pointers and eliminate the redundant page table entries.</p>
]]></description><pubDate>Sat, 29 Oct 2022 11:48:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=33383200</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=33383200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33383200</guid></item><item><title><![CDATA[New comment by _rlh in "Go does not need a Java-style GC"]]></title><description><![CDATA[
<p>Just for fun, set the Java heap to 0.4 GB, or use GOGC to set the Go heap to 1.7 GB. If Go is faster, try some other sizes and draw a graph to see what the lines look like.</p>
]]></description><pubDate>Fri, 26 Nov 2021 03:31:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=29347288</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=29347288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29347288</guid></item><item><title><![CDATA[New comment by _rlh in "Erlang Garbage Collection Details and Why It Matters (2015)"]]></title><description><![CDATA[
<p>I think you are confusing memory management with memory model. Memory management is about garbage collection, RC, malloc / free, and allocations. Memory models are about what happens when you read and write to shared mutable memory. I'm not a Erlang programmer but in general the Actors concurrency model does not support shared mutable memory. Don't be clever.</p>
]]></description><pubDate>Thu, 25 Mar 2021 14:06:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=26580450</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=26580450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26580450</guid></item><item><title><![CDATA[New comment by _rlh in "Google’s robots.txt parser is now open source"]]></title><description><![CDATA[
<p>Actually Go has the reputation of having solved many runtime problems including the GC tail latency problem.</p>
]]></description><pubDate>Tue, 02 Jul 2019 16:26:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=20336668</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=20336668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=20336668</guid></item><item><title><![CDATA[New comment by _rlh in "Java 12"]]></title><description><![CDATA[
<p>This thread reads eerily like the threads about Go's low-latency GC from 2015: how 10ms isn't good enough, how throughput will be impacted, and on and on. Three years later, Go treats any 500-microsecond pause as a bug while continuing to focus on throughput. Shenandoah is being put together by some very, very smart people, and I'm optimistic that the only thing standing between Java and the "500 microseconds is a bug" level is engineering hours and resources. More kudos for this achievement are in order.</p>
]]></description><pubDate>Wed, 20 Mar 2019 13:57:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=19442195</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=19442195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=19442195</guid></item><item><title><![CDATA[New comment by _rlh in "Proposed New Go GC: Transaction-Oriented Collector"]]></title><description><![CDATA[
<p>Not sure how to reach twotwotwo directly. I was hoping to get permission to rename the GC to ROC from TOC ("request" instead of "transaction"). It's simply a better name and the bird makes for better visuals on the slides. I'm Rick Hudson, rlh@, and can be emailed directly at golang.org.</p>
]]></description><pubDate>Tue, 28 Jun 2016 13:34:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=11993629</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=11993629</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11993629</guid></item><item><title><![CDATA[New comment by _rlh in "Show HN: Mmm – manual memory management for Go"]]></title><description><![CDATA[
<p>Some folks export GODEBUG=gctrace=1, which makes the GC dump out a lot of interesting information, including stop-the-world latency. It doesn't increase determinism, and the GC cycle still starts when the runtime decides, but it does provide visibility into the concurrent GC's latency in a benchmark or an actual application. Perhaps you already use it and that is how you noticed the latency problems in the first place.<p>Note that you need Go 1.5, released last August, to get the low-latency GC. If throughput is an issue, some folks adjust GOGC to use as much RAM as they can. If none of this helps, file an issue report with the Go team. You seem to have a nice, well-thought-out workaround, but a reproducible gctrace showing hundreds of milliseconds of GC latency on heaps with millions of objects will be of interest to the Go team and might really help them.<p>I hope this helps.</p>
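<p>For completeness, the setting is an environment variable, so no code changes are needed; "yourserver" below is a placeholder binary name:</p>

```shell
# Enable GC tracing; the runtime prints one summary line per GC cycle
# to stderr, including the stop-the-world clock times.
GODEBUG=gctrace=1 ./yourserver 2> gctrace.log

# Combine with a larger GOGC when trading RAM for fewer collections.
GODEBUG=gctrace=1 GOGC=400 ./yourserver 2> gctrace.log
```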
]]></description><pubDate>Tue, 01 Dec 2015 20:44:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=10658599</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=10658599</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10658599</guid></item><item><title><![CDATA[New comment by _rlh in "Show HN: Mmm – manual memory management for Go"]]></title><description><![CDATA[
<p>You used runtime.GC() in your benchmark. This tells the Go runtime to do an aggressive stop-the-world GC, which it does. The normal GC is concurrent, and if you use it, latency should not be a problem. For your use case I'd remove runtime.GC(), time the RPCs, and look for latency spikes. Report back if you can; if you are still seeing high latency, we can file a bug report with the Go team.</p>
]]></description><pubDate>Tue, 01 Dec 2015 11:19:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=10654609</link><dc:creator>_rlh</dc:creator><comments>https://news.ycombinator.com/item?id=10654609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10654609</guid></item></channel></rss>