<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: simscitizen</title><link>https://news.ycombinator.com/user?id=simscitizen</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 08:38:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=simscitizen" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by simscitizen in "Age of Empires: 25 years of pathfinding problems with C++ [video]"]]></title><description><![CDATA[
<p>DE is definitely not meant for older computers; it contains gigabytes of 2D sprites.</p>
]]></description><pubDate>Sat, 14 Feb 2026 02:30:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47010933</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=47010933</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47010933</guid></item><item><title><![CDATA[New comment by simscitizen in "Vm.overcommit_memory=2 is the right setting for servers"]]></title><description><![CDATA[
<p>There's already a popular OS that disables overcommit by default: Windows. The problem is that disallowing overcommit (especially with software that doesn't expect it) can mean you never get anywhere close to actually using all the RAM that's installed on your system.</p>
]]></description><pubDate>Sat, 20 Dec 2025 05:26:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46333843</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=46333843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46333843</guid></item><item><title><![CDATA[New comment by simscitizen in "Simple trick to increase coverage: Lying to users about signal strength"]]></title><description><![CDATA[
<p>It is surfaced to apps, but the "just detect that connectivity sucks" heuristic turns out to be not all that easy to implement. There doesn't seem to be a better heuristic than "try, and let the app decide if it has waited too long".<p>There are some comments here: <a href="https://developer.apple.com/documentation/systemconfiguration/scnetworkreachability-g7d?language=objc" rel="nofollow">https://developer.apple.com/documentation/systemconfiguratio...</a></p>
]]></description><pubDate>Tue, 04 Nov 2025 00:00:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45805989</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=45805989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45805989</guid></item><item><title><![CDATA[New comment by simscitizen in "SQLite concurrency and why you should care about it"]]></title><description><![CDATA[
<p>Copying the file likely forces the creation of a new one with no or lower filesystem fragmentation (e.g. a 1MB file probably gets assigned to 1MB of consecutive FS blocks). Then those FS blocks likely get assigned to flash dies in a way that makes sense (i.e. the FS blocks are evenly distributed across flash dies). This can improve I/O perf by some constant factor. See <a href="https://www.usenix.org/system/files/fast24-jun.pdf" rel="nofollow">https://www.usenix.org/system/files/fast24-jun.pdf</a> for more explanation.<p>I would say that the much more common degradation is caused by write amplification due to a nearly full flash drive (or a flash drive that appears nearly full to the FTL because the system doesn't implement some TRIM-like mechanism to tell the FTL about free blocks). This generally leads to systemwide slowdown, though, rather than slowdown accessing just one particular file.<p>This was especially prevalent on some older Android devices that didn't implement TRIM or an equivalent feature (this affected even Google's own devices, like the Nexus 7).</p>
]]></description><pubDate>Sun, 02 Nov 2025 05:12:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45787976</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=45787976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45787976</guid></item><item><title><![CDATA[New comment by simscitizen in "A 16.67 Millisecond Frame"]]></title><description><![CDATA[
<p>Not really something anyone can change at this point, given that the entire web API presumes an execution model where everything logically happens on the main thread (and code can and does expect to observe those state changes synchronously).</p>
]]></description><pubDate>Thu, 09 Oct 2025 22:29:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45533722</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=45533722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45533722</guid></item><item><title><![CDATA[New comment by simscitizen in "Subtleties of SQLite Indexes"]]></title><description><![CDATA[
<p>I agree: all of these rules aren't the right way to teach people how to reason about this. All of the perf properties described should fall out of the understanding that both tables and indices in SQLite are B-trees. B-trees have the following properties:<p>- you can look up a key or key prefix in O(log N) time ("seek a cursor" in DB parlance, or maybe "find/find prefix and return an iterator" for regular programmers)<p>- you can iterate to the next row in amortized O(1) time ("advance a cursor" in DB parlance, or maybe "advance an iterator" for regular programmers). Note that unordered data structures like hash maps don't have this property, so the mental model has to start with tables/indices being ordered data structures, or you're already lost.<p>A table is a B+tree where the key is the rowid and the value is the row (well, except for WITHOUT ROWID tables).<p>An index is a B-tree where the key is the indexed column(s) and the value is a rowid.<p>And SQLite generally only does simple nested loop joins. No hash joins etc. Just the most obvious joining you could do if you yourself wrote database-like logic using ordered data structures with the same perf properties, e.g. std::map.<p>From this it ought to be pretty obvious why column order in an index matters, etc.</p>
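To make that concrete, here is a minimal sketch using Python's built-in sqlite3 module (the table and index names are made up for the example): a query on a prefix of the index key can seek the index B-tree, while a query on a non-prefix column cannot, and SQLite's EXPLAIN QUERY PLAN shows the difference.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a INTEGER, b INTEGER, c TEXT)")
con.execute("CREATE INDEX idx_ab ON t(a, b)")  # B-tree keyed on (a, b)

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite will execute the query;
    # the human-readable detail is the last column of each row.
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# (a) is a prefix of the index key (a, b): SQLite can seek idx_ab.
print(plan("SELECT * FROM t WHERE a = 1"))  # mentions USING INDEX idx_ab
# (b) alone is not a key prefix: SQLite falls back to scanning the table.
print(plan("SELECT * FROM t WHERE b = 1"))  # a SCAN, no index seek
```

The exact plan text varies slightly between SQLite versions, but the seek-vs-scan distinction is stable, and it is exactly the "look up a key prefix in O(log N)" property above.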
]]></description><pubDate>Mon, 29 Sep 2025 22:33:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45419677</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=45419677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45419677</guid></item><item><title><![CDATA[New comment by simscitizen in "iPhone dumbphone"]]></title><description><![CDATA[
<p>That’s exactly what he meant by using Focus Modes, which is the iOS feature that lets you do just that.</p>
]]></description><pubDate>Tue, 09 Sep 2025 02:15:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45176622</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=45176622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45176622</guid></item><item><title><![CDATA[New comment by simscitizen in "uBlock Origin Lite now available for Safari"]]></title><description><![CDATA[
<p>If you type the name of the person, it should allow you to create a filter for "Messages with: Person". It should also pop up a filter bubble for photos. From there I think you can type in some query and it should search the photos by text. I don't think you can add your date filter, though.<p>The second way would be to open that conversation view and click on the contact icon at the top of the view, which should bring you to a details page that lists a bunch of metadata and settings about the conversation (e.g. participants, hide alerts, ...). One of the sections shows all photos from that conversation. Browse that until you find the one you care about.</p>
]]></description><pubDate>Wed, 06 Aug 2025 04:48:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44807741</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=44807741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44807741</guid></item><item><title><![CDATA[New comment by simscitizen in "The case of the UI thread that hung in a kernel call"]]></title><description><![CDATA[
<p>Oh, I've debugged this before. A native memory allocator had a scavenge function which suspended all other threads. A managed language runtime had a stop-the-world phase which suspended all mutator threads. They ran at about the same time and ended up suspending each other. To fix this you need to enforce some sort of hierarchy or mutual exclusion for suspension requests.<p>> Why you should never suspend a thread in your own process.<p>This sounds like a good general principle, but suspending threads in your own process is kind of necessary for e.g. many GC algorithms. Now imagine multiple such runtimes running in the same process.</p>
]]></description><pubDate>Tue, 15 Apr 2025 17:46:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43696145</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=43696145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43696145</guid></item><item><title><![CDATA[New comment by simscitizen in "Servo vs. Ladybird"]]></title><description><![CDATA[
<p>> It is insane that we have to dedicate multiple gigabytes of RAM, have CPUs 1000x faster than we had back with Netscape Navigator<p>Webpages are applications. Browsers are application runtimes. The main culprit driving high memory usage is not the runtime, but the application.</p>
]]></description><pubDate>Wed, 26 Mar 2025 17:50:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=43484804</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=43484804</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43484804</guid></item><item><title><![CDATA[New comment by simscitizen in "The NIH is being slashed and burned, not "reformed""]]></title><description><![CDATA[
<p>And replace him with JD Vance?</p>
]]></description><pubDate>Sun, 02 Mar 2025 04:49:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=43227423</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=43227423</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43227423</guid></item><item><title><![CDATA[New comment by simscitizen in "Fixing C Strings"]]></title><description><![CDATA[
<p>There are quite a few of these "better C string" idioms floating around.<p>Another one to consider is e.g. <a href="https://github.com/antirez/sds">https://github.com/antirez/sds</a> (used by Redis), which instead stores the string contents in-line with the metadata.</p>
]]></description><pubDate>Tue, 17 Dec 2024 23:56:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=42446774</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=42446774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42446774</guid></item><item><title><![CDATA[New comment by simscitizen in "The number given as % CPU in Activity Monitor"]]></title><description><![CDATA[
<p>Pretty sure it’s just scheduled CPU time / wall clock time. If you have multiple cores then scheduled CPU time can be greater than wall clock time.<p>Also, scheduled CPU time doesn’t take into account frequency scaling or core type, as explained in the article. It’s just how much time the OS scheduler has given the process’s threads on some core.</p>
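A rough sketch of that definition in Python (the helper name is made up, and real tools read scheduler accounting from the kernel rather than timing themselves): %CPU is just CPU time consumed divided by wall-clock time elapsed.

```python
import time

def cpu_percent(busy_seconds=0.2):
    # Approximate the Activity Monitor definition:
    # %CPU = scheduled CPU time / wall-clock time * 100.
    wall0 = time.monotonic()
    cpu0 = time.process_time()  # CPU time across all threads of this process
    while time.monotonic() - wall0 < busy_seconds:
        pass                    # busy-wait, so CPU time roughly tracks wall time
    wall = time.monotonic() - wall0
    cpu = time.process_time() - cpu0
    return 100.0 * cpu / wall

# One busy thread reports close to 100%; N busy threads on N cores could
# exceed 100%, because CPU time accumulates faster than wall-clock time.
```

Note this counts seconds of CPU time regardless of whether they were spent on a fast or slow core, at a high or low clock frequency, which is the blind spot the article describes.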
]]></description><pubDate>Sat, 09 Nov 2024 18:44:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=42096091</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=42096091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42096091</guid></item><item><title><![CDATA[New comment by simscitizen in "Warning: DNS encryption in Little Snitch 6.1 may occasionally fail"]]></title><description><![CDATA[
<p>That’s not the current documentation, as evidenced by the “archive” in the URL.<p>If you want to stay at a lower level the recommendation these days is to use Network.framework. If you want something higher level then use CFNetwork (probably through the classes exported by Foundation like NSURLSession).</p>
]]></description><pubDate>Wed, 18 Sep 2024 01:40:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=41574856</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=41574856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41574856</guid></item><item><title><![CDATA[New comment by simscitizen in "Logbookd – SQLite Backed Syslogd"]]></title><description><![CDATA[
<p>Inserting a new row for each log line in autocommit mode would be absurdly inefficient compared to just appending a log line to a file.</p>
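A minimal sketch of the difference with Python's sqlite3 module (against an in-memory database for brevity; on a real on-disk database each autocommit INSERT also pays for a journal write and fsync, so the gap is far larger):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # manage transactions ourselves
con.execute("CREATE TABLE log(line TEXT)")

# Autocommit: every INSERT is its own transaction, one commit per log line.
for i in range(1000):
    con.execute("INSERT INTO log VALUES (?)", (f"line {i}",))

# Batched: many log lines per transaction, one commit for the whole batch.
con.execute("BEGIN")
for i in range(1000):
    con.execute("INSERT INTO log VALUES (?)", (f"line {i}",))
con.execute("COMMIT")
```

On disk, the batched variant amortizes the per-commit durability cost across the whole batch, which is why a syslogd built on SQLite would normally want to buffer lines and flush them in groups.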
]]></description><pubDate>Mon, 16 Sep 2024 04:21:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=41552736</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=41552736</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41552736</guid></item><item><title><![CDATA[New comment by simscitizen in "Logbookd – SQLite Backed Syslogd"]]></title><description><![CDATA[
<p>How does this work exactly? Is every log line a separate transaction in autocommit mode? Because I don't see any begin/commit statements in this codebase so far...</p>
]]></description><pubDate>Sun, 15 Sep 2024 17:04:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41548691</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=41548691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41548691</guid></item><item><title><![CDATA[New comment by simscitizen in "Modular Monoliths Are a Good Idea"]]></title><description><![CDATA[
<p>The main ones were www which contained most of the PHP code and fbcode which contained most of the other backend services. There were actually separate repos for the mobile apps also.</p>
]]></description><pubDate>Fri, 13 Sep 2024 23:17:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=41536066</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=41536066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41536066</guid></item><item><title><![CDATA[New comment by simscitizen in "Flame Graphs: Making the opaque obvious (2017)"]]></title><description><![CDATA[
<p>Peaks don't matter; they just correspond to the depth of the call stack.<p>Probably the simplest way to use the flamegraph is to work from the bottom of it and walk upwards until you find something interesting to optimize. Ideally you find something wide to optimize that makes sense. (The widest thing here is "main", which is probably not the interesting thing to optimize, so you would work upwards from there.) The basic idea is that things that are wide in the flamegraph are expensive and are potential things to optimize.<p>Where I work, we have tools that can produce diffed flamegraphs, which can be really useful in figuring out why one trace uses so much more/less CPU than another.</p>
]]></description><pubDate>Thu, 27 Jun 2024 19:16:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=40813991</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=40813991</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40813991</guid></item><item><title><![CDATA[New comment by simscitizen in "Flame Graphs: Making the opaque obvious (2017)"]]></title><description><![CDATA[
<p>As an example, imagine you sampled a program and got 5 CPU call stack samples.<p><pre><code>  c               c
  b               b               d
  a       a       a       a       a
  main    main    main    main    main
</code></pre>
In a flamegraph, you would see:<p><pre><code>  [c              ]
  [b              ][d     ]
  [a                                     ]
  [main                                  ]</code></pre></p>
]]></description><pubDate>Thu, 27 Jun 2024 18:00:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=40813219</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=40813219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40813219</guid></item><item><title><![CDATA[New comment by simscitizen in "Flame Graphs: Making the opaque obvious (2017)"]]></title><description><![CDATA[
<p>> I do memory/CPU traces like all day every day, and I fix code all the time based on that<p>So I take it you understand one of the standard visualizations produced by a CPU profiling tool that takes a call stack sample every millisecond: the x-axis is time (one sample per ms, if you have only one CPU), and the y-axis is the call stack.<p>Now for a flamegraph, you have basically the same visualization, but you sort the samples across the x-axis so that call stacks that start with the same prefixes are grouped together.<p>Incidentally, sorting the samples across the x-axis destroys the time information, which is often critical. I constantly see engineers who don't understand flamegraphs look at the entire flamegraph of, say, a 3 second trace, and then try to use that whole flamegraph to optimize a 100 ms critical path within the trace, which is totally nonsensical.</p>
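A tiny sketch of that aggregation step in Python (the function name is made up): merging samples by shared stack prefix is what turns raw time-ordered samples into flamegraph frame widths, and it is exactly where the time axis is lost.

```python
from collections import Counter

def frame_widths(samples):
    """samples: call stacks, outermost frame first, e.g. ["main", "a", "b"].
    Returns the width (sample count) of every frame in the flamegraph."""
    widths = Counter()
    for stack in samples:
        # Every prefix of the stack is a frame this sample sits under,
        # so the sample contributes one unit of width at each depth.
        for depth in range(1, len(stack) + 1):
            widths[tuple(stack[:depth])] += 1
    return widths

# Five example samples; note their original order no longer matters.
samples = [
    ["main", "a", "b", "c"],
    ["main", "a"],
    ["main", "a", "b", "c"],
    ["main", "a"],
    ["main", "a", "d"],
]
# frame_widths(samples) gives main a width of 5, main->a a width of 5,
# main->a->b->c a width of 2, and main->a->d a width of 1.
```

Shuffling `samples` produces the identical result, which is precisely why a whole-trace flamegraph cannot tell you where inside the trace a critical path's time went.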
]]></description><pubDate>Thu, 27 Jun 2024 17:31:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=40812929</link><dc:creator>simscitizen</dc:creator><comments>https://news.ycombinator.com/item?id=40812929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40812929</guid></item></channel></rss>