<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: victorNicollet</title><link>https://news.ycombinator.com/user?id=victorNicollet</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 23:14:03 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=victorNicollet" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by victorNicollet in "Caesar's Last Breath"]]></title><description><![CDATA[
<p>The atmosphere is estimated to hold ~830 PgC worth of CO₂, and plants are estimated to photosynthesize ~120 PgC worth of CO₂ every year, so a given molecule has a ~14% chance of being broken down in any given year. The probability of surviving for 2000 years would then be vanishingly small, less than 1e-130 (0.855^2000).<p>Of course, the CO₂ content of the atmosphere has varied over the last 2000 years, and not all CO₂ is produced into or consumed from the atmosphere (it can be dissolved in surface water, etc).<p>EDIT: since there's much more O₂ than CO₂ in the atmosphere, a given O₂ molecule has an 8% chance of not being broken down by respiration over 2000 years.</p>
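<p>A back-of-the-envelope version of that calculation, using the figures above (both are rough order-of-magnitude estimates):

```typescript
// Rough survival estimate for a single CO2 molecule, using the figures
// above: ~830 PgC of CO2 in the atmosphere, ~120 PgC/yr photosynthesized.
const atmosphereCO2PgC = 830;
const photosynthesisPgCPerYear = 120;

// Chance that a given molecule is broken down in a given year.
const brokenPerYear = photosynthesisPgCPerYear / atmosphereCO2PgC;

// Chance of surviving 2000 consecutive years.
const survives2000Years = Math.pow(1 - brokenPerYear, 2000);

console.log(brokenPerYear.toFixed(3)); // "0.145"
console.log(survives2000Years);        // on the order of 1e-136
```

This ignores that the reservoir size and flux have varied over the period, as noted above.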
]]></description><pubDate>Fri, 23 May 2025 22:22:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44077146</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=44077146</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44077146</guid></item><item><title><![CDATA[New comment by victorNicollet in "Caesar's Last Breath"]]></title><description><![CDATA[
<p>I have seen a similar assertion, "some of the water molecules you drank today were once part of a dinosaur", which is false because water molecules do not last very long in the liquid phase (they continuously swap protons, turning into hydronium ions and back).<p>The O-O and N-N bonds are much stronger than O-H bonds, but there are still atmospheric processes that can break them. For instance, O₂ undergoes photodissociation under ultraviolet light and recombines into ozone (O₃), and N₂ likely also undergoes photodissociation. And obviously, there is the fact that living beings breathe O₂...</p>
]]></description><pubDate>Fri, 23 May 2025 15:08:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44073559</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=44073559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44073559</guid></item><item><title><![CDATA[New comment by victorNicollet in "Binary Formats Are Better Than JSON in Browsers"]]></title><description><![CDATA[
<p>We are using binary formats in production, also for data visualization and analysis. We went with a simple custom format: the serialized value can contain the standard JSON types (string, number, boolean, array, object), but can also contain JavaScript typed arrays (Uint8Array, Float32Array, etc). The serialized data contains the raw data of all the typed arrays, followed by a single JSON-serialized block of the original value with the typed arrays replaced by reference objects pointing to the appropriate parts of the raw data region.<p>For most data visualization tasks, the dataset is composed of roughly 5% JSON data and 95% a small number of large arrays (usually Float32Array) representing data table columns. The JSON takes time to parse, but it is small, and the large arrays can be created in constant time from the ArrayBuffer of the HTTP response (on big-endian machines, this becomes linear time for everything except Uint8Array).<p>For situations where hundreds of thousands of complex objects must be transferred, we usually pack those objects as several large arrays instead: for example, using a struct-of-arrays instead of an array-of-structs, and/or having a Uint8Array contain the binary serialization of each object, with a Uint32Array containing the bounds of each object. The objective is to make the initial parsing nearly instantaneous, and then to extract the individual objects on demand: this minimizes the total memory usage in the browser, and in the (typical) case where only a small subset of objects is being displayed or manipulated, the parsing time is reduced to that subset instead of the entire response.<p>The main difficulty is that the browser developer tools "network" tab does not properly display non-JSON values, so investigating an issue requires placing a breakpoint or console.log right after the parsing of a response...</p>
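<p>A minimal sketch of this kind of layout (the function names, the `__f32` stub shape and the 4-byte header are made up for illustration; the real format differs, and only Float32Array is handled here):

```typescript
// Layout: [4-byte JSON offset][raw typed-array data][JSON blob in which
// each Float32Array was replaced by a {__f32, length} stub pointing
// into the raw region]. Arrays are written first, so they stay 4-byte
// aligned and can be viewed in place without copying.

function pack(value: unknown): Uint8Array {
  const chunks: { data: Float32Array; offset: number }[] = [];
  let rawBytes = 4; // first 4 bytes hold the JSON start offset
  const json = JSON.stringify(value, (_key, v) => {
    if (v instanceof Float32Array) {
      const offset = rawBytes;
      rawBytes += v.byteLength;
      chunks.push({ data: v, offset });
      return { __f32: offset, length: v.length }; // reference stub
    }
    return v;
  });
  const jsonBytes = new TextEncoder().encode(json);
  const out = new Uint8Array(rawBytes + jsonBytes.length);
  new DataView(out.buffer).setUint32(0, rawBytes, true); // JSON start
  for (const c of chunks)
    out.set(new Uint8Array(c.data.buffer, c.data.byteOffset, c.data.byteLength), c.offset);
  out.set(jsonBytes, rawBytes);
  return out;
}

function unpack(bytes: Uint8Array): unknown {
  const view = new DataView(bytes.buffer, bytes.byteOffset);
  const jsonStart = view.getUint32(0, true);
  const json = new TextDecoder().decode(bytes.subarray(jsonStart));
  // Each stub becomes a constant-time view into the response buffer.
  return JSON.parse(json, (_key, v) =>
    v && typeof v === "object" && "__f32" in v
      ? new Float32Array(bytes.buffer, bytes.byteOffset + v.__f32, v.length)
      : v
  );
}
```

Note that the decoded arrays are views over the original buffer, which is what makes parsing nearly independent of array size.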
]]></description><pubDate>Wed, 14 May 2025 09:37:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43982617</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=43982617</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43982617</guid></item><item><title><![CDATA[New comment by victorNicollet in "Bloat is still software's biggest vulnerability (2024)"]]></title><description><![CDATA[
<p>I would say 4. grab individual code files (as opposed to entire libraries) and manually edit them, removing unnecessary features and adding new ones where needed.</p>
]]></description><pubDate>Wed, 07 May 2025 19:03:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43919447</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=43919447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43919447</guid></item><item><title><![CDATA[New comment by victorNicollet in "Bloat is still software's biggest vulnerability (2024)"]]></title><description><![CDATA[
<p>Tree-shaking is able to remove code that will never be called, and it's not necessarily good at it: we can detect some situations where a function is never called and remove that function, but mostly just the obvious situations such as "this function is never referenced".<p>It cannot detect a case such as: if the string argument to this function contains a substring shaped like XYZ, then replace that substring with a value from the environment variables (the Log4j vulnerability), or from the file system (the XML External Entity vulnerability). From the point of view of tree-shaking, this is legitimate code that could be called. This is the kind of vulnerable bloat that comes with importing large libraries (large in the sense of "has many complex features", rather than of megabytes).</p>
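<p>A hypothetical illustration of the kind of code tree-shaking must keep (the `secrets` table stands in for environment variables or the file system):

```typescript
// The "feature" below only activates when the input contains a
// ${secret:...} substring, yet no static analysis of call sites can
// remove it: renderMessage is referenced, so the whole body ships.
const secrets: Record<string, string> = { API_KEY: "hunter2" };

function renderMessage(template: string): string {
  return template.replace(/\$\{secret:([A-Z_]+)\}/g, (_m, name) => secrets[name] ?? "");
}

// The application only ever passes plain strings...
console.log(renderMessage("hello world")); // "hello world"
// ...but an attacker-controlled string still reaches the hidden path.
console.log(renderMessage("${secret:API_KEY}")); // "hunter2"
```

From the bundler's point of view, both paths are live code.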
]]></description><pubDate>Wed, 07 May 2025 06:41:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43912863</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=43912863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43912863</guid></item><item><title><![CDATA[New comment by victorNicollet in "As an experienced LLM user, I don't use generative LLMs often"]]></title><description><![CDATA[
<p>The snippet you shared is consistent with the kind of output I have also been seeing from LLMs: it looks correct overall, but contains mistakes and code quality problems, both of which would need human intervention to fix.<p>For example, why is the root object's entityType being passed to the recursive mergeEntities call, instead of extracting the field type from the propSchema?<p>Several uses of `as` (as well as repeated `result[key] === null` tests) could be eliminated by assigning `result[key]` to a named variable.<p>Yes, it's amazing that LLMs have reached the level where they can produce almost-correct, almost-clean code. The question remains whether making it correct and clean takes longer than writing it by hand.</p>
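<p>Since the snippet itself isn't reproduced here, a hypothetical example of the kind of cleanup meant (the `Entity` shape and `describe` are invented for illustration):

```typescript
interface Entity { id: string }

// One named variable replaces repeated result[key] lookups, and the
// null check narrows the type, so no `as` cast is needed afterwards.
function describe(result: Record<string, Entity | null>, key: string): string {
  const entry = result[key];
  if (entry === null) return "missing";
  return entry.id; // entry is narrowed to Entity here
}
```

TypeScript's control-flow narrowing does the work a cast would otherwise paper over.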
]]></description><pubDate>Tue, 06 May 2025 12:33:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=43904368</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=43904368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43904368</guid></item><item><title><![CDATA[New comment by victorNicollet in "Ask HN: Best way to simultaneously run multiple projects locally?"]]></title><description><![CDATA[
<p>We use the following steps:<p>- Each service listens on a different, fixed port (as others have recommended).<p>- Have a single command (incrementally) build and then run each service, completely equivalent to running it from your IDE. In our case, `dotnet run` does this out of the box.<p>- The above step is much easier if services load their configuration from files, as opposed to environment variables. The main configuration files are in source control; they never contain secrets. Instead, they contain secret identifiers that are used to load secrets from a secret store. In our case, those are `appsettings.json` files and the secret store is Azure KeyVault.<p>- An additional, optional configuration file for each application lives outside source control, in a standard location that is the same on every development machine (such as /etc/companyname/). This lets us have "personal" configuration that applies regardless of whether the service is launched from the IDE or the command line. In particular, when services need to communicate with each other, it lets us configure whether service A should use a localhost address for service B, a testing cluster address, or a production cluster address.<p>- We have a simple GUI application that lists all services. For each service it has a "Run" button that launches it with the command-line script, and a checkbox that means "other local services should expect this one to be running on localhost". This makes it very simple to, say, check three boxes, run two of them from the GUI, and run the third service from the IDE (to have debugging enabled).</p>
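<p>As a purely illustrative example, a configuration file in this style might look like the following; the keys, the localhost port and the secret identifier are all made up (the real files are .NET `appsettings.json` files and the identifiers refer to Azure KeyVault entries):

```json
{
  "ServiceB": { "BaseUrl": "http://localhost:5002" },
  "Database": {
    "Host": "db.internal.example",
    "PasswordSecretId": "prod-db-password"
  }
}
```

The file is safe to commit: "prod-db-password" is the name of a secret in the secret store, not the secret itself.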
]]></description><pubDate>Mon, 10 Mar 2025 08:20:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43318087</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=43318087</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43318087</guid></item><item><title><![CDATA[New comment by victorNicollet in "Printf debugging is ok"]]></title><description><![CDATA[
<p>Indeed. Worst week of 2023!<p>But I consider myself lucky that the issue could be reproduced on a local machine (admittedly, one with 8 cores and 64 GiB RAM) and not only on the 32-core, 256 GiB RAM server. Having to work remotely on a server could easily have added another week of investigation.</p>
]]></description><pubDate>Mon, 06 Jan 2025 11:54:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42609822</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42609822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42609822</guid></item><item><title><![CDATA[New comment by victorNicollet in "Printf debugging is ok"]]></title><description><![CDATA[
<p>One of the hardest bugs I've investigated required the extreme version of debugging with printf: sprinkling the code with dump statements to produce about 500GiB of compressed binary trace, and writing a dedicated program to sift through it.<p>The main symptom was a non-deterministic crash in the middle of a 15-minute multi-threaded execution that should have been 100% deterministic. The debugger revealed that the contents of an array had been modified incorrectly, but stepping through the code prevented the crash, and it was not always the same array or the same position within that array. I suspected that the array writes were somehow dependent on a race, but placing a data breakpoint prevented the crash. So, I started dumping trace information. It was a rather silly game of adding more traces, running the 15-minute process 10 times to see if the overhead of producing the traces made the race disappear, and trying again.<p>The root cause was a "read, decompress and return a copy of data X from disk" method which was called with the 2023 assumption that a fresh copy would be returned every time, but was written with the 2018 optimization that if two threads asked for the same data "at the same time", the same copy could be returned to both to save on decompression time...</p>
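<p>A toy sketch of the sifting idea (the record shape and names are invented): each instrumented site appends a small fixed-shape record, and a separate pass compares two runs to locate where a supposedly deterministic execution first diverged:

```typescript
// One record per dump statement: which site fired, on which thread,
// and a hash of the value observed there.
type TraceRecord = { site: number; thread: number; hash: number };

// Returns the index of the first differing record, the shorter length
// if one trace is a strict prefix of the other, or -1 if identical.
function firstDivergence(a: TraceRecord[], b: TraceRecord[]): number {
  const n = Math.min(a.length, b.length);
  for (let i = 0; i < n; i++) {
    if (a[i].site !== b[i].site || a[i].thread !== b[i].thread || a[i].hash !== b[i].hash)
      return i;
  }
  return a.length === b.length ? -1 : n;
}
```

At 500 GiB of trace, the real version of this has to stream from disk rather than hold both runs in memory, but the comparison logic is the same.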
]]></description><pubDate>Mon, 06 Jan 2025 11:45:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=42609775</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42609775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42609775</guid></item><item><title><![CDATA[New comment by victorNicollet in "Show HN: A remake of my 2004 PDA video game"]]></title><description><![CDATA[
<p>The trouble is that darklaga.bin will not work locally, because it must be fetched over an HTTP(S) connection. So there's a darklaga.local.html provided for running locally. The build sequence is "npm run pack" then "npm run build", after which darklaga.local.html should work.</p>
]]></description><pubDate>Sun, 05 Jan 2025 14:41:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=42602036</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42602036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42602036</guid></item><item><title><![CDATA[New comment by victorNicollet in "Show HN: A remake of my 2004 PDA video game"]]></title><description><![CDATA[
<p>Yes, the ship offset from the touch point was an intentional 2024 change. In the 2004 version, touch screens were used with a stylus, which was thin enough that you could see whatever you were touching, so the player's ship was centered on the stylus point of contact. With a finger, it's much harder to see...<p>For Safari, I ended up having to investigate a crash on iOS Safari specifically, because it did not happen on Mac Safari.<p>I considered itch.io but I am still undecided on the whole affair.</p>
]]></description><pubDate>Sun, 05 Jan 2025 14:36:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42601997</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42601997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42601997</guid></item><item><title><![CDATA[New comment by victorNicollet in "Show HN: A remake of my 2004 PDA video game"]]></title><description><![CDATA[
<p>The square pattern enemies are a bug (well, a level design mistake) dating back to 2004: they would move outside the range of the laser but would still be within reach of other weapons.<p>I must admit, since I no longer have access to the level editor, I edit the level files with HxD :-)</p>
]]></description><pubDate>Sun, 05 Jan 2025 14:30:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=42601955</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42601955</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42601955</guid></item><item><title><![CDATA[New comment by victorNicollet in "Show HN: A remake of my 2004 PDA video game"]]></title><description><![CDATA[
<p>On other browser/platform combinations I use fullscreen mode, which gets rid of the issue entirely. But it isn't supported on iOS Safari as far as I can tell.</p>
]]></description><pubDate>Sun, 05 Jan 2025 14:28:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=42601938</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42601938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42601938</guid></item><item><title><![CDATA[New comment by victorNicollet in "Show HN: A remake of my 2004 PDA video game"]]></title><description><![CDATA[
<p>Sadly I could no longer build the C++ version of the game because of some missing proprietary dependencies, but I don't expect there would have been noticeable improvements: it was already running at 60FPS on computers from the time, even in PocketPC emulation mode.</p>
]]></description><pubDate>Wed, 01 Jan 2025 10:58:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42565300</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42565300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42565300</guid></item><item><title><![CDATA[Show HN: A remake of my 2004 PDA video game]]></title><description><![CDATA[
<p>My background project for the last two years has been re-implementing my 2004 C++ shoot'em up game in TypeScript + WebGL, and it's finally done (just in time for the 20th anniversary!)<p>Play the game online: <a href="https://nicollet.net/blog/darklaga/darklaga.html" rel="nofollow">https://nicollet.net/blog/darklaga/darklaga.html</a><p>Technical article about the remake: <a href="https://nicollet.net/blog/darklaga/remake.html" rel="nofollow">https://nicollet.net/blog/darklaga/remake.html</a><p>I have tested Firefox, Chrome and Edge on desktop and mobile (I have no access to a device capable of running Safari).<p>It's amazing how much difference 20 years makes: the hardware is so much more powerful, the web as a deployment platform is so much easier than side-loading onto a PDA through a serial cable or sharing .exe files through e-mail, and my experience as a professional developer makes almost everything so much easier... but at the same time, it didn't feel like the language, editor or debugger (TypeScript in Visual Studio Code) were significantly better than good old Visual C++ 6.<p>Repository with the code of the remake: <a href="https://github.com/VictorNicollet/Darklaga">https://github.com/VictorNicollet/Darklaga</a> (sadly, I cannot provide the video and audio assets themselves under any open license).</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42557920">https://news.ycombinator.com/item?id=42557920</a></p>
<p>Points: 106</p>
<p># Comments: 23</p>
]]></description><pubDate>Tue, 31 Dec 2024 10:55:56 +0000</pubDate><link>https://nicollet.net/blog/darklaga/remake.html</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42557920</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42557920</guid></item><item><title><![CDATA[New comment by victorNicollet in "Monorepo – Our Experience"]]></title><description><![CDATA[
<p>I agree! We use the commit status instead of the PR status. A non-FF merge commit, being a commit, would have its own status, separate from the status of its parents.</p>
]]></description><pubDate>Wed, 06 Nov 2024 22:18:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=42070383</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42070383</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42070383</guid></item><item><title><![CDATA[New comment by victorNicollet in "Monorepo – Our Experience"]]></title><description><![CDATA[
<p>I suppose our development process is a bit unusual.<p>The meaning we give to "the commit is green" is not "this PR can be merged" but "this can be deployed to production", and it is used for the purpose of selecting a release candidate several times a week. It is a statement about the entire state of the project as of that commit, rather than just the changes introduced in that commit.<p>I can understand the frustration of creating a PR from a red commit on the main branch, and having that PR be red as well as a result. I can't say this has happened very often, though: red commits on the main branch are very rare, and new branches tend to be started right after a deployment, so it's overwhelmingly likely that the PR will be rooted at a green commit. When it does happen, the time it takes to push a fix (or a revert) to the main branch is usually much shorter than the time for a review of the PR, which means it is possible to rebase the PR on top of a green commit as part of the normal PR acceptance timeline.</p>
]]></description><pubDate>Wed, 06 Nov 2024 21:04:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=42069368</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42069368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42069368</guid></item><item><title><![CDATA[New comment by victorNicollet in "Monorepo – Our Experience"]]></title><description><![CDATA[
<p>Wouldn't CI be easier with a monorepo? Testing integration across multiple repositories (triggered by changes in any of them) seems more complex than just adding another test suite to a single repo.</p>
]]></description><pubDate>Wed, 06 Nov 2024 17:01:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42065313</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42065313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42065313</guid></item><item><title><![CDATA[New comment by victorNicollet in "Monorepo – Our Experience"]]></title><description><![CDATA[
<p>I'm not familiar with GitHub Actions, but we reverted our migration to Bitbucket Pipelines because of a nasty side-effect of conditional execution: if a commit triggers test suite T1 but not T2, and T1 is successful, Bitbucket displays that commit with a green "everything is fine" check mark, regardless of the status of T2 on any ancestors of that commit.<p>That is, the green check mark means "the changes in this commit did not break anything that was not already broken", as opposed to the more useful "the repository, as of this commit, passes all tests".</p>
]]></description><pubDate>Wed, 06 Nov 2024 16:55:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=42065201</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=42065201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42065201</guid></item><item><title><![CDATA[New comment by victorNicollet in "Building a better and scalable system for data migrations"]]></title><description><![CDATA[
<p>Avoiding SQL migrations was my #1 reason for moving to event sourcing.<p>This approach splits the "database server" into an event stream (an append-only sequence of events) and a cached view (a read-only database that is kept up-to-date whenever events are added to the stream, and can be queried by the rest of the system).<p>Migrations are overwhelmingly cached view migrations (which don't touch the event stream), and in very rare cases event stream migrations (which don't touch the cached view).<p>A cached view migration is made trivial by the fact that multiple cached views can coexist for a single event stream. Migrating consists of deploying the new version of the code to a subset of production machines, waiting for the new cached view to be populated and up-to-date (this can take a while, but the old version of the code, with the old cached view, is still running on most production machines at this point), and then deploying the new version to all other production machines. Rollback follows the same path in reverse (with the advantage that the old cached view is already up-to-date, so there is no need to wait).<p>An event stream migration requires a running process that transfers events from the old stream to the new stream as they appear (transforming them if necessary). Once the existing events have been migrated, flip a switch so that all writes point to the new stream instead of the old one.</p>
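<p>A toy sketch of the stream/view split (all types and names hypothetical; a real store is persistent and updates views incrementally rather than by full replay):

```typescript
// The append-only event stream: events are facts, never modified.
type Event =
  | { kind: "UserCreated"; id: string; name: string }
  | { kind: "UserRenamed"; id: string; name: string };

const stream: Event[] = [];

function append(e: Event): void { stream.push(e); }

// One cached view: current name per user id.
function buildNameView(events: readonly Event[]): Map<string, string> {
  const view = new Map<string, string>();
  for (const e of events) view.set(e.id, e.name);
  return view;
}

// "Migrating" to a new view is just replaying the same stream into a
// different shape; both views can coexist during the rollout.
function buildRenameCountView(events: readonly Event[]): Map<string, number> {
  const view = new Map<string, number>();
  for (const e of events)
    if (e.kind === "UserRenamed") view.set(e.id, (view.get(e.id) ?? 0) + 1);
  return view;
}
```

Since a view is derived entirely from the stream, deploying a new view version never requires an in-place schema change: the new code simply builds its own view alongside the old one.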
]]></description><pubDate>Tue, 29 Oct 2024 16:27:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=41986050</link><dc:creator>victorNicollet</dc:creator><comments>https://news.ycombinator.com/item?id=41986050</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41986050</guid></item></channel></rss>