<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tomck</title><link>https://news.ycombinator.com/user?id=tomck</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 03:11:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tomck" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tomck in "Zig's New Async I/O"]]></title><description><![CDATA[
<p>> `if (file_exists(f))` is misuse of the interface, a lesson in interface design, and a faulty pattern that's easy to repeat with async-await.<p>It's even easier to repeat without async/await, where you don't need to tag the call with `await`!<p>> Think about it, you can't `await` on part of the state only once and then know it's available in other parts of code to avoid async pollution. When you solve this problem, you realize `await` was just in the way and is completely useless and code looks exactly like a callback or any other more primitive mechanism.<p>I don't understand why you can't do this by just bypassing the async/await mechanism once you're sure the data is already loaded:<p><pre><code>    let data = null;

    async function getDataOrWait() {
        await data_is_non_null(); // however you implement this wait
        return data;
    }

    function getData() {
        if (data == null) { throw new Error('data not available yet'); }
        return data;
    }
</code></pre>
You aren't forced into using async/await everywhere all the time. This sounds like 'a misuse of the interface, a lesson in interface design', etc.<p>> I think "how to express concurrency" is a question I'm not even trying to answer<p>You can't criticise async/await, which is explicitly a way to express concurrency, if you don't even care to answer that question - you're just complaining about a solution to a problem you clearly don't have. (If you don't need to express concurrency, then you don't need async/await - correct!)<p>> point to approaches that completely eliminate pollution and force you to write code in that "unrolled" way from start, something like Rx or FRP where time is exactly the unit they're dealing with.<p>So they don't 'eliminate pollution' - they pollute everything by default, all the time (???)</p>
]]></description><pubDate>Fri, 31 Oct 2025 16:39:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45773976</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=45773976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45773976</guid></item><item><title><![CDATA[New comment by tomck in "Zig's New Async I/O"]]></title><description><![CDATA[
<p>> I replied directly to you<p>My comment was a reply only to the person who equated threads and async, and it said only that async and threading are completely orthogonal, even though they are often conflated.<p>> `is_something_true` is very simple, if condition is true, and then inside the block, if you were to check again it can be false, something that can't happen in synchronous code<p>It <i>can</i> happen in synchronous code - but even if it couldn't, why is async/await the problem here? What is your alternative to async/await for expressing concurrency?<p>Here are the ways it can happen:<p>1. It can happen with fibers, coroutines, threads, callbacks, promises, or any other expression of concurrency (parallel or not!). I don't understand why async/await specifically is to blame here.<p>2. Even without concurrency, you can mutate state to make the value of is_something_true() change.<p>3. is_something_true might be a blocking call to some OS resource, file, etc. - e.g. the classic `if (file_exists(f)) open(f)` bug.<p>I am neutral on async/await, but your example isn't a very good argument against it.<p>Seemingly nobody ever has any good arguments against it.<p>> async-await pollutes the code completely if you're not strict about its usage<p>This is a <i>good</i> thing: if a function is async, it does something that won't have completed by the time the call returns. I don't understand this argument about 'coloured functions' polluting code. If a function at the bottom of your call stack needs to do something and wait on it, then every function above it needs to wait on it too.<p>If the alternative is 'spin up an OS thread' or 'spin up a fiber' so that the function at the bottom of the call stack can block - that's exactly the same as before, you're just lying to yourself about your code. Guess what - you can achieve the same thing by putting 'await' before every function call.<p>Perhaps you have convinced me that async/await is great after all!</p>
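<p>A minimal JavaScript sketch of way 1 above (the names here are illustrative, not from any particular codebase): even with no threads at all, a condition checked before an `await` can be stale afterwards, because other code runs while the function is suspended.

```javascript
// Sketch: state checked before an await can change before the continuation
// runs, because other code executes during the suspension - no threads needed.
let flag = true;

async function checkThenAct() {
    if (flag) {                   // check: true right now
        await Promise.resolve();  // suspension point - other code runs here
        return flag;              // act: may observe a different value
    }
}

const p = checkThenAct();
flag = false;                     // flips while checkThenAct is suspended
p.then(v => console.log(v));      // logs: false
```

The same shape with fibers, callbacks, or raw promises has the same bug - the suspension point is the problem, not the `await` keyword.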
]]></description><pubDate>Fri, 31 Oct 2025 15:09:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45772858</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=45772858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45772858</guid></item><item><title><![CDATA[New comment by tomck in "Zig's New Async I/O"]]></title><description><![CDATA[
<p>I think you replied to the wrong person.<p>That being said, I don't understand your `is_something_true` example.<p>> It's very often used to do 1 thing at a time when N things could be done instead<p>That's true, but I don't think e.g. fibers fare any better here. I would say that expressing that type of parallel execution is much more convenient with async/await and Promise.all() (or an equivalent) than with e.g. raw promises or fibers.</p>
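<p>A sketch of what I mean (the task functions here are stand-ins for real async work, not from any real API): doing N things at once instead of 1 at a time is a single Promise.all over the tasks.

```javascript
// Sketch: with async/await, running N tasks concurrently and collecting the
// results in order is just Promise.all; each task stands in for real work.
async function runAll(tasks) {
    return Promise.all(tasks.map(task => task()));
}

runAll([
    async () => 'a',
    async () => 'b',
    async () => 'c',
]).then(results => console.log(results)); // logs: [ 'a', 'b', 'c' ]
```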
]]></description><pubDate>Wed, 29 Oct 2025 21:35:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45753361</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=45753361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45753361</guid></item><item><title><![CDATA[New comment by tomck in "Zig's New Async I/O"]]></title><description><![CDATA[
<p>>  I do fully understand people who can't get their heads around threads and prefer async<p>This is a bizarre remark.<p>Async/await isn't "for when you can't get your head around threads" - it's a completely orthogonal concept.<p>Case in point: JavaScript has async/await, but everything is single-threaded; there is no parallelism.<p>Async/await is basically just coroutines/generators underneath.<p>Phrasing async as 'for people who can't get their heads around threads' makes it sound like you never sat down and learned how async works, and would rather dismiss it than do so.<p>Async is probably a more complex model than threads/fibers for expressing concurrency. It's fine to say that, and it's fine not to have learned it if that works for you, but it's silly to put one above the other as if understanding threads makes async/await irrelevant.<p>> The stdlib isn't too bad but last time I checked a lot of crates.io is filled with async functions for stuff that doesn't actually block<p>Can you provide an example? That wasn't my experience last time I used Rust, but I don't use Rust a great deal anymore.</p>
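<p>To illustrate the 'basically just coroutines/generators underneath' point, here's a sketch of the well-known pre-async/await pattern of driving a generator of promises with a small runner - roughly what async/await desugars to. (This is an illustration, not any engine's actual implementation.)

```javascript
// Sketch: a tiny runner that drives a generator of promises - roughly what
// async/await desugars to. Single-threaded; no parallelism anywhere.
function run(genFn) {
    const gen = genFn();
    function step(prev) {
        // Resume the generator with the previous awaited value.
        const { value, done } = gen.next(prev);
        if (done) return Promise.resolve(value);
        // 'value' is the yielded promise; continue when it settles.
        return Promise.resolve(value).then(step);
    }
    return step(undefined);
}

// Usage: 'yield' plays the role of 'await'.
run(function* () {
    const a = yield Promise.resolve(1);
    const b = yield Promise.resolve(a + 1);
    return b;
}).then(result => console.log(result)); // logs: 2
```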
]]></description><pubDate>Wed, 29 Oct 2025 19:58:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45752230</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=45752230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45752230</guid></item><item><title><![CDATA[New comment by tomck in "New coding models and integrations"]]></title><description><![CDATA[
<p>I have tried running a model on my laptop GPU before, and it was unusable - incredibly slow <i>and</i> with bad output, for exactly the work you describe.<p>If you're looking for a cheap practical tool and don't mind it not being local, deepseek's non-reasoning model via openrouter is the most cost-efficient <i>by far</i> for this kind of work.<p>I put 10 dollars in my account about 6 months ago and still haven't gotten through it, despite heavy, semi-regular use.</p>
]]></description><pubDate>Thu, 16 Oct 2025 18:16:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=45608829</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=45608829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45608829</guid></item><item><title><![CDATA[New comment by tomck in "Defer: Resource cleanup in C with GCCs magic"]]></title><description><![CDATA[
<p>This isn't what people are talking about - you aren't understanding the problem.<p>With RAII you need to leave everything in an initialized state unless you are being very, very careful - which is why MaybeUninit is always surrounded by unsafe.<p><pre><code>    {
        Foo f;
    }

</code></pre>
f <i>must</i> be initialized here, it cannot be left uninitialized<p><pre><code>    std::vector<T> my_vector(10000);
</code></pre>
EVERY element in my_vector must be initialized here; they cannot be left uninitialized, and there is no workaround.<p>Even if I just want a std::vector<uint8_t> to use as a buffer, I can't - I need to manually malloc with `(uint8_t*)malloc(sizeof(uint8_t) * 10000)` and fill that.<p>So what if the API I'm providing needs a std::vector? Well, I guess I'm eating the cost of initializing 10000 objects - pulling them into cache and thrashing them out, just to do it all again when I memcpy into it.<p>This is just <i>one</i> example of many.<p>Another one: with RAII you need a copy constructor, copy assignment (operator=), a move constructor, and move assignment. If you have a generic T, then using `=` on T might allocate a huge amount of memory, free a huge amount of memory, or neither - in C++ it could execute arbitrary code.<p>If you haven't actually used a language without RAII for an extended period of time then you just shouldn't bother commenting. RAII <i>very clearly</i> has its downsides; you should be able to at least reason about the tradeoffs without assuming your strawman argument accurately represents the other side of the coin.</p>
]]></description><pubDate>Wed, 01 Oct 2025 15:21:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45438785</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=45438785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45438785</guid></item><item><title><![CDATA[New comment by tomck in "TigerBeetle is a most interesting database"]]></title><description><![CDATA[
<p>This whole page, and their response in this thread, is about TigerBeetle as a transaction-processing database - e.g. financial transaction processing.<p>I think this is very clear; I don't know why you're saying that TigerBeetle is trying to make a generic claim about general workloads.<p>The comment you're replying to explicitly states that this isn't true for general workloads.</p>
]]></description><pubDate>Wed, 01 Oct 2025 14:02:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45437861</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=45437861</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45437861</guid></item><item><title><![CDATA[New comment by tomck in "Wikimedia Foundation Challenges UK Online Safety Act Regulations"]]></title><description><![CDATA[
<p>You're repeating propaganda from a far-right newspaper headline, written misleadingly to make it sound like Labour have said something recently about VPNs (they haven't).</p>
]]></description><pubDate>Tue, 29 Jul 2025 12:36:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44722553</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=44722553</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44722553</guid></item><item><title><![CDATA[New comment by tomck in "Fixed Timestep Without Interpolation"]]></title><description><![CDATA[
<p>Why do you think you would see a flicker of death?<p>For local inputs, the fixed timestep is always a frame or smaller, so this is not an issue unless you're over a network.<p>Edit: Oh, I see - this <i>is</i> a problem with the commenter's suggestion of predicting the future, though.</p>
]]></description><pubDate>Sat, 19 Oct 2024 08:47:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=41886562</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=41886562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41886562</guid></item><item><title><![CDATA[New comment by tomck in "Why Use Onion Layering?"]]></title><description><![CDATA[
<p>> So getListOfUsersWithFoodPreferences and getListOfUsersWithFoodPreferencesWithoutFavouriteFoods living together as client-specific methods is absolutely fine<p>Sorry - my point was that adding this function as a public API 'onion layer' in your code means you're less able to adapt to change. The fact that this function returns a `User` entity isn't particularly important - what matters is that when you make a function public, other teams will reuse it and add invariants you didn't realise existed, so that changing your function in the future will break other teams' code.<p>Fewer public 'onion layers' means less of this.</p>
]]></description><pubDate>Wed, 10 Jul 2024 16:59:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=40928926</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=40928926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40928926</guid></item><item><title><![CDATA[New comment by tomck in "Why Use Onion Layering?"]]></title><description><![CDATA[
<p>I have no idea what you're talking about, my example doesn't include any writes, only a read</p>
]]></description><pubDate>Wed, 10 Jul 2024 14:38:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=40927403</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=40927403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40927403</guid></item><item><title><![CDATA[New comment by tomck in "Why Use Onion Layering?"]]></title><description><![CDATA[
<p>Every layer you create is another public API that someone else can use in some other code. Each time your public API is used in a different place, it gathers different invariants - 'this function should be fast', 'this function should never error', 'this function shouldn't contact the database', etc. More invariants = more stuff broken when you change the layer.<p>So let's say you have some 'User' ORM entity for a food app. Each user has a favourite food and food preferences. You have a function `List<User> getListOfUsersWithFoodPreferences(FoodPreference preference)` which queries another service for users with a given food preference.<p>The `User` entity has `String getName()` and `String getFavouriteFood()` methods - cool.<p>Some other team builds some UI on top of that, which takes a list of users and displays their names and their favourite foods.<p>Another team in your org uses the same API call to get users matching several food preferences, so they loop over the preferences and call the function multiple times.<p>Amazing, we've layered the system and reused it twice!<p>Now the database needs to change, because users can have multiple favourite foods, so it gets restructured and favourite foods become <i>more expensive</i> to query - they're no longer in the same table row.<p>As a result, `getListOfUsersWithFoodPreferences` runs a bit slower, because the favourite-food query is more expensive.<p>This is fine for the UI, but the other team using this function in a loop now has their system running 4x slower! They didn't even need the users' favourite foods!<p>If we're lucky, that team gets time to investigate the performance regression, and we end up with another function, `getListOfUsersWithFoodPreferencesWithoutFavouriteFoods`. Nice.<p>The onion layer limited the 'blast radius' of the DB change, but only in the API - the performance of the layer changed, and that broke another team.</p>
]]></description><pubDate>Wed, 10 Jul 2024 09:22:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=40925128</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=40925128</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40925128</guid></item><item><title><![CDATA[New comment by tomck in "Why Use Onion Layering?"]]></title><description><![CDATA[
<p>No, this is how everyone incompetent designs systems.<p>Layers of generic APIs end up 1000x more complex than they would be if they were just coupled to the layer above.<p>Changing requirements means tunneling data through many layers.<p>Layers are generic, which means either you tightly couple your API to the above layer's use case, or your API limits the performance of your system.<p>Everyone who <i>thinks</i> they can design systems does it this way - then they end up managing a system that runs 10x slower than it should, and complaining about managers changing requirements 'at the last minute'.</p>
]]></description><pubDate>Wed, 10 Jul 2024 07:47:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=40924587</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=40924587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40924587</guid></item><item><title><![CDATA[New comment by tomck in "Fast linked lists"]]></title><description><![CDATA[
<p>This article is disingenuous with its Vec benchmark. Each call to `validate` creates a new Vec, which means you allocate and free the Vec for <i>each</i> validation. Why not store the Vec on the validator to reuse the allocation? Why not mention this in the article? I had to dig through the git history to find out whether the Vec was getting reallocated. It feels like the article had a cool conclusion ready - 'linked lists faster than Vec' - and the Vec example was engineered to be worse. Maybe I'm being cynical.<p>It would be interesting to see the performance of a `Vec<&str>` where you reuse the vector, but also a `Vec<u8>` where you copy the path bytes directly into the vector and don't bother doing any pointer traversals. The example path sections are all very small - 'inner', 'another'; 5 bytes, 7 bytes - less than the length of a pointer! Storing a whole `&str` is 16 bytes per element, and then you have to rebuild the path again anyway in the invalid case.<p>---<p>This whole article is kinda bad. It's titled 'blazingly fast linked lists', which gives it some authority, but the approach is all wrong. Man, be responsible if you're choosing titles like this. Someone's going to read this and assume it's a reasonable approach, but the entire section with Vec is bonkers.<p>Why are we designing 'blazingly fast' algorithms with Rust primitives rather than thinking about where the data needs to go first? Why are we even considering vector clones or other crazy stuff? The thought process behind the naive approach and step 1 is insane to me:<p>1. I need to track some data that will grow and shrink like a stack, so my solution is to copy around an immutable Vec (???)<p>2. This is really slow for obvious reasons, so how about we pull in a whole new dependency ('imbl') that attempts to optimize the general case using complex trees (???)<p>You also mention:<p>> In some scenarios, where modifications occur way less often than clones, you can consider using Arc as explained in this video<p>I understand you're trying to be complete, but 'some scenarios' is doing a lot of work here. An Arc<[T]> approach is <i>literally</i> just the same as the naive approach <i>but</i> with extra atomic refcounts! Why mention it in this context?<p>You finally get around to mutating the vector and using it like a stack, but then comment:<p>> However, this approach requires more bookkeeping and somewhat more lifetime annotations, which can increase code complexity.<p>I have no idea why you mention 'code complexity' here (complexity introduced <i>by Rust</i> and its lifetimes) but fail to mention that adding a dependency on 'imbl' is also a negative.</p>
]]></description><pubDate>Tue, 14 May 2024 16:49:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=40357225</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=40357225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40357225</guid></item><item><title><![CDATA[New comment by tomck in "Zig's multi-sequence for loops"]]></title><description><![CDATA[
<p>You <i>can</i> tell when, because if it uses the allocator it will return an error. So the first line definitely doesn't allocate, and the second definitely does.<p>That is, unless you explicitly handle OOM conditions inside your construct (e.g. 'crash on OOM'), which isn't typical in Zig code. All the code I interact with returns an allocator error if allocation fails.</p>
]]></description><pubDate>Tue, 28 Feb 2023 18:07:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=34972625</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=34972625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34972625</guid></item><item><title><![CDATA[New comment by tomck in "It Can Happen to You"]]></title><description><![CDATA[
<p>Is that really that slow? I don't know how they even read the file in that amount of time - my drive only does about 125 MB/s.</p>
]]></description><pubDate>Thu, 04 Mar 2021 12:48:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=26341652</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=26341652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26341652</guid></item><item><title><![CDATA[New comment by tomck in "Ask HN: Production Lisp in 2020?"]]></title><description><![CDATA[
<p>> have to restart the server every 20 days because of some memory leak<p>Hmm, this seems like something super specific to their async setup, rather than 'Common Lisp leaks memory'.</p>
]]></description><pubDate>Tue, 19 May 2020 19:50:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=23239328</link><dc:creator>tomck</dc:creator><comments>https://news.ycombinator.com/item?id=23239328</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23239328</guid></item></channel></rss>