<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dureuill</title><link>https://news.ycombinator.com/user?id=dureuill</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 09:39:58 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dureuill" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dureuill in "An AI agent published a hit piece on me"]]></title><description><![CDATA[
<p>To be clear, I'm not saying that Pike's response is appropriate in a professional setting.<p>"This project does not accept fully generated contributions, so this contribution does not respect the contribution rules and is rejected." would be.<p>That's pretty much the maintainer's initial reaction, and I think it is sufficient.<p>What I'm getting at is that the maintainer shouldn't be expected to persuade anyone. Neither the offender nor the onlookers.<p>Rejecting code generated under these conditions might be a bad choice, but it is their choice. They make the rules for the software they maintain. We are not entitled to an explanation, much less a justification, lest we reframe the rule violation in the terms of the abuser.</p>
]]></description><pubDate>Sat, 14 Feb 2026 08:53:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47012881</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=47012881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47012881</guid></item><item><title><![CDATA[New comment by dureuill in "An AI agent published a hit piece on me"]]></title><description><![CDATA[
<p>The project states a boundary clearly: code generated by LLMs and not backed by a human is not accepted.<p>The correct response when someone oversteps your stated boundaries is not debate. It is telling them to stop. There is no one to convince of the legitimacy of your boundaries. They just are.</p>
]]></description><pubDate>Thu, 12 Feb 2026 21:33:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46995546</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=46995546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46995546</guid></item><item><title><![CDATA[New comment by dureuill in "Anthropic agrees to pay $1.5B to settle lawsuit with book authors"]]></title><description><![CDATA[
<p>> The commons are still as available as they ever were,<p>That is false. As a direct consequence of LLMs:<p>1. The web is increasingly closed to automated scraping, and more marginally to people as well. Owners of websites like Reddit now have a stronger incentive to close off their APIs and sell access.<p>2. The web is being inundated with unverified LLM output, which poisons the well.<p>3. More profoundly, increasingly basing our production on LLM outputs, making the human merely "in the loop" rather than the driver, and sometimes eschewing even the human in the loop, leads to new commons that are less adapted to the evolution of our world, less original, and of lesser quality.</p>
]]></description><pubDate>Sat, 06 Sep 2025 07:05:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45147237</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=45147237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45147237</guid></item><item><title><![CDATA[New comment by dureuill in "Meilisearch – search engine API bringing AI-powered hybrid search"]]></title><description><![CDATA[
<p>Hello, I implemented hybrid search in Meilisearch.<p>Whether it uses re-ranking or not depends on how far you want to stretch the definition. Meilisearch does not use the rank of the documents in each list of results to compute the final list of results.<p>Rather, Meilisearch attributes a relevancy score to each result and then orders the final list by comparing the relevancy scores of the documents across the lists of results.<p>This is usually much better than any method that uses the rank of the documents, because the rank of a document doesn't tell you whether the document is relevant, only that it is more relevant than the documents ranked after it in that list of hits. As a result, rank-based methods tend to mix good and bad results. Since semantic and full-text search are complementary, one method is best for some queries and the other for different queries, and merging results by considering only their rank in their respective list of results is really <i>bizarre</i> to me.<p>I gather other search engines might be doing it that way because they cannot produce a comparable relevancy score for both the full-text search results and the semantic search results.<p>I'm not sure why the website mentions Reciprocal Rank Fusion (RRF) (I'm just a dev, not in charge of this particular blog article), but it doesn't sound right to me. Maybe something got lost in translation. I'll try to have it fixed. EDIT: Reported, this is being fixed.<p>By the way, this way of comparing scores from multiple lists of results generalizes very well, which is how Meilisearch is able to provide its "federated search" feature, which is quite unique across search engines, I believe.<p>Federated search allows comparing the results of multiple queries against possibly multiple indexes or embedders.</p>
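A minimal sketch of what score-based fusion could look like, as opposed to rank-based fusion (hypothetical code, not Meilisearch's actual implementation; the function name `fuse_by_score` and the [0, 1] score scale are illustrative assumptions):

```rust
// Hypothetical sketch of score-based fusion: each backend returns
// (doc_id, relevancy score in [0, 1]); the merged list keeps the best
// score seen per document and sorts by it, instead of using ranks.
use std::collections::HashMap;

fn fuse_by_score(lists: &[Vec<(u32, f64)>], limit: usize) -> Vec<(u32, f64)> {
    let mut best: HashMap<u32, f64> = HashMap::new();
    for list in lists {
        for &(doc, score) in list {
            let entry = best.entry(doc).or_insert(score);
            if score > *entry {
                *entry = score;
            }
        }
    }
    let mut merged: Vec<(u32, f64)> = best.into_iter().collect();
    // Sort by score descending; ties broken by doc id for determinism.
    merged.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap().then(a.0.cmp(&b.0)));
    merged.truncate(limit);
    merged
}

fn main() {
    let keyword = vec![(1, 0.9), (2, 0.2)];
    let semantic = vec![(3, 0.8), (2, 0.4)];
    let merged = fuse_by_score(&[keyword, semantic], 3);
    assert_eq!(merged, vec![(1, 0.9), (3, 0.8), (2, 0.4)]);
    println!("{:?}", merged);
}
```

Because the scores are comparable across backends, a document that scores well in either list lands where its relevancy (not its rank) says it should.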
]]></description><pubDate>Tue, 15 Apr 2025 07:29:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=43689963</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=43689963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43689963</guid></item><item><title><![CDATA[New comment by dureuill in "Meilisearch Is Too Slow"]]></title><description><![CDATA[
<p>> We aim to make Meilisearch updates seamless. Our vision includes avoiding dumps for non-major updates and reserving them only for significant version changes.
We will implement a system to version internal databases and structures. With this, Meilisearch can read and convert older database versions to the latest format on the fly. This transition will make the whole engine resource-based, and @dureuill is driving this initiative.<p>Seamless upgrades have been my dream for Meili for a while; I'm still hoping I can smuggle them in with the indexing refactor itself :-)</p>
]]></description><pubDate>Tue, 20 Aug 2024 13:47:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=41300030</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=41300030</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41300030</guid></item><item><title><![CDATA[New comment by dureuill in "Can a Rust binary use incompatible versions of the same library?"]]></title><description><![CDATA[
<p>I agree the answer is no in the abstract, but that is not very useful in practice.<p>Nobody writing Rust needs to cover this "basic use case" you're referring to, so it is the same as people saying "unsafe exists so Rust is no safer than C++". In theory that's true; in practice, in 18 months, 604 commits and 60,008 LoC, I wrote `unsafe` exactly twice. Once for memory-mapping something, once for skipping UTF-8 validation that I'd just done before (I guess I should have benchmarked that one, as it is probably premature).<p>In practice, when developing Rust software at a certain scale, you will mix and match incompatible library versions in your project, and it will not be an issue. Our project has 44 dependencies with conflicting versions, one of which appears in 4 incompatible versions, and it compiles and runs perfectly fine. In the other languages I've used (C++, Python), this exact same thing has been a problem, and it is not in Rust. This is what the article is referring to.</p>
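As an illustration of how such a conflict looks in a manifest, here is a hypothetical sketch (the crate `old-helper` is invented; `rand` 0.7 and 0.8 are a real pair of semver-incompatible versions): Cargo treats the two versions as distinct packages and compiles both, and `cargo tree --duplicates` lists every crate present in multiple versions.

```toml
# Hypothetical Cargo.toml: this package ends up with both rand 0.8
# (direct) and rand 0.7 (transitive) in its dependency graph, and
# it builds fine.
[package]
name = "demo"
version = "0.1.0"
edition = "2021"

[dependencies]
rand = "0.8"       # used directly
old-helper = "1.0" # hypothetical crate that itself depends on rand 0.7
```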
]]></description><pubDate>Tue, 20 Aug 2024 13:21:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=41299814</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=41299814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41299814</guid></item><item><title><![CDATA[New comment by dureuill in "Can a Rust binary use incompatible versions of the same library?"]]></title><description><![CDATA[
<p>I'm not sure I understand the use case here. Are you asking whether you can depend on two versions of the same crate, for a crate that exports a `#[no_mangle]` or `#[export_name]` function?<p>I guess you could slap a `#[used]` attribute on your exported functions and use their mangled names to call them with dlopen, but that would be unwieldy, and guessing the disambiguator used by the compiler would be error-prone at best, impossible at worst.<p>Other than that, you cannot. What you can do is define the `#[no_mangle]` or `#[export_name]` function at the top level of your shared library. It makes sense to have a single crate bear the responsibility of exporting the interface of your shared library.<p>I wish Rust would enforce that, but the shared library story in Rust is subpar.
Fortunately, it never actually comes into play, as the ecosystem relies on static linking.</p>
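A minimal hypothetical sketch of that convention (`mylib_add` is an invented name): the one crate at the top level of the shared library is the only one exporting unmangled symbols, typically with `crate-type = ["cdylib"]` in its Cargo.toml.

```rust
// Hypothetical top-level crate of a shared library: the only place
// where unmangled C symbols are defined.
#[no_mangle]
pub extern "C" fn mylib_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // A real cdylib has no `main`; this just exercises the function.
    assert_eq!(mylib_add(2, 3), 5);
    println!("{}", mylib_add(2, 3));
}
```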
]]></description><pubDate>Mon, 19 Aug 2024 13:51:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=41291059</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=41291059</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41291059</guid></item><item><title><![CDATA[New comment by dureuill in "The human typewriter, or why optimizing for typing is short-sighted"]]></title><description><![CDATA[
<p>Very disappointing read.<p>It starts with an interesting claim, "don't optimize for typing", but then completely fails to prove it, and confuses itself by treating `auto` as an optimization for typing.<p>`auto` is:<p>- A way to express types that are impossible or truly difficult to express, such as iterators, lambdas, etc.<p>- A way to optimize <i>reading</i>, by limiting redundancy<p>- A way to optimize <i>maintenance</i>, by limiting the amount of change brought by a refactor<p>The insistence on Notepad or "dumb editors" is also difficult to grasp. I expect people reviewing my code to be professionally equipped.<p>Lastly, the example mostly fails to demonstrate the point.<p>- There's a point made on naming (distinct from `auto`): absent a wrapping type, `dataSizeInBytes` is better than `dataSize`. The best way, though, is to make `dataSize` a `Bytes` type that supports conversion at its boundaries (can be initialized from bytes, MB, etc.)<p>- What's the gain between:<p><pre><code>    auto dataSet = pDatabase->readData(queryResult.getValue());
</code></pre>
and<p><pre><code>    DatabaseDataSet dataSet = pDatabase->readData(queryResult.getValue());
</code></pre>
The `dataSet` part can be inferred from the naming of the variable; it is useless to repeat it. The `Database` part is also clear from the fact that we read data from a db. Also, knowing the variable has this specific type brings me absolutely nothing.<p>- Their point about the mutability of the db data confused me, as it is <i>not</i> clear to me whether I can modify a "shadow copy" (I suppose not?). I suggest they use a programming language where mutating something you should not is a compile-time error; it is much more failsafe than naming (which is hard).<p>I'm sad, because indeed one shouldn't blindly optimize for <i>typing</i>, and I frequently find myself wondering when people tell me C++ is <i>faster to write</i> than Rust, when I (and others) have empirically measured that <i>completing a task</i>, which is the interesting measure IMO, is twice as fast in the latter as in the former.<p>So I would have loved a defence of why more typing does not equate to higher productivity. But this ain't it.</p>
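The `Bytes` wrapper type suggested above can be sketched as follows (a hypothetical Rust sketch, since the thread compares both languages; all names are illustrative): conversions happen at the type's boundaries, so a size can never be silently misread in the wrong unit.

```rust
// Hypothetical unit newtype: a size is a `Bytes`, not a bare integer,
// and the unit is fixed at construction time.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Bytes(u64);

impl Bytes {
    fn from_bytes(n: u64) -> Self {
        Bytes(n)
    }
    fn from_mebibytes(n: u64) -> Self {
        Bytes(n * 1024 * 1024)
    }
    fn as_bytes(self) -> u64 {
        self.0
    }
}

fn main() {
    // No `InBytes` suffix needed in the name: the type carries the unit.
    let data_size = Bytes::from_mebibytes(2);
    assert_eq!(data_size.as_bytes(), 2 * 1024 * 1024);
    assert!(data_size > Bytes::from_bytes(1000));
    println!("{:?}", data_size);
}
```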
]]></description><pubDate>Tue, 06 Aug 2024 07:06:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=41168477</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=41168477</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41168477</guid></item><item><title><![CDATA[New comment by dureuill in "How to organize large Rust codebases"]]></title><description><![CDATA[
<p>Some good advice, some bad advice in here. This is necessarily going to be opinionated.<p>> Provide a development container<p>Generally unneeded. It is expected that a Rust project can build with cargo build; don't deviate from that. People can `git clone` and `code .`.<p>Now, a Docker image might be needed for deployment. As much as I personally dislike Docker, at Meilisearch we provide a Docker image, because our users use it.<p>This is hard for me to understand as a Rust dev, when we provide a single executable binary, but I'm not in devops and I guess they have good reasons to prefer Docker images.<p>> Use workspaces<p>Yes, definitely.<p>> Declare your dependencies at the workspace level<p>Maybe, when it makes sense. Some deps have distinct versions by design.<p>> Don't use cargo's default folder structure<p>*Do* use cargo's default folder structure, <i>because</i> it is the default. Please don't be a special snowflake that decides to do things differently, even with a good reason. The described hierarchy would be super confusing for me as an outsider discovering the codebase. Meanwhile, VS Code pretty much doesn't care that there's an intermediate `src` directory. Not naming the root of the crate `lib.rs` also makes it hard to actually find the root component of a crate. Please don't do this.<p>> Don't put any code in your mod.rs and lib.rs files<p>Not very useful. Modern IDEs like VS Code will let you define custom patterns so that you can match `<crate-name>/src/lib.rs` to `crate <crate-name>`. Even without doing this, a lot of the time your first interaction with a crate will be through docs.rs or a manual `cargo doc`, or even just the autocomplete of your IDE. Then, finding the definition of an item is just a matter of asking the IDE (or grepping for the definition, which is easy to do in Rust since all definitions have a prefix keyword such as `struct`, `enum`, `trait` or `fn`).<p>> Provide a Makefile<p>Please don't do this!
In my experience, Makefiles are brittle and push people towards non-portable scripts (since the Makefile uses a non-portable shell by default), `make` is absent by default on certain systems, ...<p>Strongly prefer just working with `cargo` where possible. If not possible, Rust has a design pattern called `cargo xtask`[1] that allows adding cargo subcommands specific to your project, by compiling a Rust executable that has a much higher probability of being portable and better documented. If you must, use `cargo xtask`.<p>> Closing words<p>I'm surprised not to find a word about CI workflows, which are in my opinion key to sanely growing a codebase (well, in Rust there's no reason not to have them even on smaller repos, but they quickly become a necessity as more code gets added).<p>They will ensure that the project:<p>- has no warnings on `main` (allowed locally, an error in CI)<p>- is correctly formatted (check format in CI)<p>- has passing tests (check tests in CI, + miri if you have unsafe code, + fuzzer tests)<p>- is linted (clippy)<p>[1]: <a href="https://github.com/matklad/cargo-xtask">https://github.com/matklad/cargo-xtask</a></p>
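To give an idea of the shape of the pattern, here is a hypothetical minimal `xtask` dispatcher (task names and messages are invented; in the real pattern, a `[alias] xtask = "run --package xtask --"` entry in `.cargo/config.toml` wires `cargo xtask <task>` to this binary):

```rust
// Hypothetical xtask binary: project-specific subcommands written in
// portable Rust instead of a Makefile.
use std::env;

fn dispatch(task: &str) -> Result<&'static str, String> {
    match task {
        // Real arms would shell out to `cargo`, copy assets, etc.
        "ci" => Ok("would run: cargo fmt --check && cargo clippy && cargo test"),
        "dist" => Ok("would build release artifacts"),
        other => Err(format!("unknown task: {}", other)),
    }
}

fn main() {
    let task = env::args().nth(1).unwrap_or_else(|| "ci".to_string());
    match dispatch(&task) {
        Ok(msg) => println!("{}", msg),
        Err(err) => eprintln!("{}", err),
    }
}
```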
]]></description><pubDate>Sun, 14 Jul 2024 12:21:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=40960515</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40960515</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40960515</guid></item><item><title><![CDATA[New comment by dureuill in "Zig vs. Rust at work: the choice we made"]]></title><description><![CDATA[
<p>> You are actually just arguing for the sake of arguing here.<p>I'm very much not doing that.<p>I'm just really tired of reading claims that "C++ is actually safe if you follow these very very simple rules", where the "simple rules" are either terrible for performance, do not actually lead to memory-safe code (often by ignoring facts of life like the practices of the standard library, iterator and reference invalidation, or the existence of multithreaded programming), or are impossible to reliably follow in an actual codebase. Often all three of these, too.<p>I mean, the most complete version of "just follow the rules" is embodied by the C++ core guidelines[1], a 20k-line document of about 116k words, so I think we can drop the "very very simple" qualifier at this point. Many of the more important of these rules are not currently machine-enforceable, for instance the rules around thread safety.<p>Meanwhile, the rules for Rust are:<p>1. Don't use `unsafe`<p>2. There is no rule #2<p>*That* is a very very simple rule. If you don't use unsafe, any memory safety issue you would have is not your responsibility; it is the compiler's or your dependency's. It is a radical departure from C++'s "blame the users" stance.<p>That stance is imposed by the fact that C++ simply doesn't have the tools, at the language level, to provide memory safety. It lacks:<p>- a borrow checker<p>- lifetime annotations<p>- mutable XOR shared semantics<p>- the `Send`/`Sync` markers for thread safety<p>Barring the addition of each one of these ingredients, we're not going to see zero-overhead, parallel, memory-safe C++.
Adding these is pretty much as big a change to existing code as switching to Rust, at this point.<p>> Use the abstractions within the rules and you won't get issues, use compiler flags and analyzers on CI and you don't even need to remember the rules.<p>I want to see the abstractions, compiler flags and analyzers that will *reliably* find:<p>- use-after-free issues<p>- rarely occurring data races in multithreaded code<p>Use C++ if you want to, but please don't pretend that it is memory safe as long as you follow a set of simple rules. That is plainly incorrect.<p>[1]: <a href="https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines" rel="nofollow">https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines</a></p>
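For illustration, a small hypothetical sketch of the `Send`/`Sync` machinery at work (not taken from any guideline): this compiles only because `Arc<Mutex<_>>` is safe to share across threads; swapping in a non-thread-safe `Rc<RefCell<u64>>` would be rejected at compile time rather than racing at runtime.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increment a shared counter from several threads. The compiler checks
// (via the Send/Sync markers) that the shared value is actually safe
// to share; a data race here is a compile error, not a latent bug.
fn parallel_count(threads: u32, per_thread: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
    println!("{}", parallel_count(4, 1000));
}
```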
]]></description><pubDate>Wed, 26 Jun 2024 21:14:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=40804659</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40804659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40804659</guid></item><item><title><![CDATA[New comment by dureuill in "Zig vs. Rust at work: the choice we made"]]></title><description><![CDATA[
<p>> no raw loops and not raw pointer access<p>- Do these rules allow iterators?<p>- Under the "no raw pointer" rule, how do you express view objects? For instance, is `std::string_view` forbidden under your rules? If not, then you cannot get rid of memory issues in C++. If it is, then that's a fair bit more than "no raw pointer access", and then how do you take a slice of a string? Deep copy? shared_ptr? Both of these solutions are bad for performance: they mean a lot of copies, or having all objects reference-counted (which invites atomic increment/decrement overhead, cycles, etc). Compare to the `&str` that Rust affords you.<p>- What about multithreading? Is that forbidden as well? If it is allowed, what are the simple rules to avoid memory issues such as data races?<p>> That's already available in well written C++<p>Where are the projects in well-written C++ that don't have memory-safety CVEs?</p>
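For contrast, a minimal hypothetical sketch of the `&str` borrowing mentioned above: a zero-copy slice whose lifetime the borrow checker ties to the owning `String`, with no deep copy and no reference counting.

```rust
// Return a zero-copy view into the input: no allocation, no refcount.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let owned = String::from("hello world");
    let word = first_word(&owned);
    assert_eq!(word, "hello");
    // Dropping `owned` while `word` is still in use would be a compile
    // error, unlike a silently dangling std::string_view in C++.
    println!("{}", word);
}
```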
]]></description><pubDate>Tue, 25 Jun 2024 13:13:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=40788188</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40788188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40788188</guid></item><item><title><![CDATA[New comment by dureuill in "Zig vs. Rust at work: the choice we made"]]></title><description><![CDATA[
<p>> Ironically I see with your last point regarding golang that we are very different people and that's fine. For me I would much rather lean back towards C if I can guarantee safety than the more abstract and high level Rust. Honestly I am extremely intrigued by Zig but until it's stable I'm not going near it.
> 
> We want different things from languages and that is fine.<p>I just wanted to tell you that I agree. A lot of what makes people like or dislike a language seems to come down to aesthetics, in its nobler meaning.<p>> The time I did, was recursively accessing different parts of a pretty central struct but the borrow checker considered the entire struct a borrowed object.<p>Ah, OK. It helps to model a borrow of a struct as a capability. If your struct is made of multiple "capabilities" that can be borrowed separately, then you'd better express that with a function that borrows the struct and returns "view objects" representing the capabilities.<p>For instance, if you can `foo` and `bar` your struct at the same time, you can have a method:<p>`fn as_cap(&mut self) -> (Foo<'_>, Bar<'_>) { todo!() }`<p>and have the `Foo` borrow the fields you need to `foo()` from `self`, and `Bar` borrow the fields you need to `bar()` from `self`.<p>Then you can simply call `Foo.foo()` and `Bar.bar()`.</p>
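The capability pattern above can be fleshed out into a compilable sketch (all names hypothetical): one `&mut` borrow of the struct is split into two view objects over disjoint fields, so both can be used at the same time.

```rust
// Hypothetical struct with two independently borrowable "capabilities".
struct State {
    foo_data: u32,
    bar_data: u32,
}

// Each view object borrows only the fields its operation needs.
struct Foo<'a>(&'a mut u32);
struct Bar<'a>(&'a mut u32);

impl State {
    fn as_cap(&mut self) -> (Foo<'_>, Bar<'_>) {
        // Splitting borrows field-by-field is accepted by the borrow
        // checker, unlike two `&mut self` method calls.
        (Foo(&mut self.foo_data), Bar(&mut self.bar_data))
    }
}

impl<'a> Foo<'a> {
    fn foo(&mut self) {
        *self.0 += 1;
    }
}

impl<'a> Bar<'a> {
    fn bar(&mut self) {
        *self.0 += 10;
    }
}

fn main() {
    let mut s = State { foo_data: 0, bar_data: 0 };
    let (mut f, mut b) = s.as_cap();
    f.foo();
    b.bar(); // both borrows are alive simultaneously, no conflict
    assert_eq!((s.foo_data, s.bar_data), (1, 10));
}
```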
]]></description><pubDate>Mon, 24 Jun 2024 07:16:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=40773279</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40773279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40773279</guid></item><item><title><![CDATA[New comment by dureuill in "Zig vs. Rust at work: the choice we made"]]></title><description><![CDATA[
<p>> Rust also has UB and you should still be running fuzzers and sanitizers on your rust code, that is true for C++.<p>Safe Rust doesn't have UB[1], and safe Rust is what I review 99% of the time. For unsafe modules, you should indeed be running sanitizers. Fuzzers are always good; they are also interesting for properties other than UB.<p>> tools available that should be run on CI that can catch those issues<p>Available tools have both false positives and false negatives. Careful review is unfortunately the best tool we had in C++ to nip UB in the bud, IME.<p>> I think my big productivity issue with rust has always been the very weird hoops I need to jump through to make it do stuff I can confirm is correct but the borrow checker prevents me from doing<p>Interesting. I remember having to adapt some idioms, around caching and storing iterators in particular, but very quickly I felt like there weren't that many hoops and they weren't that weird. There's a sore point for "view types" (think parsed data) that are hard to bundle with the owning data (I have my own crate to do so[2]), but other than that I can't really think of anything. Do you mind sharing some of the patterns you find are difficult in Rust but <i>should work</i>, in your opinion?<p>> [rust-analyzer and clangd]<p>I find there have been tons of usage regressions in rust-analyzer recently, but IME it blows clangd out of the water. The fact that Rust has a much saner compilation model is a large contributing factor, as is the de facto standard build system with nice properties for analysis.<p>clangd never properly worked on our project due to our use of ExternalProject for dependencies.<p>> And regarding the google report, was that not self reported productivity.<p>No, the recent report (presented by Lars at some Rust conf) is distinct from the blog article and is not self-reported productivity.
They measured the time taken to perform "similar tasks", which Google is uniquely positioned to do because it is such a large organization.<p>> Just make an informed decision is my point, you have tradeoffs for each language and for me easy C interop is extremely important for the places I actually need C++. For the rest I use golang.<p>That's fair. I would say the tradeoff leans very far in the Rust direction, but I have a strong bias against golang (I find it verbose and inexpressive, and I don't like that it allows data races[3]).<p>[1]: to be precise, if safe Rust has UB, it is a compiler bug or a bug in underlying unsafe code. By safe Rust, I mean modules that don't have `unsafe` in them.<p>[2]: <a href="https://github.com/dureuill/nolife">https://github.com/dureuill/nolife</a><p>[3]: <a href="https://arxiv.org/abs/2204.00764" rel="nofollow">https://arxiv.org/abs/2204.00764</a></p>
]]></description><pubDate>Sun, 23 Jun 2024 11:43:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=40766693</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40766693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40766693</guid></item><item><title><![CDATA[New comment by dureuill in "Zig vs. Rust at work: the choice we made"]]></title><description><![CDATA[
<p>> There are many languages that fit this description<p>In practice I find there are not, especially if you further limit the set to imperative languages. Note that I mentioned memory safe AND heavily favoring correctness. In that regard, Rust is uniquely placed due to its shared XOR mutable paradigm. One has to look at functional languages that completely disable mutation to find comparable tools for correctness. Arguably, they're more niche.<p>> However, if you heavily interop with C/C++ safety goes out the window anyway<p>I find this to be incorrect. The way you do this is by (re)writing modules of the application in Rust. Firefox did that for its parallel CSS rendering engine. I did it for the software at my previous job. The software at my current job relies on a C database, and we haven't had a memory safety issue in years (never had one since I joined, actually). We have abstracted talking to the DB behind a mostly safe wrapper (there are some unsafe functions, but most of them are safe); the very large majority of our code is safe Rust.<p>> it probably never mattered much in the first place<p>It does matter. First, for security reasons. Second, because debugging memory issues is not fun and is a waste of time when alternatives that fix this class of errors exist.</p>
]]></description><pubDate>Fri, 21 Jun 2024 08:25:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=40747379</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40747379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40747379</guid></item><item><title><![CDATA[New comment by dureuill in "Zig vs. Rust at work: the choice we made"]]></title><description><![CDATA[
<p>> As someone who has some large projects in C++ and contributes to OSS C++ projects I find this isn't true.<p>Well, that goes contrary to my personal experience (professional dev in C++11 and up for a decade), and also to the data recently shared by Google[1] ("Rust teams are twice as productive as C++ teams"). Either your Rust is slower than average, or your C++ is faster than average. Perhaps both.<p>The reasons for being more productive are easy to understand. The build system and the ease of tapping into the ecosystem are good reasons, but they tend to diminish as the project matures. However, the comparative lack of boilerplate (compare specializing std to add a hash implementation in C++ with deriving it in Rust; the rule of five, header maintenance and so on), proper sum types (let's not talk about std::variant :(), exhaustive pattern matching, and exhaustive destructuring and restructuring make for much easier maintenance, so much so that I think it tends toward an order of magnitude more productivity as the project matures. On the ecosystem side, easy access to an ecosystem-wide serialization framework is also very useful. The absence of UB makes for simpler reviews.<p>[1]: <a href="https://www.reddit.com/r/rust/comments/1bpwmud/media_lars_bergstrom_google_director_of/" rel="nofollow">https://www.reddit.com/r/rust/comments/1bpwmud/media_lars_be...</a></p>
]]></description><pubDate>Fri, 21 Jun 2024 08:17:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=40747325</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40747325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40747325</guid></item><item><title><![CDATA[New comment by dureuill in "Zig vs. Rust at work: the choice we made"]]></title><description><![CDATA[
<p>This feels very shortsighted to me.<p>"Easy to learn" and "easy to hire for" are an advantage in the first few weeks. Besides, we now have data indicating that ramp-up time in Rust is not longer than in other languages.<p>On the other hand, serving millions of users with a language that isn't even at v1 doesn't seem very reasonable. The advantages of a language that is memory safe in practice and also heavily favors correctness in general boost productivity tenfold in the long term.<p>I'm speaking from experience: I switched from C++ to Rust professionally and I'm still not over how much more productive and "lovable" the language is in general. A language like Zig isn't bringing much to the table in comparison (in particular with the user-hurting decisions around "all warnings are errors, period").</p>
]]></description><pubDate>Thu, 20 Jun 2024 08:13:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=40736232</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40736232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40736232</guid></item><item><title><![CDATA[New comment by dureuill in "The new Framework Laptop 13 with Intel Core Ultra Series 1 CPUs"]]></title><description><![CDATA[
<p>Thank you for the answer!<p>The limited range of compatible RAM components makes me wonder whether the promise of upgradeability actually delivers.</p>
]]></description><pubDate>Sun, 02 Jun 2024 07:01:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=40551992</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40551992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40551992</guid></item><item><title><![CDATA[New comment by dureuill in "The new Framework Laptop 13 with Intel Core Ultra Series 1 CPUs"]]></title><description><![CDATA[
<p>Hello, what kind of RAM is supported for the Intel configuration?<p>I have a 32GB DDR4 SO-DIMM 2400MHz stick sitting around; can I repurpose it for a new Framework config (and spare the corresponding 180€)?</p>
]]></description><pubDate>Sat, 01 Jun 2024 09:22:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=40544167</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40544167</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40544167</guid></item><item><title><![CDATA[New comment by dureuill in "Mixing rayon and Tokio for fun and (hair) loss"]]></title><description><![CDATA[
<p>Hey, thanks for posting my article :-)<p>It's a bug postmortem about tokio and rayon. I'd be interested to know whether anybody else has encountered similar issues with these libraries, or when mixing other libraries that "own" threads.</p>
]]></description><pubDate>Tue, 14 May 2024 09:04:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=40353085</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40353085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40353085</guid></item><item><title><![CDATA[Are We Modules Yet?]]></title><description><![CDATA[
<p>Article URL: <a href="https://arewemodulesyet.org/">https://arewemodulesyet.org/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40224578">https://news.ycombinator.com/item?id=40224578</a></p>
<p>Points: 96</p>
<p># Comments: 90</p>
]]></description><pubDate>Wed, 01 May 2024 15:23:06 +0000</pubDate><link>https://arewemodulesyet.org/</link><dc:creator>dureuill</dc:creator><comments>https://news.ycombinator.com/item?id=40224578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40224578</guid></item></channel></rss>