<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: aw1621107</title><link>https://news.ycombinator.com/user?id=aw1621107</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 13:41:21 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=aw1621107" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by aw1621107 in "C++: Freestanding Standard Library"]]></title><description><![CDATA[
<p>At least as far as major C++ implementations go, you pretty much already have a freestanding stdlib, since the standard-specified freestanding parts are a subset of the hosted parts. It's just a matter of compiling in freestanding "mode" (e.g., passing -ffreestanding to GCC/Clang).</p>
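<p>As a hedged illustration (my own sketch, not from the article): a translation unit that sticks to headers the standard requires even of freestanding implementations should build unchanged whether or not you pass -ffreestanding:</p>

```cpp
// Sketch: uses only headers required of freestanding implementations
// (<cstdint>, <limits>, <type_traits>), so it should compile hosted
// as well as with e.g. `g++ -ffreestanding -c`.
#include <cstdint>
#include <limits>
#include <type_traits>

static_assert(std::is_unsigned_v<std::uint32_t>);
static_assert(std::numeric_limits<std::uint8_t>::max() == 255);

// A small constexpr helper built only from freestanding facilities:
// rounds n up to the next multiple of a (a > 0).
constexpr std::uint32_t align_up(std::uint32_t n, std::uint32_t a) {
    return (n + a - 1) / a * a;
}
static_assert(align_up(13, 8) == 16);
```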
]]></description><pubDate>Fri, 10 Apr 2026 21:09:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47723699</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47723699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47723699</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++: Freestanding Standard Library"]]></title><description><![CDATA[
<p>> I feel like it said a whole lot without giving me much to take action on. Like great, you summarized the current state of affairs but it doesn't make clear what I am to do about it.<p>To be fair, not every article is a call to action. Sometimes they exist purely for informational purposes.</p>
]]></description><pubDate>Fri, 10 Apr 2026 16:20:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47720384</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47720384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47720384</guid></item><item><title><![CDATA[New comment by aw1621107 in "Hegel, a universal property-based testing protocol and family of PBT libraries"]]></title><description><![CDATA[
<p>> A saner approach would be to start with a FFI-friendly language and create bindings. I don't think just being able to use an already written framework in Python is worth the trade-off.<p>For what it's worth the devs say their "current long-term plan is to implement a second Hegel server in Rust" [0], so the current state of affairs is probably a compromise between getting something usable for end users out and something more "sane", as you put it.<p>[0]: <a href="https://antithesis.com/blog/2026/hegel/#what%E2%80%99s-next" rel="nofollow">https://antithesis.com/blog/2026/hegel/#what%E2%80%99s-next</a></p>
]]></description><pubDate>Thu, 09 Apr 2026 21:54:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47710701</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47710701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47710701</guid></item><item><title><![CDATA[New comment by aw1621107 in "Hegel, a universal property-based testing protocol and family of PBT libraries"]]></title><description><![CDATA[
<p>A bit of an intro/announcement blog post for Hegel ("Hypothesis, Antithesis, synthesis", [0]) was submitted here ~2 weeks ago [1] and got a fair bit of discussion (106 comments).<p>[0]: <a href="https://antithesis.com/blog/2026/hegel/" rel="nofollow">https://antithesis.com/blog/2026/hegel/</a><p>[1]: <a href="https://news.ycombinator.com/item?id=47504094">https://news.ycombinator.com/item?id=47504094</a></p>
]]></description><pubDate>Thu, 09 Apr 2026 19:18:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47708435</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47708435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47708435</guid></item><item><title><![CDATA[New comment by aw1621107 in "A tail-call interpreter in (nightly) Rust"]]></title><description><![CDATA[
<p>> does this work well with async or is it purely sync tail calls for now?<p>The current RFC generally does not allow `become` to be used with `async` for now [0]:<p>> Tail calling from async functions or async blocks is not allowed. This is due to the high implementation effort as it requires special handling for the async state machine. This restriction can be relaxed by a future RFC.<p>> Using `become` on a `.await` expression, such as `become f().await`, is also not allowed. This is because `become` requires a function call and `.await` is not a function call, but is a special construct.<p>> Note that tail calling async functions from sync code is possible but the return type for async functions is `impl Future`, which is unlikely to be interesting.<p>[0]: <a href="https://github.com/phi-go/rfcs/blob/guaranteed-tco/text/0000-explicit-tail-calls.md#async" rel="nofollow">https://github.com/phi-go/rfcs/blob/guaranteed-tco/text/0000...</a></p>
]]></description><pubDate>Mon, 06 Apr 2026 07:48:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47658047</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47658047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47658047</guid></item><item><title><![CDATA[New comment by aw1621107 in "A tail-call interpreter in (nightly) Rust"]]></title><description><![CDATA[
<p>> Having a keyword to force it then becomes a very useful thing, vs relying on hopes that future compiler versions and different arch targets will all discover the optimisation opportunity.<p>Having a way to guarantee TCO/TCE is essential for some cases, yes. GP's question, though, was why a keyword specifically and not a hypothetical attribute that effectively does the same thing.</p>
]]></description><pubDate>Mon, 06 Apr 2026 07:25:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47657895</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47657895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47657895</guid></item><item><title><![CDATA[New comment by aw1621107 in "A tail-call interpreter in (nightly) Rust"]]></title><description><![CDATA[
<p>> I wonder why they went with a new keyword; I assumed the compiler would opportunistically do TCO when it thinks it's possible, and I figured that the simplest way to require TCO (or else fail compilation) could be done with an attribute.<p>The first RFC for guaranteed tail calls stated an attribute on `return` was a possible alternative "if and when it becomes possible to attach attributes to expressions" [0]. That was from pre-1.0, though; I believe Rust now supports attributes on at least some expressions, but I don't know when that was added.<p>The second RFC [1] doesn't seem to discuss keyword vs. attribute, but it does mention that the proof-of-concept implementation "parses `become` exactly how it parses the `return` keyword. The difference in semantics is handled later", so perhaps a keyword is actually simpler implementation-wise?<p>There's some more discussion on attribute vs. keyword starting here [2], though the attribute being discussed there is a function-level attribute rather than something that effectively replaces a `return`. The consensus seems to be that a function-level attribute is not expressive enough to support the desired semantics, at least. There's also a brief mention of `become` vs. `return` (i.e., new keyword because different semantics).<p>[0]: <a href="https://github.com/rust-lang/rfcs/pull/81/changes" rel="nofollow">https://github.com/rust-lang/rfcs/pull/81/changes</a><p>[1]: <a href="https://github.com/DemiMarie/rfcs/blob/become/0000-proper-tail-calls.md" rel="nofollow">https://github.com/DemiMarie/rfcs/blob/become/0000-proper-ta...</a><p>[2]: <a href="https://github.com/rust-lang/rfcs/pull/1888#issuecomment-278988088" rel="nofollow">https://github.com/rust-lang/rfcs/pull/1888#issuecomment-278...</a></p>
]]></description><pubDate>Mon, 06 Apr 2026 07:23:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47657889</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47657889</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47657889</guid></item><item><title><![CDATA[New comment by aw1621107 in "A tail-call interpreter in (nightly) Rust"]]></title><description><![CDATA[
<p>That touches on why TCO/TCE is desirable, but it doesn't address why the Rust devs chose to use a keyword for guaranteed TCE.</p>
]]></description><pubDate>Mon, 06 Apr 2026 06:51:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47657704</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47657704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47657704</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>Hrm, OK, that makes sense. Thanks for taking the time to explain! Guessing optimizing x+y*z would entail something similar to the third eval() definition but with Expr<L, Expr<L2, R2, Mul>, Add> instead.<p>I think at this point I can see how my initial assertion was wrong - specialization isn't fully orthogonal to expression templates, as the former is needed for some of the latter's use cases.<p>Does make me wonder how far one could get with rustc's internal specialization attributes...</p>
]]></description><pubDate>Wed, 01 Apr 2026 23:23:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47607931</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47607931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47607931</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> With expression templates you can translate one to the other, this is a static manipulation that does not depend on compiler level.<p>How does that work on an implementation level? First thing that comes to mind is specialization, but I wouldn't be surprised if it were something else.<p>> What does depend on the compiler is whether the incidental trivial function calls to operators gets optimized away or not.<p>> Of course in many cases the optimization level does matter: if you are optimizing small vector operators to simd inlining will still be important.<p>Perhaps this is the source of my confusion; my uses of expression templates so far have generally been "simpler" ones which rely on the optimizer to unravel things. I haven't been exposed much to the kind of matrix/BLAS-related scenarios you describe.</p>
]]></description><pubDate>Wed, 01 Apr 2026 15:00:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47601841</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47601841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47601841</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> If you have a genuine question you should ask one but please do not disguise so that it only goes to prove your point.<p>I think my question is pretty simple: "How does an optimizer-independent expression template implementation work?" Evidently the resources I've found so far describe "optimizer-dependent expression templates", and apparently none of the "expression template" implementations I've had reason to look at disabused me of that notion.<p>> My comments here is not work, and I am not here to win arguments, but most of the time learn from other people's experiences, and sometimes dispute conclusions based on those experiences too.<p>Sure, and I like to learn as well from the more knowledgeable/experienced folk here, but as much as I want to do so here I'm finding it difficult since there's precious little for me to go off of beyond basically just being told I'm wrong.<p>> If you don't believe me, or you believe expression templates work differently, then so be it.<p>I <i>want</i> to understand how you understand expression templates, but between the above and not being able to find useful examples of your description of expression templates I'm at a bit of a loss.</p>
]]></description><pubDate>Tue, 31 Mar 2026 15:53:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47589255</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47589255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47589255</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> The example is false because that's not how you would write an expression template for given computation<p>OK, so how would you write an expression template for the given computation, then?<p>> Expression templates do not rely on O1, O2 or O3 levels being set - they work the same way in O0 too and that may be the hint you were looking for.<p>This claim confuses me given how expression templates seem to work in practice?<p>For example, consider Todd Veldhuizen's 1994 paper introducing expression templates [0]. If you take the examples linked at the top of the page and plug them into Godbolt [1] (with slight modifications to isolate the actual work of interest), you can see that with -O0 you get calls to overloaded operators instead of the nice flattened/unrolled/optimized operations you get with -O1.<p>You see something similar with Eigen [2] - you get function calls to "raw" expression template internals with -O0, and you need to enable the optimizer to get unrolled/flattened/etc. operations.<p>Similar thing yet again with Blaze [3].<p>At least to me, it looks like expression templates produce <i>quite</i> different outputs when the optimizer is enabled vs. disabled, and the -O0 outputs very much don't resemble the manually-unrolled/flattened-like output one might expect (and arguably gets with optimizations enabled). Did all of these get expression templates wrong as well?<p>[0]: <a href="https://web.archive.org/web/20050210090012/http://osl.iu.edu/~tveldhui/papers/Expression-Templates/exprtmpl.html" rel="nofollow">https://web.archive.org/web/20050210090012/http://osl.iu.edu...</a><p>[1]: <a href="https://cpp.godbolt.org/z/Pdcqdrobo" rel="nofollow">https://cpp.godbolt.org/z/Pdcqdrobo</a><p>[2]: <a href="https://cpp.godbolt.org/z/3x69scorG" rel="nofollow">https://cpp.godbolt.org/z/3x69scorG</a><p>[3]: <a href="https://cpp.godbolt.org/z/7vh7KMsnv" rel="nofollow">https://cpp.godbolt.org/z/7vh7KMsnv</a></p>
]]></description><pubDate>Tue, 31 Mar 2026 11:53:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47586019</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47586019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47586019</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> What nasal demons?<p>Those that result from the pre-C++26 behavior where use of an indeterminate value is UB.<p>> But there’s lots of room between deterministic values and UB.<p>That's a fair point. I do think I made a mistake in how I represented the authors' decision, as it seems the authors intentionally wanted the predictability of fixed values (italics added):<p>> Reading an uninitialized value is never intended and a definitive sign that the code is not written correctly and needs to be fixed. At the same time, we do give this code well-defined behaviour, <i>and if the situation has not been diagnosed, we want the program to be stable and predictable</i>. This is what we call erroneous behaviour.<p>> And I think the word “indeterminate” should have been reserved for that sort of behavior.<p>Perhaps, but that'd be a departure from how the word has been/is used in the standard so there would probably be some resistance against redefining it.</p>
]]></description><pubDate>Tue, 31 Mar 2026 02:16:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582003</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47582003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582003</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> since you're not dealing with the computations directly but rather expressions (nodes) through which you are deferring the computation part until the very last moment (when you have a fully built an expression of expressions, basically almost an AST).<p>Right, I understand that. What is not exactly clear to me is how you get from the tree of deferred expressions to the "flat" optimized expression without involving the optimizer.<p>Take something like the above example for instance - w = x + y * z for vectors w/x/y/z. How do you get from that to effectively<p><pre><code>    for (size_t i = 0; i < w.size(); ++i) {
        w[i] = x[i] + y[i] * z[i];
    }
</code></pre>
without involving the optimizer at all?</p>
]]></description><pubDate>Mon, 30 Mar 2026 19:55:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578958</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47578958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578958</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>They are both; there are things that Rust's macros can do metaprogramming-wise that C++ templates cannot do and vice-versa.<p>Rust's macros work on a syntactic level, so they are more powerful in that they can work with "normally" invalid code and perform token-to-token transformations (proc macros can even effectively function as compiler extensions/plugins), and less powerful in that they don't have access to semantic information.</p>
]]></description><pubDate>Mon, 30 Mar 2026 18:49:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578176</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47578176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578176</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>I had always thought expression templates at the very least needed the optimizer to inline/flatten the tree of function calls that are built up. For instance, for something like x + y * z I'd expect an expression template type like sum<vector, product<vector, vector>> where sum would effectively have:<p><pre><code>    vector l;
    product& r;
    auto operator[](size_t i) {
        return l[i] + r[i];
    }
</code></pre>
And then product<vector, vector> would effectively have:<p><pre><code>    vector l;
    vector r;
    auto operator[](size_t i) {
        return l[i] * r[i];
    }
</code></pre>
That would require the optimizer to inline the latter into the former to end up with a single expression, though. Is there a different way to express this that doesn't rely on the optimizer for inlining?</p>
]]></description><pubDate>Mon, 30 Mar 2026 18:40:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578046</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47578046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578046</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> Rust cannot take a const function and evaluate that into the argument of a const generic<p>Assuming I'm interpreting what you're saying here correctly, this seems wrong? For example, this compiles [0]:<p><pre><code>    const fn foo(n: usize) -> usize {
        n + 1
    }

    fn bar<const N: usize>() -> usize {
        N + 1
    }

    pub fn baz() -> usize {
        bar::<{foo(0)}>()
    }
</code></pre>
In any case, I'm a little confused how this is relevant to what I said?<p>[0]: <a href="https://rust.godbolt.org/z/rrE1Wrx36" rel="nofollow">https://rust.godbolt.org/z/rrE1Wrx36</a></p>
]]></description><pubDate>Mon, 30 Mar 2026 18:02:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47577639</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47577639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47577639</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> For example you cannot design something that comes even close to expression templates libraries.<p>You keep saying this and it's still wrong. Rust is quite capable of expression templates, as its iterator adapters prove. What it isn't capable of (yet) is specialization, which is an orthogonal feature.</p>
]]></description><pubDate>Mon, 30 Mar 2026 17:09:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576967</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47576967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576967</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> C++26 adds destructive moves. They are called relocatable types.<p>I thought those were removed? For example, see Herb's 2025-11/Kona trip report [0]:<p>> For trivial relocatability, we found a showstopper bug that the group decided could not be fixed in time for C++26, so the strong consensus was to remove this feature from C++26.<p>[0]: <a href="https://herbsutter.com/2025/11/10/trip-report-november-2025-iso-c-standards-meeting-kona-usa/" rel="nofollow">https://herbsutter.com/2025/11/10/trip-report-november-2025-...</a></p>
]]></description><pubDate>Mon, 30 Mar 2026 17:03:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576889</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47576889</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576889</guid></item><item><title><![CDATA[New comment by aw1621107 in "C++26 is done: ISO C++ standards meeting Trip Report"]]></title><description><![CDATA[
<p>> Let's say I'm the implementation. If I go get fresh (but not zeroed) memory from the OS to put my stack on, the garbage in there isn't state of the program, right?<p>I'd argue that once you get the memory it's now part of the state of your program, which precludes it from being involved in whatever value you end up reading from the variable(s) corresponding to that memory.<p>> If I want a fixed init value per address, is that allowed as a hardening feature or disallowed as being based on allocation patterns?<p>I'd guess that that specific implementation would be disallowed, but as I'm an internet nobody I'd take that with an appropriately-sized grain of salt.<p>> And would that mean there's still no way to say "Don't waste time initializing it, but don't do any UB shenanigans either. (Basically, pretend it was initialized by a random number generator.)"<p>I feel like you'd need something like LLVM's `freeze` intrinsic for that kind of functionality.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:51:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576721</link><dc:creator>aw1621107</dc:creator><comments>https://news.ycombinator.com/item?id=47576721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576721</guid></item></channel></rss>