<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: babel_</title><link>https://news.ycombinator.com/user?id=babel_</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 10:19:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=babel_" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by babel_ in "Keep Android Open"]]></title><description><![CDATA[
<p>It might actually be the better environmental decision if, instead of buying a <i>new</i> second phone, it is about keeping an existing phone in use rather than adding to the burning heaps of e-waste. Given the rising popularity of refurbished phones, not to mention their lower cost, it might actually be the opposite of what you claim, at least on those grounds.<p>And for the rest, well, "just works" for what? With a little time and effort, it may even get to the point where the "just works" part is a siloed unit, like a SIM card, that is simply installed into the device, making it opt-in and user-owned...</p>
]]></description><pubDate>Wed, 29 Oct 2025 13:36:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45746650</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=45746650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45746650</guid></item><item><title><![CDATA[New comment by babel_ in "Defer: Resource cleanup in C with GCCs magic"]]></title><description><![CDATA[
<p>> on top of C.<p>If we're referring to the "C is a subset of C++" / "C++ is a superset of C" idea, then this just hasn't been the case for some time now, and the two continue to diverge. It came up recently, so I'll link to a previous comment on it (<a href="https://news.ycombinator.com/item?id=45268696">https://news.ycombinator.com/item?id=45268696</a>). I replied to that with a few of the other current/future ways C is proposing to diverge even further from C++, since it's increasingly relevant to the discussion of what C2y (and beyond) will do, and of how C code and C++ code will become ever more incompatible - at least at the syntactic level. Presuming the C ABI continues to preserve its stability and the working groups remain cordial, as they have done, the future is more "C & C++" than "C / C++", with the two still walking side-by-side... but clearly taking different steps.<p>If we're just talking about features C++ has that C doesn't, well, sure. RAII is the big one underpinning a lot of other C++ stuff. But C++ still can't be used in many places that C is, and part of why is the baggage that features like RAII require (particularly function overloading and name mangling, even just for destructors alone)... which was carefully considered by the `defer` proposals, such as in N3488 (recently revised as N3687[0]) under section 4, and in other write-ups (including those by that proposal's author) like "Why Not Just Do Simple C++ RAII in C?"[1] and the "But… What About C++?" section in [2]. In [0] they even directly point to "The Ideal World" (section 4.3) where both `defer` and RAII are available, since, as they explain in 4.2, there are benefits to `defer` that RAII misses, and generally each has uses that the other does not cleanly (if at all) represent! 
Of course, C++ does still have plenty of nice features that are sorely missing in C (personally, I'm longing for the day C gets proper namespaces), so I'm happy we always have it as an option and alternative... but, in turn, I feel the same about C. Sadly it isn't as simple as "just use C++" in several domains I care about, let alone dealing with the "which dialect of C++" problem; exceptions or not, etc, etc...<p>[0]: <a href="https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3687.htm" rel="nofollow">https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3687.htm</a>
[1]: <a href="https://thephd.dev/just-put-raii-in-c-bro-please-bro-just-one-more-destructor-bro-cmon-im-good-for-it" rel="nofollow">https://thephd.dev/just-put-raii-in-c-bro-please-bro-just-on...</a>
[2]: <a href="https://thephd.dev/c2y-the-defer-technical-specification-its-time-go-go-go" rel="nofollow">https://thephd.dev/c2y-the-defer-technical-specification-its...</a></p>
]]></description><pubDate>Wed, 01 Oct 2025 10:01:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45436030</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=45436030</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45436030</guid></item><item><title><![CDATA[New comment by babel_ in "Defer: Resource cleanup in C with GCCs magic"]]></title><description><![CDATA[
<p>Well, each `defer` proposal for C agreed that it shouldn't be done the way Go does it, and should just be "run this at the end of lexical scope", so it'll certainly be less surprising than the alternative... and far easier to implement correctly on the compiler side... and easier to read and write than the corresponding goto-cleanup some rely on instead. Honestly, I feel it becomes about as surprising as the `i++` expression in a `for` loop, since that, conceptually, is also moved to the end of the loop's lexical scope, to run before the next conditional check. Of course, a better way of representing and visualising the code, even if optional, would help show where and when these statements run, but as a standard feature (especially with some of the proposed safety mechanisms around jumps and other ways it could fail surprisingly) it would hardly seem exotic. Conversely, it is quite likely to expose things that currently fail in surprising ways precisely because we don't have a simple `defer` feature and so wrote something much more complicated and error-prone instead.<p>So, I completely understand the sentiment, but feel that `defer` is a feature that should hopefully move us in the opposite direction, allowing us to rely on less exotic code and to expose & resolve some of the surprising failure paths instead!</p>
]]></description><pubDate>Wed, 01 Oct 2025 09:28:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=45435867</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=45435867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45435867</guid></item><item><title><![CDATA[New comment by babel_ in "Defer: Resource cleanup in C with GCCs magic"]]></title><description><![CDATA[
<p>Jens's macro that this was based on was an implementation of his own proposal (N3434) for `defer`, which was one of a few preceding what finally became TS 25755! So, yes, C2y is lined up to have "defer: the feature", but until then, we can explore "defer: the macro" (at least on GCC builds, as formulated).</p>
]]></description><pubDate>Wed, 01 Oct 2025 09:17:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45435797</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=45435797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45435797</guid></item><item><title><![CDATA[New comment by babel_ in "Defer: Resource cleanup in C with GCCs magic"]]></title><description><![CDATA[
<p>Testing with Jens's macro that this was based on, I found that the always_inline was redundant even under -O1 (<a href="https://godbolt.org/z/qoh861Gch" rel="nofollow">https://godbolt.org/z/qoh861Gch</a>, via the examples from N3488, which became the baseline for the TS for C2y and recently got a new revision, N3687). So there's an interesting trade-off: by not inlining within the macro under -O0 or similar unoptimised builds, you can still visibly see the `defer` in the disassembly, whereas with the inlining the deferred calls go unmarked. But there's an interesting twist here, as "defer: the feature" is likely not going to be implemented as "defer: the macro": compilers will have the keyword (just `defer` in TS 25755, or something else that uses a header for sugared `defer`) and may see the obvious optimised rewrite as the straightforward way of implementing it in the first place (as some already have), meaning we can have the benefit of the optimised inline with the opportunity to also keep it clearly identifiable, even in unoptimised and debug builds, which would certainly be nice to have!</p>
]]></description><pubDate>Wed, 01 Oct 2025 09:10:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45435770</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=45435770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45435770</guid></item><item><title><![CDATA[New comment by babel_ in "In Defense of C++"]]></title><description><![CDATA[
<p>The two will also continue to diverge over time; after all, C2y should have the defer feature, which C++ will likely never add. Even if we used polyfills to let C++ compilers support it, the performance characteristics could be quite different: comparing a polyfill (as suggested in either N3488 or N3434) to a true defer feature, C++ would be in for a nasty shock as the "zero cost abstractions" language, compared to how GCC does the trivial re-ordering and inlining even at -O1, as quickly tested here: <a href="https://godbolt.org/z/qoh861Gch" rel="nofollow">https://godbolt.org/z/qoh861Gch</a><p>I used the [[gnu::cleanup]] attribute macro (as in N3434) since it was simple and worked with the current default GCC on CE, but based on TS 25755 the implementation of defer and its optimisation should be almost trivial, and some compilers have already added it. Oh, and the polyfills don't support the braceless `defer free(p);` syntax for simple defer statements, so there goes the full compatibility story...<p>While there are existing areas where C has diverged, as other features such as case ranges (N3370, and maybe N3601) are added that C++ does not have parity with, C++ will continue to drift further from the "superset of C" claim some of the 'adherents' have clung to for so long. Of course, C has adopted features and syntax from C++ (C2y finally getting if-declarations via N3356 comes to mind), and some features are still likely to get C++ versions (labelled breaks come to mind, via N3355, and maybe N3474 or N3377, with C++ following via P3568), so the (in)compatibility story is simply going to keep getting more nuanced and complicated over time, and we should probably get this illusion of compatibility out of our collective culture sooner rather than later.</p>
]]></description><pubDate>Wed, 17 Sep 2025 16:06:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45277585</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=45277585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45277585</guid></item><item><title><![CDATA[New comment by babel_ in "Formatting code should be unnecessary"]]></title><description><![CDATA[
<p>The blog post, in its opening section, directly points out:<p>> Everyone had their own pretty-printing settings for viewing it however they wanted<p>This is an example of how treating storage and presentation as two separate concerns obviates a large swathe of low-value yet high-friction concerns with current "draw it as you store it" plain text code.<p>>> It did not object to the presence of mechanically-enforced style rules<p>Quite the opposite, by my reckoning! I won't belabour the dissonance of "linter enforced" somehow not being "mechanically enforced", since I think that merely reflects a subtly different interpretation of those words, one that adds nothing to the conversation. Instead, note the prior quote, quite literally from the leading section, pointing out how you don't have "mechanically enforced" rules in such a scheme as the blog suggests. In particular, by letting someone view code "however they wanted", we're not merely talking indentation or casing; we're talking about using code to present the code, potentially in a contextually relevant manner.<p>This is, to my mind, quite the opposite of mechanically enforcing a set of style rules, since that would result in a fixed, static presentation, akin to merely "what flavour of indentation do you like". Here, we see the idea of contextually presenting the code as per your current needs and wants: for example, to directly craft the "one-off" not as an exception to the norm, i.e. "please turn off formatting for these lines so you don't disrupt it or trip on a bunch of special cases", but rather as a "here's how you should present this specific thing", in a way that is at the heart of this entire endeavour in the first place: programming the logic to get the intended results, now simply reflected back upon the task of programming itself (and upon arguably the most important part, reading the code). 
By establishing these "rules" and patterns, it focuses the task on how to make the code more readable as a direct consequence of considering how to present and format it, with the ability to handle the special cases in that "one-off" manner with simple hard-coded patterns (i.e. "when the code is like this, present it exactly like that"), but of course also accumulating and generalising to handle even more cases, only now able to perform the delicate, "hand-crafted" formatting on code you're only just looking at for the first time, finding that it is now already formatted exactly how you needed it, or can be switched to another contextual mode easily with a quick addition to a set of such places to activate it, or a direct command to present it in such a way regardless.<p>Likewise, nowhere did the article state that this could not be shared, as people are often wont to do. The blog doesn't even talk about what people are currently getting up to with similar ideas now, with a little more rendering capability than the 1980s could reasonably provide. So, this hardly seems like glorification, even when it discusses not having to waste time debating linter/autoformat settings with one another. Indeed, it holds back from mentioning what can be done with some of the ideas it so casually includes, such as live environments (think about it, the presentation reflecting the current meaning, semantics, values, or state of the code as it runs, or while testing/debugging! that's something we currently either lack in most editors/IDEs, or are relegated to perhaps some basic syntax highlighting changes) or some of the interesting ways some "refactors" are actually entirely superficial and can be reframed as presentational changes since they do not alter the underlying semantics (or literal IR), such as "what order should these variables be declared in?" 
and other similarly banal, or indeed perhaps more serious and useful, presentational shifts we could explore with better tools (such as exploring "order-of-operations" sequencing in the "business logic" for edge-cases or to improve clarity, finding equivalent but more intelligible database queries without impacting optimisation, etc), without worrying that entirely superficial changes might need a meeting to decide how to handle merges because two people renamed the same function or its arguments, or similar clashes that are completely brittle right now.<p>The current tooling, particularly the use of linters and similar static analysis for auto-formatting, is built on a compromise: the underlying conceit that the storage medium and the presentation must be mechanically connected with little room for alteration (I still see people claiming syntax highlighting is tantamount to sin, unironically, so the extreme positions here are alive and well, thankfully barring calls for magnetised needles), and that form is given primacy over function, syntax over semantics. This keeps bringing in pointless discrepancies over what that form/syntax should be, precisely because we can and should disagree, since our own needs and tastes are individual, yet we are forced into some compromise purely for the sake of having a consistent, canonical form that will be presented identically on everyone's screen, barring only editor/IDE-level differences such as syntax highlighting, themes, fonts, or indentation. 
Those are perhaps the most superficial changes to presentation and formatting that could be made, yet they are the only ones most code and editors "allow" the user to control and customise to their own needs, perhaps even going so far as to quickly switch them up with shortcuts or commands.<p>Now, with that in mind, we reflect on the blog, and on the use of some canonical storage of code (minified code, IR, or simply language-specific canonical formatting) with explicitly non-canonical presentation, alleviating the concerns about people disagreeing over how to format a 2D array or something similarly innocuous, since they are all free to format it exactly however they please, either with a more manual "pushing syntax around" approach akin to moving characters/symbols in an editor, or programmatically, extending from "pretty printing" into a rich, contextual and dynamic approach which you as the programmer are free to configure to meet your exact needs.<p>Does that sound like a glorification of "mechanically enforced" style rules? Like it's destroying the signal rather than trying to expose and even amplify it? Like there is no room for us humans to craft and refine how something is presented to make it more intelligible and understandable? I hope not. Because, by my reckoning, this blog and the ideas it discusses are one of the few directions we could reasonably and understandably start down to resolve the issues you, I, and the blogger all agree on here, and with clear historical precedent to show it's not only achievable, but that it was achievable with only a fraction of the hardware and understanding widely available today. 
The "subtext" here feels quite contrary to how you are presenting it, though assuming such a "subtext" would indeed make the blog less coherent due to the continued cognitive dissonance of assuming the blog is suggesting "take away autoformatting/linting and then add it back in by a different name", instead of the really quite significant change it's actually suggesting we can do... and, indeed, wouldn't even have to change much these days to achieve it, with canonical or even somewhat minified code being perfectly acceptable for a line-oriented VCS to handle, without needing to figure out a suitable textual representation for the IR or otherwise needing to handle a dedicated binary/wire format.<p>Oh, and FWIW, to your FWIW, I felt it was the correct way to approach the comment, given that the substance of the blog post was not reflected in a comment that focused entirely on "formatting code" as in the title, in such a way that could be composed wholesale by riffing solely on the title. No direct reference or allusions to specific points in the blog were made, nor anything about what the blog actually suggests that directly supports your comment. Because, FWIW to my own FWIW, I actually agreed with the bulk of your comment, but I also felt the underlying position of said comment was only being presented in this way because you had not read through the blog post, and instead jumped straight in off the title alone, since, again, I felt nothing in the comment connected to the post beyond its title. Formatting is critical, which is why we should not rely on a static, mechanically fixed view of the world/code, and certainly not one decreed by "senior leads of sprints past" (or whatever the authority or popularity we are deferring to is on a given project or Tuesday). 
"Formatting" as a direct, mechanical act enforced either by a human at a keyboard pushing characters around, or by a linter following a style guide, is something that indeed "should" be "unnecessary", to elevate "formatting" (presentation) and make it a clear and important part of how we prepare code and make it more amenable to reading and understanding for our wetware, rather than convenient for fragile and lazy software. Why would we compromise on this now, when it could already be done in the 80s, and rely on static, linter enforced style rules at a time when we have so many cycles to spare on rendering code that we often render it in a web browser for the sake of "portability" (a huge irony given the origins of linters), and need not waste our time arguing over presentation when we could be making the presentations more useful to ourselves without concern for making it less useful for others, and then getting on with the actual task at hand? To me, this blog is all about elevating and prioritising formatting, without stamping on anyone's toes.<p>Still, to each their own. Oh, but that was kinda the point of the blog...</p>
]]></description><pubDate>Mon, 08 Sep 2025 23:49:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45175635</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=45175635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45175635</guid></item><item><title><![CDATA[New comment by babel_ in "Formatting code should be unnecessary"]]></title><description><![CDATA[
<p>The blog entry is short and simple, perhaps consider reading it before knee-jerk reacting to the title, and then you might understand why "should" and "unnecessary" are operative in said title.</p>
]]></description><pubDate>Mon, 08 Sep 2025 06:40:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45165255</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=45165255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45165255</guid></item><item><title><![CDATA[New comment by babel_ in "Hierarchical Reasoning Model"]]></title><description><![CDATA[
<p>Quite the opposite: a clever algorithm needs less compute, and can leverage extra compute even more.</p>
]]></description><pubDate>Sun, 27 Jul 2025 19:36:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44703954</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=44703954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44703954</guid></item><item><title><![CDATA[New comment by babel_ in "Hierarchical Reasoning Model"]]></title><description><![CDATA[
<p>AlphaZero may have the rules built in, but MuZero and the other follow-ups didn't. MuZero not only matched or surpassed AlphaZero, but it did so with less training, especially in the EfficientZero variant; notably also on the Atari playground.</p>
]]></description><pubDate>Sun, 27 Jul 2025 13:00:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44701027</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=44701027</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44701027</guid></item><item><title><![CDATA[New comment by babel_ in "Emergent Misalignment: Narrow Finetuning Can Produce Broadly Misaligned LLMs"]]></title><description><![CDATA[
<p>In a high-enough dimensional space, distances concentrate: the (normalised) distance between any two random vectors tends towards the same value, so given a "good" concept, all other related "good" concepts and all "evil" concepts are approximately equidistant from it. This is inescapable, and therefore so is the Waluigi effect.<p>Even accounting for (statistical) correlations, the "evil" versions of a concept naturally differ only slightly from the "good" concept (since otherwise they'd be evil versions of another concept, no?), meaning that so long as there is some expressible "evilness", the classic notion of vector arithmetic from word2vec will carry over, even as some ineffable "evil vibes" that may apply in any number of directions and thus to a vast sway of concepts. You can take an average of a bunch of "evil" vectors and end up with a vector that's statistically correlated to this "evil vibe", so combining it with a "good" concept that is otherwise uncorrelated lets you create an "evil negative" of even the most "good" concept possible... and, by dimensionality, it was already close in distance and similarity to begin with, so the artifact of this "vibe" was inherently embedded within the space from the start; but emphasising this "vibe", or doing any further statistical correlation (such as 'finetuning'), increases correlation to this "evilness" and suddenly "corrupts the incorruptible", flipping a "good" concept into an "evil" negative version of that concept (hence, Waluigi).<p>Because of dimensionality, even accounting for statistical correlation between any given vectors, the distances between any embedding vectors become moot, especially since the dimensions are meaningless (we can increase the effective "dimensionality" by accepting approximation, compacting even more dimensions into the small discrepancies of low precision in any distance metric). 
So, for all intents and purposes, "evil" concepts aren't just similar to each other, but similar to their corresponding "good" counterparts, and to all other vectors as well, making misalignment (and, indeed, the aforementioned Waluigi effect) an inevitable emergent property by construction.<p>At no point were these distances or similarities "meaningless"; instead they demonstrate the fine-wire tightrope we're navigating by dint of constructing our original embeddings as a vector space fitted to data, as the clustering and approximate nearest neighbours along any dimensions like this result in a sparsity paradox of sorts. We hope to take the next "step" towards something meaningfully adjacent and thus refine our concepts, but any time we "misstep" we end up imperceptibly stepping onto a nearby but different (perhaps "evil") tightrope. We're at little risk of "falling" into the void between points (though auto-regression means we must end up at some attractor state instead, which we might think of as some infinite plummet through negative space, potentially an implicit one with no direct vector representation), but we may instead switch between "good" and "evil" versions of a concept with such missteps... and, by the argument around approximate values effectively placing additional dimensions around any basis vector, this quickly begins to resemble a fractal space, like flipping a coin or rolling a die, where the precision with which you measure the results may change the output (even just rounding to the nearest 0.001 instead of 0.01 may flip "good" to "evil", etc) in such a way that we can't meaningfully predict where the "good" and "evil" vectors (and thus outputs) will arise. This holds even if we started with human-constructed basis dimensions (i.e. predefined dimensions for 'innate' concepts as basis vectors), because by approximation the construction will always "smuggle" in additional vectors that diverge from our intent. The tightropes crisscross around where we "want" to step (near basis vectors) because that's where we're already likely to step, meaning any statistical correlation must land in the vicinity, and, by dimensionality, so must unrelated concepts, because it's "as good a place as any" under the distance metric. If they're in that vicinity too, then they're likely to co-occur, and now we get a survivorship bias that ensures these negatives and "evil vibes" (and thus any Waluigi) remain nestled "close by", since those are the areas we were sampling from anyway (acting as a sort of attractor that pulls vectors towards them), and unavoidably so: coming at it from the other direction, those are the points from which we initially started constructing vectors and statistical correlations in the first place. In other words, it's not a bug, it's literally the only feature "working as intended".</p>
]]></description><pubDate>Mon, 05 May 2025 20:05:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=43898903</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=43898903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43898903</guid></item><item><title><![CDATA[New comment by babel_ in "Using uv as your shebang line"]]></title><description><![CDATA[
<p>That's not the uvx way? Dependencies at the top of a script was outlined in PEP 723, which uv/uvx added support for. Not everything is a "project", some files are just one-and-done scripts which should not have to carry the burden of project/environment management, but absolutely should still be able to make use of dependencies. The "uvx way" of doing it just means that it doesn't have to pollute the global/user install, and can even be isolated into a separate instance.<p>Besides, not everyone uses conda, and it would be quite a stretch to say it "might as well" be a built-in compared to, well, the actual built in, pip! Plus, uv works quite nicely as "just a pip replacement", which is how I started with it, so it aligns quite well to the actual built-in paradigm of the language.</p>
]]></description><pubDate>Tue, 28 Jan 2025 19:15:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=42856643</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=42856643</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42856643</guid></item><item><title><![CDATA[New comment by babel_ in "Using uv as your shebang line"]]></title><description><![CDATA[
<p>Now that's a trick I should remember!<p>I recently switched over my python aliases to `uv run python` and it's been really quite pleasant, without needing to manage `.venv`s and the rest. No fuss about system installs for python or anything else either, which resolves the old global/user install problem (and is a boon on Debian). Also means you can invoke the REPL within a project/environment without any `activate`, which saves needing to think about it.<p>Only downside for calling a .py directly with uv is the cwd relative pathing to project/environment files, rather than relative to the .py file itself. There is an explicit switch for it, `--project`, which at least is not much of an ask (`uv run --project <path> <path>/script.py`), though a target relative project switch would be appreciated, to at least avoid the repetition.</p>
]]></description><pubDate>Tue, 28 Jan 2025 19:07:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42856532</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=42856532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42856532</guid></item><item><title><![CDATA[New comment by babel_ in "SQL, Homomorphisms and Constraint Satisfaction Problems"]]></title><description><![CDATA[
<p>For anyone curious: the performance difference between Clang and GCC on the example C solution for verbal arithmetic comes down to Clang's auto-vectorisation (deducing SIMD) whilst GCC here sticks with scalar, which is why the counter brings Clang closer in line to GCC (<a href="https://godbolt.org/z/xfdxGvMYP" rel="nofollow">https://godbolt.org/z/xfdxGvMYP</a>). It's actually a pretty nice example of auto-vectorisation (and its limitations) in action, which is a fun tangent from this article (given its relevance to high-performance SMT/SAT solving for CSP).</p>
]]></description><pubDate>Wed, 20 Nov 2024 18:37:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=42196844</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=42196844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42196844</guid></item><item><title><![CDATA[New comment by babel_ in "A bunch of programming advice I'd give to myself 15 years ago"]]></title><description><![CDATA[
<p>I think you're misreading two disparate points, made by different commenters, as a single opinion here.<p>The initial phrase was
> doctors hurt people for years while learning to save them<p>It's then a separate reply, from someone else, about deaths from errors/malpractice.<p>So, nobody seems to be expressing the mentality you are, correctly, lambasting (at least so far as I've read in the comments). But, as it is relevant to the lessons we'd all want to pass back to ourselves (in my opinion, because we wish to hold ourselves accountable to those lessons in future), let's address the elephant in the comment.<p>Normalising deaths, especially the preventable ones, is absolutely not something that anybody here is suggesting (so far as I can read the situation).<p>Normalising responsibility, by recognising that we can cause harm and so <i>do</i> something about it, seems far more in line with the above comments.<p>As you say yourself, there's a time and a place, which is undoubtedly what we hope to foster for those who are younger or at the start of learning any discipline, beyond programming, medicine, or engineering.<p>Nobody is saying that death and loss are acceptable and normalised, but rather that in the real world we need a certain presence around the knowledge that they will occur, to a certain degree, regardless. So, we accept responsibility for what we can prevent, and strive to push the frontier of our capacity further with this in mind. For some, that comes from experience, unfortunately. For others, they can start to grapple with the notion in lieu of it by considering the consequences, and the scale of consequence; the above comments would be evidence of this, at least by implication.<p>These are not the only ways to develop a mindset of responsibility, fortunately, but that is what they can be, even if you find the literal wording to suggest otherwise. I cannot, of course, attest to the "true" feelings of others, but neither can anyone else... 
But in the spirit of the matter, consider: your own sense of responsibility, in turn, seems alert to the ways such thinking can become justification for the very thing it would otherwise prevent, whether as a shield purpose-built for the role or one co-opted out of convenience. That vigilance too becomes integral, as we will always need to avoid complacency, and so must also promote it as a healthy part of a responsible mindset -- lest we become that which we seek to prevent, and all that.<p>Exactly as you say, there's a greater problem, but this thinking is not necessarily a justification for it, and can indeed become another tool to counter it. More responsibility, more mindfulness about our intents, actions, and consequences? That will prove indispensable if we are actually to solve the greater problem, so we must appreciate that different paths will be needed to achieve it; after all, there are many different justifications for that problem, and each will need to be robustly refuted and shown for what it is. Doing so won't solve the problem by itself, but it is one of many steps we will need to take.<p>Regardless, this mindfulness and vigilance about ourselves, as much as about each other, will be self-promoting through the mutual reinforcement of these qualities. If someone must attempt to visualise the staggering scale of consequence as part of developing this, then so be it. In turn, they will eventually grapple with this vigilance as well, as the responsibility behoves them to, else they may end up taking actions, with consequences, equivalent to the exact mentality you fear, even if they did not actually "intend" to. The learning never ends, and the mistakes never will, so we must be aware of the totality of this truth; even if only as best we can manage within our limited abilities.</p>
]]></description><pubDate>Sat, 29 Jun 2024 15:44:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=40831374</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=40831374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40831374</guid></item><item><title><![CDATA[New comment by babel_ in "Chasing a Bug in a SAT Solver"]]></title><description><![CDATA[
<p>Having ended up with a critical bug in the SAT solver I wrote for my undergrad thesis, I know it really can be a challenge to fix one without clear logs. So it's always nice to see a little love for contributing through issues and finding minimal ways to reproduce edge cases.<p>While we do say that good issue contributions are significant and meaningful, we often forget there's more to it than the initial filing, and may overlook the contributions of those who join lengthier issue threads later.<p>(Oh, and yes, that critical bug did impact the undergrad thesis; it could be worked around, but it meant I couldn't show the full benefits of the solver.)</p>
]]></description><pubDate>Fri, 21 Jun 2024 15:45:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=40750755</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=40750755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40750755</guid></item><item><title><![CDATA[New comment by babel_ in "Big data is dead (2023)"]]></title><description><![CDATA[
<p>Many startups seem to aim for this; naturally it's difficult to put actual numbers to it, and I'm sure many pursue multiple aims in the hope one of them sticks. Since "unicorn" really just describes private valuation, it's really the same as saying many aim to get stupendously wealthy. You can't put a number on that, but you can at least see it's a hope for many, though "goal" probably makes it seem like they've got actually achievable plans for it... That, at least, I'm not so convinced of.<p>Startups are, however, atypical of new businesses, ergo the unicorn myth, meaning we see many attempts to follow such a path, which likely prevents many new businesses from actually achieving the more real goals of, well, being a business: succeeding in their venture to produce whatever it is and reach their customers.<p>I describe it as a unicorn "myth" as it very much behaves in such a way, and is misinterpreted similarly to many myths we tell ourselves. Unicorns are rare and successful because they had the right mixture of novel business and the security of investment or buyouts. Startups are purportedly about new ways of doing business, but the reality is only a handful really explore such (e.g. 
if it's SaaS, it's probably not a startup), meaning the others are just regular businesses with known paths ahead (including, of course, following in the footsteps of prior startups, which really is self-refuting).<p>With that in mind, many of the "real" unicorns are realistically just highly valued new businesses (that got lucky and had fallbacks), as they are often not actually developing new approaches to business, whereas the mythical unicorns that startups want to be are half-baked ideas of how they'll achieve that valuation and wealth without much idea of how they do business (or that it can be fluid, matching their nebulous conception of it), just that "it'll come", especially with "growth".<p>There is no nominative determinism, and all that, so businesses may call themselves startups all they like, but if they follow the patterns of startups without the massive safety nets of support and circumstance many of the real unicorns had, then a failure to develop out the business proper means they do indeed suffer themselves by not appreciating 5000 paying customers and instead aim for "world domination", as it were, or acquisition (which they typically don't "survive" from, as an actual business venture). The studies have shown this really does contribute to the failure rate and instability of so-called startups, effectively due to not cutting it as businesses, far above the expected norm of new businesses...<p>So that pet peeve really is indicative of a much more profound issue that, indeed, seems to be a bit of an echo chamber blind spot with HN.<p>After all, if it ought to have worked all the time, reality would look very different from today. Just saying how many don't become unicorns (let alone the failure rate) doesn't address the dissonance from then concluding "but this time will be different". It also doesn't address the idea that you don't need to become a "unicorn", and maybe shouldn't want to either... 
but that's a line of thinking counter to the echo chamber, so I won't belabour it here.</p>
]]></description><pubDate>Mon, 27 May 2024 11:29:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=40489790</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=40489790</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40489790</guid></item><item><title><![CDATA[New comment by babel_ in "Ask HN: What things are happening in ML that we can't hear over the din of LLMs?"]]></title><description><![CDATA[
<p>I think the situation with regulations will be similar to that with interpretability and explanations. There's a popular phrase that gets thrown around, that "there is no silver bullet" (perhaps most poignantly in AIX360's initial paper [0]), as no single explanation suffices (otherwise, would we not simply use that instead?) and no single static selection of them would either. What we need is to have flexible, adaptable approaches that can interactively meet the moment, likely backed by a large selection of well understood, diverse, and disparate approaches that cover for one another in totality. It needs to interactively adapt, as the issue with the "dashboards" people have put forward to provide such coverage is that there are simply too many options and typical humans cannot process it all in parallel.<p>So, it's an interesting unsolved area for how to put forward approaches that aren't quite one-size-fits-all, since that doesn't work, but that still make tailoring to the domain and moment tractable (otherwise we lose what ground we gain and people don't use it again!)... which is precisely the issue that regulation will have to tackle too! Having spoken with some people involved with the AI HLEG [1] that contributed towards the AI Act currently progressing through the EU, there's going to have to be some specific tailoring within regulations that fit the domain, so classically the higher-stakes and time-sensitive domains (like, say, healthcare) will need more stringent requirements to ensure compliance means it delivers as intended/promised, but it's not simply going to be a sliding scale from there, and too much complexity may prevent the very flexibility we actually desire; it's harder to standardise something fully general purpose than something fitted to a specific problem.<p>But perhaps that's where things go hand in hand. 
An issue currently is the lack of standardisation. In general, it's unreasonable to expect people to re-implement these things on their own given the mathematical nuance, yet many of my colleagues agree it's usually the most reliable way. Things like scikit had an opportunity, sitting as a de facto interface for the basics, but niche competitors then grew and grew, many of which simply ignored it. Especially with things like [0], there are a bunch of wholly different "frameworks" that cannot intercommunicate except by someone knuckling down and fudging some dataframes or ndarrays, and that's just within Python, let alone those in R (and there are many) or C++ (fewer, but notable). I'm simplifying somewhat, but it means that plenty of isolated approaches simply can't work together, meaning model developers may have little choice but to use whatever batteries are available! Unlike with, say, Matplotlib, I don't see much chance for declarative/semi-declarative layers to take over here, in the way pyplot and seaborn did; those enabled people to empower everything backed by Matplotlib "for free", with downstream benefits such as intervals or live interaction arriving via a lower-level plugin or upgrade. After all, scikit was meant to be exactly this for SciPy! Everything else like that is generally focused on either models (e.g. Keras) or explanations/interpretability (e.g. Captum or Alibi).<p>So it's going to be a real challenge figuring out how to get regulations that aren't so toothless that people don't bother, or are easily satisfied by some token measure, but that also don't leave us open to other layers of issues, such as adversarial attacks on explanations or developer malfeasance. Naturally, we don't want something easily gamed, which the ones causing the most trouble and harm can just bypass! 
So I think there's going to have to be a bit of give and take on this one: the regulators must step up while industry steps down, since there's been far too much "oh, you simply must regulate us, here, we'll help draft it" going around lately for my liking. There will be a time for industry to come back to the fore, when we actually need to figure out how to build something that satisfies, and ideally it's something we could engage in mutually, prototyping and developing both the regulations and the compliant implementations such that there are no moats and there's a clearly better way to do things, one that ultimately would probably be more popular anyway even without any of the regulatory overhead; when has a clean break and a freshening of the air not been a benefit? We've got a lot of cruft in the way that's making everyone's jobs harder, to which we're only adding more and more layers, which is why so many are pursuing clean-ish breaks (bypassing, say, PyTorch or JAX, and going straight to new, vectorised, Python-esque dialects). The issue is, of course, the 14-standards problem: now so many are competing that the number only grows, preventing the very thing all of these intended to do, refreshing things so we can get back to the actual task! So I think a regulatory push can help with that, and that industry then has a once-in-a-lifetime chance to ride it through to the actual thing we need, getting this stuff out there to millions, if not billions, of people.<p>A saying keeps coming back to mind for me: all models are wrong, but some are useful. (Interpretable) AI, explanations, regulations, they're all models, so of course they won't be perfect... if they were, we wouldn't have this problem to begin with. What it all comes back to is usefulness. Clearly, we find these things useful, or we wouldn't have them, necessity being the mother of invention and all, but then we must actually make sure what we do is useful. 
Spinning wheels inventing one new framework after the next doesn't seem like that to me. Building tools that people can make their own, but know that no matter what, a hammer is still a hammer, and someone else can still use it? That seems a much more meaningful investment, if we're talking the tooling/framework side of things. Regulation will be much the same, and I do think there are some quite positive directions, and things like [1] seem promising, even if only as a stop-gap measure until we solve the hard problems and have no need for it any more -- though they're not solved yet, so I wouldn't hold out for such a thing either. Regulations also have the nice benefit that, unlike much of the software we seem to write these days, they're actually vertically and horizontally composable, and different places and domains at different levels have a fascinating interplay and cross-pollination of ideas: sometimes we see nation-states following in the footsteps of municipalities or towns, other times a federal guideline inspires new institutional or industrial policies, and all such combinations. Plus, at the end of the day, it's still about people, so if a regulation needs fixing, well, it's not like you're trying to change the physics of the universe, are you?<p><pre><code>  [0]: Vijay Arya et al. "One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques" https://arxiv.org/abs/1909.03012
  [1]: High-Level Expert Group on AI "Ethics Guidelines for Trustworthy AI"
  Apologies, I'll have to just cite those, since while there are papers associated with the others, it's quite late now, so I hope the recognisable names suffice.</code></pre></p>
]]></description><pubDate>Fri, 29 Mar 2024 02:31:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=39859993</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=39859993</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39859993</guid></item><item><title><![CDATA[New comment by babel_ in "Ask HN: What things are happening in ML that we can't hear over the din of LLMs?"]]></title><description><![CDATA[
<p>So, from the perspective I have within the subfield I work in, explainable AI (XAI), we're seeing a bunch of fascinating developments.<p>First, as you mentioned, Rudin continues to prove that the reason for using AI/ML is that we don't understand the problem well enough; otherwise we wouldn't even think to use it! So, pushing our focus to better understand the problem, and then leveraging ML concepts and techniques (including "classical AI" and statistical learning), we're able to make something that not only outperforms some state-of-the-art in most metrics, but often even is much less resource intensive to create and deploy (in compute, data, energy, and human labour), with added benefits from direct interpretability and post-hoc explanations. One example has been the continued primacy of tree ensembles on tabular datasets [0], even for the larger datasets, though they truly shine on the small to medium datasets that actually show up in practice, which from Tigani's observations [1] would include most of those who <i>think</i> they have big data.<p>Second, we're seeing practical examples of exactly this outside Rudin! In particular, people are using ML more to do live parameter fine-tuning that otherwise would need more exhaustive searches or human labour that are difficult for real-time feedback, or copious human ingenuity to resolve in a closed-form solution. Opus 1.5 is introducing some experimental work here, as are a few approaches in video and image encoding. These are domains where, as in the first, we understand the problem, but also understand well enough that there are search spaces we simply don't know enough about to be able to dramatically reduce. 
Approaches like this have been bubbling out of other sciences (physics, complexity theory, bioinformatics, etc), leading to some interesting work in distillation and extraction of new models from ML, or "physically aware" operators that dramatically improve neural nets, such as Fourier Neural Operators (FNO) [2], which embed FFTs rather than forcing them to be relearned (as has been found to often happen) for remarkable speed-ups with PDEs such as in fluid dynamics, with promise already shown in climate modelling [3] and materials science [4]. There are also many more operators, which all work completely differently, yet bring human insight back to the problem, and sometimes lead to extracting a new model for us to use without the ML! Understanding begets understanding, so the "shifting goalposts" of techniques considered "AI" is a good thing!<p>Third, specifically on improvements in explainability, we've seen the Neural Tangent Kernel (NTK) [5] rapidly go from strength to strength since its introduction. While rooted in core explainability, vis-à-vis making neural nets more mathematically tractable to analysis, it has not only inspired other approaches [6] and behavioural understanding of neural nets [7, 8], but also novel ML itself [9], with ways to transfer the benefits of neural networks to far less resource-intensive techniques; [9]'s RFM kernel machine proves competitive with the best tree ensembles from [0], and even has an advantage on numerical data (plus outperforms prior NTK-based kernel machines). An added benefit is that the approach underpinning [9] itself leads to new interpretation and explanation techniques, similar to integrated gradients [10, 11] but perhaps more reminiscent of the idea in [6].<p>Finally, specific to XAI, we're seeing people actually deal with the problem that, well, people aren't really using this stuff! 
XAI in particular, yes, but also the myriad of interpretable models a la Rudin or the significant improvements found in hybrid approaches and reinforcement learning. Cicero [12], for example, does have an LLM component, but uses it in a radically different way compared to most people's current conception of LLMs (though, again, ironically closer to the "classic" LLMs for semantic markup), much like the AlphaGo series altered the way the deep learning component was utilised by embedding and hybridising it [13] (its successors obviating even the traditional supervised approach through self-play [14], and beyond Go). This is all without even mentioning the neurosymbolic and other approaches to embed "classical AI" in deep learning (such as RETRO [15]). Despite these successes, adoption of these approaches is still very far behind, especially compared to the zeitgeist of ChatGPT style LLMs (and general hype around transformers), and arguably much worse for XAI due to the barrier between adoption and deeper usage [16].<p>This is still early days, however, and again to harken Rudin, we don't understand the problem anywhere near well enough, and that extends to XAI and ML as problem domains themselves. Things we can actually understand seem a far better approach to me, but without getting too Monkey's Paw about it, I'd posit that we should really consider if some GPT-N or whatever is actually what we <i>want</i>, even if it did achieve what we thought we wanted. Constructing ML with useful and efficient inductive bias is a much harder challenge than we ever anticipated, hence the eternal 20 years away problem, so I just think it would perhaps be a better use of our time to make stuff like this, where we know what is <i>actually</i> going on, instead of just <i>theoretically</i>. It'll have a part, no doubt, Cicero showed that there's clear potential, but people seem to be realising "... is all you need" and "scaling laws" were just a myth (or worse, marketing). 
Plus, all those delays to the 20 years weren't for nothing, and there's a lot of really capable, understandable techniques just waiting to be used, with more being developed and refined every year. After all, look at the other comments! So many different areas, particularly within deep learning (such as NeRFs or NAS [17]), which really show we have so much left to learn. Exciting!<p><pre><code>  [0]: Léo Grinsztajn et al. "Why do tree-based models still outperform deep learning on tabular data?" https://arxiv.org/abs/2207.08815
  [1]: Jordan Tigani "Big Data is Dead" https://motherduck.com/blog/big-data-is-dead/
  [2]: Zongyi Li et al. "Fourier Neural Operator for Parametric Partial Differential Equations" https://arxiv.org/abs/2010.08895
  [3]: Jaideep Pathak et al. "FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators" https://arxiv.org/abs/2202.11214
  [4]: Huaiqian You et al. "Learning Deep Implicit Fourier Neural Operators with Applications to Heterogeneous Material Modeling" https://arxiv.org/abs/2203.08205
  [5]: Arthur Jacot et al. "Neural Tangent Kernel: Convergence and Generalization in Neural Networks" https://arxiv.org/abs/1806.07572
  [6]: Pedro Domingos "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine" https://arxiv.org/abs/2012.00152
  [7]: Alexander Atanasov et al. "Neural Networks as Kernel Learners: The Silent Alignment Effect" https://arxiv.org/abs/2111.00034
  [8]: Yilan Chen et al. "On the Equivalence between Neural Network and Support Vector Machine" https://arxiv.org/abs/2111.06063
  [9]: Adityanarayanan Radhakrishnan et al. "Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features" https://arxiv.org/abs/2212.13881
  [10]: Mukund Sundararajan et al. "Axiomatic Attribution for Deep Networks" https://arxiv.org/abs/1703.01365
  [11]: Pramod Mudrakarta "Did the model understand the questions?" https://arxiv.org/abs/1805.05492
  [12]: META FAIR Diplomacy Team et al. "Human-level play in the game of Diplomacy by combining language models with strategic reasoning" https://www.science.org/doi/10.1126/science.ade9097
  [13]: DeepMind et al. "Mastering the game of Go with deep neural networks and tree search" https://www.nature.com/articles/nature16961
  [14]: DeepMind et al. "Mastering the game of Go without human knowledge" https://www.nature.com/articles/nature24270
  [15]: Sebastian Borgeaud et al. "Improving language models by retrieving from trillions of tokens" https://arxiv.org/abs/2112.04426
  [16]: Umang Bhatt et al. "Explainable Machine Learning in Deployment" https://dl.acm.org/doi/10.1145/3351095.3375624
  [17]: M. F. Kasim et al. "Building high accuracy emulators for scientific simulations with deep neural architecture search" https://arxiv.org/abs/2001.08055</code></pre></p>
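To make the NTK idea in [5] concrete, here's a toy sketch of my own construction (not code from any of the cited papers): for a tiny one-hidden-layer net f(x) = Σ_j v_j·tanh(w_j·x), the empirical NTK is simply the Gram matrix of parameter gradients, K(x, x') = ∇_θ f(x) · ∇_θ f(x'):

```python
import math
import random

def empirical_ntk(xs, m=64, seed=0):
    # Toy one-hidden-layer net f(x) = sum_j v[j] * tanh(w[j] * x), with the
    # empirical NTK computed from exact parameter gradients.
    rng = random.Random(seed)
    w = [rng.gauss(0, 1) for _ in range(m)]
    v = [rng.gauss(0, 1) / math.sqrt(m) for _ in range(m)]

    def grad(x):
        # df/dw_j = v_j * x * sech^2(w_j * x);  df/dv_j = tanh(w_j * x)
        g_w = [v[j] * x / math.cosh(w[j] * x) ** 2 for j in range(m)]
        g_v = [math.tanh(w[j] * x) for j in range(m)]
        return g_w + g_v

    # K[i][k] = <grad f(x_i), grad f(x_k)>: symmetric and PSD by construction
    return [[sum(a * b for a, b in zip(grad(x), grad(y))) for y in xs] for x in xs]

K = empirical_ntk([-1.0, 0.3, 0.5, 2.0])
```

Kernel regression with K then approximates training the (wide, linearised) net, which is what makes the NTK such a useful analytical handle on neural network behaviour.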
]]></description><pubDate>Thu, 28 Mar 2024 13:59:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=39851558</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=39851558</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39851558</guid></item><item><title><![CDATA[New comment by babel_ in "The return of the frame pointers"]]></title><description><![CDATA[
<p>Telemetry is exceedingly useful, and it's basically a guaranteed boon when you operate your own systems. But telemetry isn't essential, and it's not the heart of the matter I was addressing. Again, the crux of this is consent, as an imbalance of power easily distorts the nature of consent.<p>Suppose Chrome added new telemetry, for example, like it did when WebRTC was added in Chrome 28, so we really can just track this against something we're all familiar (enough) with. When a user clicks "Update", or it auto-updated and "seamlessly" switched version in the background / between launches, well, did the user consent to the newly added telemetry?<p>Perhaps most importantly: did they even know? After all, the headline feature of Chrome 28 was Blink, not some feature that had only really been shown off in a few demos, and was still a little while away from mass adoption. No reporting on Chrome 28 that I could find from the time even mentions WebRTC, despite entire separate articles going out just based on seeing WebRTC demos! Notifications got more<p>So, capabilities to alter software like this, knowingly or unknowingly, undermine the nature of consent that many find implicit in downloading a browser, since what you download and what you end up using may be two very different things.<p>Now, let's consider a second imbalance. Did you even download Chrome? Most Android devices have it preinstalled, or some similar "open-core" browser (often a Chromium derivative). Some are even protected from being uninstalled, so you can't opt out that way, and Apple only just had to open up iOS to non-Safari-backed browsers.<p>So the notion of consent via the choice to install is easily undermined.<p>Lastly, because we really could go on all day with examples, what about when you do use it? Didn't you consent then?<p>Well, they may try to onboard you, and have you pretend to read some EULA, or just have it linked and give up the charade. 
If you don't tick the box for "I read and agree to this EULA", you don't progress. Of course, this is hardly a robust system. Enforceability aside, the moment you hand it over to someone else to look at a webpage, did they consent to the same EULA you did?<p>... Basically, all the "default" ways to consider consent are nebulous, potentially non-binding, and may be self-defeating. After all, you generally don't consent to every single line of code, every single feature, and so on; you are usually assumed to consent to the entire thing or nothing. Granularity with permissions has improved that somewhat, but there is usually still a bulk core you must accept before everything else; otherwise the software is usually kept in a non-functional state.<p>I'm not focused too specifically on Chrome here, but rather on the broad patterns of how the user consent typically assumed in software doesn't quite pan out as is often claimed. Was that telemetry the specific reason why libwebrtc was adopted by others? I'm not privy to the conversations that occurred with these decisions, but I imagine it's more one factor among many (not to mention, Pion is in/for Go, which was only 4 years old then, and the pion git repo only goes back to 2018). People were excited out of the gate, and libwebrtc being available (and C++) would have kept them in step (all had support within 2013). But, again, this really is nothing to do with the actual topic at hand, so let's not get distracted.<p>The user has no opportunity to meaningfully consent to this. Ask most people about these things, and they wouldn't even recognise the features by now (as WebRTC or whatever is ubiquitous), let alone any mechanisms they may have to control how it engages with them.<p>Yet, the onus is put on the user. 
Why do we not ask about anything/anyone else in the equation, or consider what influences the user?<p>A recent example that I think illustrates the imbalance, and how it affects and warps consent, is the snafu with a vending machine with limited facial-recognition capabilities. In other words, the vending machine had a camera, ostensibly to know when to turn on or not and save power. When this got noticed at a university, it was removed, and everyone made a huge fuss, as they had not consented to this!<p>What I'd like to put in juxtaposition with that is how, in all likelihood, this vending machine was probably being monitored by CCTV, and even if not, there is certainly CCTV at the university, and nearby, and everywhere else for that matter.<p>So what changed? The scale. CCTV everywhere does not feel like something you can, individually, do anything about; the imbalance of power is such that you have no recourse if you did not consent to it. A single vending machine? That scale and imbalance has shifted: it's now one machine, not put in place by your established security contracts, and not something ubiquitous. It's also something easily sabotaged without clear consequence (students at the university covered its cameras quite promptly upon realising), ironically, perhaps, given that this was not their own property and was potentially in clear view of CCTV; despite having all the same qualities as CCTV, the context it was embedded in was such that they took action against it.<p>This is the difference between Chrome demanding user consent and someone else asking for it. When the imbalance of power is against you, even just being asked feels like being demanded, whereas when it doesn't quite feel that way, well, users often take a chance to prevent such an imbalance forming, and so work against something that may (in the case of some telemetry) actually be in their favour. 
However, part and parcel with meeting user needs is respecting their own desires -- as some say, the customer is always right in matters of taste.<p>To re-iterate myself from before, there are other ways of getting profiling information, or anything you might relay via telemetry, that do not have to conform to the Google/Meta/Amazon/Microsoft/etc model of user consent. They choose the way they do because, to them, it's the most efficient way. At their scale, they get the benefits of ubiquitous presence and leverage of the imbalance of power, and so what you view as your system, they view as theirs, altering with impunity, backed by enough power to prevent many taking meaningful action to the contrary.<p>For the rest of us, however, that might just be the wrong way to go about it. If we're trying to avoid all the nightmares that such companies have wrought, and to do it right by one another, then the first step is to evaluate how we engage with users, what the relationship ("contract") we intend to form is, and how we might inspire mutual respect.<p>In ethical user studies, users are remunerated for their participation, and must explicitly give knowing consent, with the ability to withdraw at any time. Online, they're continually A/B tested, frequently without consent. On one hand, the user is placed in control, informed, and provided with the affordances and impunity to consent entirely according to their own will and discretion. On the other, the user is controlled, their agency taken away by the impunity of another, often without the awareness that this is ongoing, or that they might have been able to leverage consent (and often ignored even if they did, after all, it's easy to do so when you hold the power). I know which I'd rather be on the other end of, at least personally speaking.<p>So, if we want to enable telemetry, or other approaches to collaborating with users to improve our software, then we need to do just that. Collaborate. 
Rethink how we engage, respect them, respect their consent. It's not just that we can't replicate Google, but that maybe we shouldn't, maybe that approach is what's poisoned the well for others wanting to use it, and what's forcing us to try something else. Maybe not, after all, that's not for us to judge at this point, it's only with hindsight that we might truly know. Either way, I think there's some chance for people to come in, make something that actually fits with people, something that regards them as a person, not simply a user, and respects their consent. Stuff like that might start to shift the needle, not by trying to replace Google or libwebrtc or whatever and get the next billion users, but by paving a way and meeting the needs of those who need it, even if it's just a handful of customers or even just friends and family. Who knows, we might start solving some of the problems we're all complaining about yet never seem to fix. At the very least, it feels like a breath of fresh air.</p>
]]></description><pubDate>Mon, 18 Mar 2024 02:44:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=39740093</link><dc:creator>babel_</dc:creator><comments>https://news.ycombinator.com/item?id=39740093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39740093</guid></item></channel></rss>