<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: qazxcvbnm</title><link>https://news.ycombinator.com/user?id=qazxcvbnm</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 08:13:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=qazxcvbnm" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by qazxcvbnm in "Two Years of Emacs Solo"]]></title><description><![CDATA[
<p>My goodness, how thoughtful Emacs Solo is!</p>
]]></description><pubDate>Tue, 10 Mar 2026 10:23:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47321337</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=47321337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47321337</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Two Years of Emacs Solo"]]></title><description><![CDATA[
<p>The new Emacs features sound great! (We finally have native window management.)<p>I wish we could someday edit in xref buffers too, now that wgrep has landed in Emacs 30 (especially since project.el's grep goes through xref by default).<p>By the way, does anyone more informed know of any work on getting a graphical browser working in recent Emacs, now that WebKit xwidgets are dead for Emacs 30+? (I have tried EAF; it is extremely buggy on Mac.)</p>
]]></description><pubDate>Tue, 10 Mar 2026 06:44:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47319808</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=47319808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47319808</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Eschewing Zshell for Emacs Shell (2014)"]]></title><description><![CDATA[
<p>M-n, M-p. M-r for searching past commands. C-c C-n/C-c C-p for next/previous prompt. C-c C-r to scroll to focus on output.<p>Having recently switched to Emacs, my current issues are getting Emacs shell history saved properly across my other shell buffers, and getting completions from the shell itself (not Emacs) to work; I am planning to try MisTTY soon.</p>
]]></description><pubDate>Sat, 28 Feb 2026 05:27:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47190835</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=47190835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47190835</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Putting email in its place with Emacs and Mu4e"]]></title><description><![CDATA[
<p>By the way, does anyone have experience using Emacs to analyse and visualise data (think spreadsheets and charts)? I’d really like to be able to use it to view any sort of data I happen to have.</p>
]]></description><pubDate>Wed, 10 Dec 2025 11:35:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46216602</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=46216602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46216602</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Show HN: I invented a new generative model and got accepted to ICLR"]]></title><description><![CDATA[
<p>An uninformed question: if the network is fully composed of 1x1 convolutions, doesn’t that mean no information mixing between pixels occurs? Would that not imply that each pixel is independent of the others? How can that not lead to incoherent results?</p>
]]></description><pubDate>Fri, 10 Oct 2025 17:32:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45541532</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=45541532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45541532</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Still Asking: How Good Are Query Optimizers, Really? [pdf]"]]></title><description><![CDATA[
<p>Where can I find more information on the state of the art in runtime adaptation? I confess that I have not encountered such a thing in the databases I regularly use.<p>Adaptation sounds very compelling: instead of emitting a plan based on a cardinality estimate alone, emit the plan together with a range of reasonable intermediate cardinalities and expected times, interrupt the original plan when those expectations are exceeded by an order of magnitude, and switch to an alternative plan based on the newly gathered physical information. That sounds like it would be greatly beneficial. Are there concrete reasons this has not been done (e.g. cost, complexity)?</p>
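The re-planning loop described above could be sketched roughly as follows; this is a toy simulation, not any real database's machinery, and every name and threshold here is hypothetical:

```python
# Toy sketch of runtime plan adaptation: execute a plan while checking the
# actual intermediate cardinality against the optimizer's estimate, and
# fall back to an alternative plan when the estimate is off by an order
# of magnitude. All names here are illustrative.

ADAPT_FACTOR = 10  # re-plan when actual rows exceed the estimate by this factor

def run_with_adaptation(plan, fallback_plan, rows):
    """plan / fallback_plan: (name, estimated_rows, per_row_step)."""
    name, estimated, step = plan
    produced = []
    for i, row in enumerate(rows, start=1):
        if i > estimated * ADAPT_FACTOR:
            # Estimate blown: abandon this plan and restart with the
            # fallback, informed by the cardinality we just observed.
            fb_name, _, fb_step = fallback_plan
            return fb_name, [fb_step(r) for r in rows]
        produced.append(step(row))
    return name, produced

# A "hash join" plan estimated at 5 rows, with an "index nested loop" fallback.
chosen, out = run_with_adaptation(
    ("hash_join", 5, lambda r: r * 2),
    ("index_nl_join", None, lambda r: r * 2),
    list(range(100)),  # actual cardinality: 100, far beyond 5 * 10
)
print(chosen)  # prints "index_nl_join": the plan was switched mid-flight
```

The expensive part a real system must handle, which this sketch skips, is reusing the work already done by the abandoned plan rather than restarting from scratch.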
]]></description><pubDate>Wed, 03 Sep 2025 00:47:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45111024</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=45111024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45111024</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "The Synology End Game"]]></title><description><![CDATA[
<p>When things aren’t ticking perfectly for Synology, their software can be kind of weird. Sometimes after power failures, some disks get corrupted and… you simply can’t log in to the Synology UI during this time (unless you run “synobootseq --set-boot-done”, why, of course) for an unspecified number of hours.<p>Their custom software has its quirks (e.g. scp doesn’t work unless you pass the -O flag, for “security” reasons), and the quirks sometimes change after updates.</p>
]]></description><pubDate>Fri, 29 Aug 2025 12:16:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45063069</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=45063069</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45063069</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "The principles of database design, or, the Truth is out there"]]></title><description><![CDATA[
<p>I once attempted to implement a solution where ids are generated by UUIDv5 from a certain owner and the relationship of the new item to that owner; that way, users cannot generate arbitrary ids but can still predict their new ids ahead of time, which eases optimistic behaviour.</p>
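A minimal sketch of that scheme with Python's standard uuid module; the namespace and the owner/relationship naming are illustrative assumptions, not the original implementation:

```python
import uuid

# Derive an item's id deterministically from its owner and the new item's
# relationship to that owner. The client can predict the id before the
# server assigns it, but cannot choose an arbitrary one.
APP_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")  # hypothetical namespace

def item_id(owner_id: uuid.UUID, relationship: str) -> uuid.UUID:
    # UUIDv5 hashes namespace + name (SHA-1), so the same inputs always
    # yield the same id, and distinct inputs yield distinct ids in practice.
    return uuid.uuid5(APP_NAMESPACE, f"{owner_id}:{relationship}")

owner = uuid.uuid5(APP_NAMESPACE, "user:alice")
assert item_id(owner, "post:1") == item_id(owner, "post:1")  # predictable
assert item_id(owner, "post:1") != item_id(owner, "post:2")  # per-relationship
```

The server re-derives the id from the same inputs and rejects any mismatch, which is what stops users from minting arbitrary ids.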
]]></description><pubDate>Mon, 19 May 2025 06:51:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44027093</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=44027093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44027093</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "What Every Programmer Should Know About Enumerative Combinatorics"]]></title><description><![CDATA[
<p>Yeah, it’s a theoretical general connection. Once you’ve pinned down the specific structure and know how many structures to skip by virtue of knowing how to count them, it becomes a somewhat practical algorithm.</p>
]]></description><pubDate>Mon, 19 May 2025 01:07:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44025678</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=44025678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44025678</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "What Every Programmer Should Know About Enumerative Combinatorics"]]></title><description><![CDATA[
<p>For those who are not aware, knowing how to count the number of structures is (nearly) the same as knowing how to encode said structures as integers, space-optimally (though the naive general algorithm does not have good time complexity). To encode, define some ordering on the structures and enumerate them until you find yours; the index of your structure is its integer encoding. Conversely, to decode, enumerate the structures and pick the one at the index given by the integer.</p>
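The naive enumerate-until-found scheme above, made concrete for one family of structures (3-element subsets of a 6-element set); this is the slow general algorithm, shown only to make the counting/encoding correspondence tangible:

```python
from itertools import combinations

# Rank/unrank for 3-element subsets of range(6) by brute-force enumeration
# in lexicographic order. Exponential in general, but it demonstrates that
# counting C(6, 3) = 20 structures gives an encoding into exactly 0..19.

def rank(structure, n=6, k=3):
    # The index of the structure under the chosen ordering is its encoding.
    for i, c in enumerate(combinations(range(n), k)):
        if c == structure:
            return i
    raise ValueError("not a valid structure")

def unrank(index, n=6, k=3):
    # Decoding: enumerate all structures and pick the one at that index.
    for i, c in enumerate(combinations(range(n), k)):
        if i == index:
            return c
    raise ValueError("index out of range")

assert rank((0, 1, 2)) == 0       # first subset in lexicographic order
assert unrank(19) == (3, 4, 5)    # the 20th and last subset
assert all(unrank(rank(c)) == c for c in combinations(range(6), 3))
```

The "somewhat practical" version replaces the linear scan with counting: at each position, count how many structures begin with each possible choice and skip whole blocks at once, which brings encoding and decoding down to polynomial time.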
]]></description><pubDate>Sun, 18 May 2025 16:07:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44022341</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=44022341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44022341</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Show HN: Hardtime.nvim – break bad habits and master Vim motions"]]></title><description><![CDATA[
<p>A somewhat more “complete” solution that doesn’t give you hints (and thus doesn’t rely on plugin support for all of vim’s vast functionality), but conditions your instincts to get better: increase the latency of the whole terminal (c.f. <a href="https://unix.stackexchange.com/questions/778196/how-to-add-delay-to-my-terminal" rel="nofollow">https://unix.stackexchange.com/questions/778196/how-to-add-d...</a>; also see the comment there) by running the terminal session over ssh into my own machine, with a ProxyCommand that delays the connection.</p>
]]></description><pubDate>Sun, 18 May 2025 15:07:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44021931</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=44021931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44021931</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "The Fifth Kind of Optimisation"]]></title><description><![CDATA[
<p>And batching</p>
]]></description><pubDate>Sat, 05 Apr 2025 05:55:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43591189</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43591189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43591189</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "'Next-Level' Chaos Traces the True Limit of Predictability"]]></title><description><![CDATA[
<p>You're right, it's not the same. Undecidable properties are not related in general.</p>
]]></description><pubDate>Sat, 08 Mar 2025 13:43:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=43300195</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43300195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43300195</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "'Next-Level' Chaos Traces the True Limit of Predictability"]]></title><description><![CDATA[
<p>Halting (problem) is undecidable. Undecidability is not just the halting problem. Whether you go to the park on a given day is undecidable (assuming you have free will, and all such assumptions), but that you can be an oracle for this does not solve the halting problem.</p>
]]></description><pubDate>Sat, 08 Mar 2025 07:09:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43298182</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43298182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43298182</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Succinct data structures"]]></title><description><![CDATA[
<p>Note that succinct data structures may not be faster than conventional structures if your dataset fits in memory: <a href="http://www.cs.cmu.edu/~huanche1/slides/FST.pdf" rel="nofollow">http://www.cs.cmu.edu/~huanche1/slides/FST.pdf</a>. Of course, for large datasets where storage access times dominate, succinct data structures win all around. In any case, succinct trees are works of art (if I recall correctly, <a href="https://arxiv.org/pdf/1805.11255" rel="nofollow">https://arxiv.org/pdf/1805.11255</a> was a good exposition); just look at how that RMQ works!</p>
]]></description><pubDate>Thu, 06 Mar 2025 19:10:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43283971</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43283971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43283971</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Apple's Software Quality Crisis"]]></title><description><![CDATA[
<p>To be fair, searching with Spotlight has been equally slow and useless for me… Whenever I need to find a file and mistakenly use Command-F in Finder, the complete cessation of activity that inevitably results reminds me yet again to just go to my terminal and use trusty GNU find instead.</p>
]]></description><pubDate>Tue, 04 Mar 2025 04:23:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43250256</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43250256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43250256</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Vine: A programming language based on Interaction Nets"]]></title><description><![CDATA[
<p>You may be interested in my other comment. To put it simply, pure languages remove serialisation of false data dependencies, exposing data parallelism, and interaction nets remove serialisation of false control dependencies, exposing control parallelism.</p>
]]></description><pubDate>Sun, 23 Feb 2025 06:17:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43147136</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43147136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43147136</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Vine: A programming language based on Interaction Nets"]]></title><description><![CDATA[
<p>Both lambda calculus and interaction nets are confluent. That is, for a term that can be normalised (i.e. evaluated), one can obtain the same answer by performing any available action in any order (n.b. if the chosen order terminates). For example, for `A (B) (C (D))`, I can choose to first evaluate either `A (B)` or `C (D)` and the final answer will be the same. This is true in both systems (although the reduction in interaction nets has more satisfactory properties).<p>The key reason one may consider interaction nets to be more parallelisable than lambda calculus is that the key evaluation operation is global in lambda calculus, but local in interaction nets. The key evaluation operation in lambda calculus is (beta) reduction. For instance, if one evaluates a lambda term `(\n -> f n n) x`, reduction takes this to the term `f x x`. To do so, one must duplicate the entire term `x` from the argument to perform the computation, by either 1) physical duplication, or 2) keeping track of references. Both are unsatisfactory solutions with many properties hindering parallelism. As I shall explain, the term `x` may be of unbounded size or be intertwined non-locally with a large part of the control graphs of other terms.<p>If `x` is simply a ground term (i.e. a piece of data), then it seems like either duplication or keeping track of the references would be an inevitable and reasonable cost, with the usual managed-language issues of garbage collection. If one decides to solve the problem by attempting to force the argument to be a ground term, one would find the only method to be to impose eager evaluation, evaluating terms by always first evaluating the leaves of the expression, before evaluating internal nodes in the expression.
Eager evaluation can easily become unboundedly wasteful when one strives to reuse a general computation for some more specific use cases, so one may not prefer an eager evaluation doctrine.<p>However, once one evaluates in an order that is not strictly eager (e.g. lazy evaluation), the terms that one is duplicating or referencing are no longer simple pieces of data, but pieces of a (not necessarily acyclic) control graph, and any referencing logic quickly becomes very complicated. Moreover, the argument `x` could also be a function, and keeping track of references would involve keeping track of closures over different variables and scopes, which complicates the problem of sharing even further.<p>Thus, either one follows an eager evaluation order, in which most of the nodes in a term's expression tree are not available for evaluation yet, and available pieces of work for evaluation are only generated as evaluation happens, which imposes a global and somewhat strict serialised order of execution, or one deals with a big complicated shared graph, which is also inconvenient to be distributed across computational resources.<p>In contrast with lambda calculus, the key evaluation operation in interaction nets is local. Interaction nets can be seen as more low-level than lambda calculus, and both code and data are represented as graphs of nodes to be evaluated. Thus, a large term is represented as a large net, and regardless of the unbounded size of a term, in one unit of evaluation, only one node from the term's graph is involved.<p>Given a graph of some 'net' to be evaluated, one can choose any "active" node and begin evaluating right there, and the result of computation in that unit of evaluation will be guaranteed to affect only the nodes originally connected to the evaluated node, no referencing involved. 
Thus, the problem of computation becomes almost embarrassingly parallel, where workers simply pick any piece of a graph and locally add or remove from that piece of the graph.<p>This is what is meant when one refers to interaction nets being more parallelisable than lambda calculus.</p>
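The duplication cost of beta reduction described above can be measured concretely with a toy evaluator; this is a naive tree-substitution sketch (capture-unaware, not any real implementation), just to show that reducing `(\n -> f n n) x` physically copies the whole argument:

```python
# Lambda terms as nested tuples: ("var", name), ("app", f, a), ("lam", v, body).

def subst(term, var, arg):
    """Naive substitution: physically copies `arg` into every free
    occurrence of `var` (ignores variable capture, fine for this demo)."""
    kind = term[0]
    if kind == "var":
        return arg if term[1] == var else term
    if kind == "app":
        return ("app", subst(term[1], var, arg), subst(term[2], var, arg))
    # "lam": stop if the binder shadows var, else recurse into the body
    return term if term[1] == var else ("lam", term[1], subst(term[2], var, arg))

def size(term):
    if term[0] == "var":
        return 1
    if term[0] == "app":
        return 1 + size(term[1]) + size(term[2])
    return 1 + size(term[2])

# The body of (\n -> f n n): two occurrences of the bound variable n.
body = ("app", ("app", ("var", "f"), ("var", "n")), ("var", "n"))

# Build a large argument x (size 511 after 8 doublings).
big_x = ("var", "x")
for _ in range(8):
    big_x = ("app", big_x, big_x)

reduced = subst(body, "n", big_x)
# Both occurrences of n received a full copy: the result more than
# doubles the argument's size.
assert size(reduced) == 3 + 2 * size(big_x)
```

An interaction-net evaluator avoids this global copy by instead inserting a local duplicator node and spreading the copying across many independent local rewrites, which is precisely the parallelism argument made above.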
]]></description><pubDate>Sun, 23 Feb 2025 04:00:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43146386</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43146386</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43146386</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Automating the Vim Workplace (2020)"]]></title><description><![CDATA[
<p>This is very powerful. Unfortunately, as an Ex command, `!` only works on full lines. One can of course also teach vim to operate `!` over any motion (or visual selection), whether it is part of a line, a full line, or a block <a href="https://vi.stackexchange.com/a/46304/48750" rel="nofollow">https://vi.stackexchange.com/a/46304/48750</a>.</p>
]]></description><pubDate>Thu, 13 Feb 2025 10:35:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43034616</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43034616</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43034616</guid></item><item><title><![CDATA[New comment by qazxcvbnm in "Tiny Pointers"]]></title><description><![CDATA[
<p>I read the paper a few years ago and I agree that, for such an incredible algorithmic improvement, it’s not trivial to find a use case, as you still need to maintain a separate (albeit asymptotically insignificant) lookup table. When I read the paper I (mistakenly) hoped it could be used for indexing on-disk structures without ever hitting the disk for things like B-tree internal nodes. To get its sweet, sweet algorithmic complexity, up its sleeve is once again (as I recall) the age-old trick of rebuilding the structure when the size doubles, which makes it much less efficient than it sounds for most practical use cases. I suppose one good use case might be compressing database indexes, where you need to maintain a separate structure anyway and the space savings can be worth it.</p>
]]></description><pubDate>Wed, 12 Feb 2025 16:43:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43027057</link><dc:creator>qazxcvbnm</dc:creator><comments>https://news.ycombinator.com/item?id=43027057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43027057</guid></item></channel></rss>