<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mbid</title><link>https://news.ycombinator.com/user?id=mbid</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 19 Apr 2026 05:53:45 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mbid" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mbid in "The acyclic e-graph: Cranelift's mid-end optimizer"]]></title><description><![CDATA[
<p>I believe these ideas are much more mature and better explored for code gen, but similar techniques are also useful in the frontend of compilers, in the type checker. There's a blog post [1] by Niko Matsakis where he writes about adding equalities to Datalog so that Rust's trait solver can be encoded in Datalog. Instead of desugaring equality into a special binary predicate in normal Datalog, as Niko suggests, it can also be implemented by keeping track of equality with a union-find and then propagating equality through relations, eliminating now-duplicate rows recursively. The resulting system generalizes both Datalog and e-graphs, since the functionality axiom ("if f(x) = y and f(x) = z, then y = z") is a Datalog rule with equality if you phrase it in terms of the graph of functions.<p>Systems implementing this are egglog [2] (related to egg, mentioned in the article) and (self-plug, I'm the author) eqlog [3]. I've written about implementing Hindley-Milner type systems here: [4]. But I suspect that Datalog-based static analysis tools like CodeQL would also benefit from equalities/e-graphs.<p>[1] <a href="https://smallcultfollowing.com/babysteps/blog/2017/01/26/lowering-rust-traits-to-logic/" rel="nofollow">https://smallcultfollowing.com/babysteps/blog/2017/01/26/low...</a><p>[2] <a href="https://github.com/egraphs-good/egglog" rel="nofollow">https://github.com/egraphs-good/egglog</a><p>[3] <a href="https://github.com/eqlog/eqlog" rel="nofollow">https://github.com/eqlog/eqlog</a><p>[4] <a href="https://www.mbid.me/posts/type-checking-with-eqlog-polymorphism/" rel="nofollow">https://www.mbid.me/posts/type-checking-with-eqlog-polymorph...</a></p>
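<p>As a toy illustration (my own sketch, not eqlog's or egglog's actual implementation): a union-find plus the functionality axiom, realized as congruence propagation over the graph of a unary function f stored as rows (x, y) meaning f(x) = y.

```python
# Minimal sketch of Datalog-with-equality: equalities live in a union-find,
# and the functionality axiom "if f(x) = y and f(x) = z, then y = z" merges
# outputs of f on equal inputs, eliminating now-duplicate rows.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            return True
        return False

def congruence_close(f_rows, uf):
    """Merge outputs of f on equal inputs until nothing changes."""
    changed = True
    while changed:
        changed = False
        canon = {}  # canonical input -> canonical output
        for x, y in list(f_rows):
            x, y = uf.find(x), uf.find(y)
            if x in canon:
                # functionality: f(x) = y and f(x) = canon[x] force y = canon[x]
                changed |= uf.union(y, canon[x])
            else:
                canon[x] = y
        # rewrite rows to canonical representatives; duplicates collapse
        f_rows = {(uf.find(x), uf.find(y)) for x, y in f_rows}
    return f_rows

uf = UnionFind()
rows = {("a", "b"), ("a", "c")}   # f(a) = b and f(a) = c
rows = congruence_close(rows, uf)
assert uf.find("b") == uf.find("c")  # b and c were merged
assert len(rows) == 1                # the duplicate row was eliminated
```

In a real engine the same propagation runs interleaved with rule evaluation, and across all relations, not just the graph of one function.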
]]></description><pubDate>Tue, 14 Apr 2026 18:01:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47768999</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=47768999</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47768999</guid></item><item><title><![CDATA[New comment by mbid in "SQL, Homomorphisms and Constraint Satisfaction Problems"]]></title><description><![CDATA[
<p>If you're interested in the details, you might want to have a look at papers [1] or [2].<p>You can add existentials in this framework, which basically means that the lifting problems mentioned above don't need to have <i>unique</i> solutions. But as you say, the "minimal" databases are then no longer determined uniquely up to isomorphism. So the result of Datalog evaluation now depends on the order in which you apply rules.<p>If I recall correctly, [3] discusses a logic corresponding to accessible categories (Datalog + equality corresponds to locally presentable categories) which includes the theory of fields. The theory of fields involves the negation 0 != 1, so that might give you a nicer way to incorporate negation without stratification.<p>[1] <a href="https://www.mbid.me/eqlog-semantics/" rel="nofollow">https://www.mbid.me/eqlog-semantics/</a><p>[2] <a href="https://arxiv.org/abs/2205.02425" rel="nofollow">https://arxiv.org/abs/2205.02425</a><p>[3] <i>Locally presentable and accessible categories</i>, <a href="https://www.cambridge.org/core/books/locally-presentable-and-accessible-categories/94CB48295B6AF097FC232313A57BDE17" rel="nofollow">https://www.cambridge.org/core/books/locally-presentable-and...</a></p>
]]></description><pubDate>Thu, 21 Nov 2024 12:03:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42203479</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=42203479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42203479</guid></item><item><title><![CDATA[New comment by mbid in "SQL, Homomorphisms and Constraint Satisfaction Problems"]]></title><description><![CDATA[
<p>I actually started working on Eqlog because I wanted to use it to implement a type checker. You might want to skim the posts in my series on implementing a Hindley-Milner type system using Eqlog, starting here [1]. The meat is in posts 3-5.<p>The type checker of Eqlog is mostly implemented in Eqlog itself [2]. The general idea is that your parser populates a database with syntax nodes, which are represented as `...Node` types in the Eqlog program at [2], and then you propagate type information with Datalog/Eqlog evaluation. Afterwards, you check whether the database contains certain patterns that you want to rule out, e.g. a variable that doesn't have a type [3].<p>There are still some unsolved problems if you're interested in writing the whole type checker in Datalog. For example, variable lookup requires quadratic memory when implemented in Datalog. I mention this and a possible solution at [4]. However, Datalog as is can probably still be useful for some subtasks during type checking. For example, I believe the Rust compiler uses Datalog in some parts of its type checker. Reach out via e.g. github to mbid@ if you'd like to discuss in more detail.<p>Regarding optimization, you probably want to talk to somebody working on egglog; they have a dedicated Zulip [5]. I'd imagine that for prql you want to encode the algebraic rules of pipeline transformations, e.g. associativity of filter over append. Given the query AST, eqlog or egglog would give you all equivalent ways to write the query according to your rules. You'd then select the representation you estimate to be the most performant, based on a score you assign to each (sub)expression.<p>[1] <a href="https://www.mbid.me/posts/type-checking-with-eqlog-parsing/" rel="nofollow">https://www.mbid.me/posts/type-checking-with-eqlog-parsing/</a><p>[2] <a href="https://github.com/eqlog/eqlog/blob/9efb4d3cb3d9b024d681401b7fe46bb9f37fac02/eqlog-eqlog/src/eqlog.eqlog">https://github.com/eqlog/eqlog/blob/9efb4d3cb3d9b024d681401b...</a><p>[3] <a href="https://github.com/eqlog/eqlog/blob/9efb4d3cb3d9b024d681401b7fe46bb9f37fac02/eqlog/src/semantics/mod.rs#L235-L256">https://github.com/eqlog/eqlog/blob/9efb4d3cb3d9b024d681401b...</a><p>[4] <a href="https://www.mbid.me/posts/dependent-types-for-datalog/#morphisms" rel="nofollow">https://www.mbid.me/posts/dependent-types-for-datalog/#morph...</a><p>[5] <a href="https://egraphs.zulipchat.com" rel="nofollow">https://egraphs.zulipchat.com</a></p>
]]></description><pubDate>Thu, 21 Nov 2024 11:49:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=42203381</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=42203381</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42203381</guid></item><item><title><![CDATA[New comment by mbid in "SQL, Homomorphisms and Constraint Satisfaction Problems"]]></title><description><![CDATA[
<p>The post mentions the idea that querying a database D can be understood algebraically as enumerating all morphisms Q -> D, where Q is the "classifying" database of the query, i.e. a minimal database instance that admits a single "generic" result of the query. You can use this to give a neat formulation of Datalog evaluation. A Datalog rule then corresponds to a morphism P -> H, where P is the classifying database instance of the rule body and H is the classifying database instance for matches of both body and head. For example, for the transitivity rule<p><pre><code>  edge(x, z) :- edge(x, y), edge(y, z).
</code></pre>
you'd take for P the database instance containing the two rows (a_1, a_2) and (a_2, a_3), and for H the database instance that additionally contains (a_1, a_3). Now saying that a database D satisfies this rule means that every morphism P -> D (i.e., every match of the premise of the rule) can be completed to a commuting diagram<p><pre><code>  P --> D
  |    ^
  |   /
  ⌄  /
  H
</code></pre>
where the additional map is the arrow H -> D, which corresponds to a match of both body and head.<p>This kind of phenomenon is known in category theory as a "lifting property", and there's rich theory around it. For example, you can show in great generality that there's always a "free" way to add data to a database D so that it satisfies the lifting property (the orthogonal reflection construction/the small object argument). Those are the theoretical underpinnings of the Datalog engine I'm sometimes working on [1], and there they allow you to prove that Datalog evaluation is also well-defined if you allow adjoining new elements during evaluation in a controlled way. I believe the author of this post is involved in the egglog project [2], which might have similar features.<p>[1] <a href="https://github.com/eqlog/eqlog">https://github.com/eqlog/eqlog</a>
[2] <a href="https://github.com/egraphs-good/egglog">https://github.com/egraphs-good/egglog</a></p>
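<p>As a toy illustration (my own sketch, with the rule morphism hard-coded): freely completing a database D so that every match of the body P extends to a match of the body-plus-head instance is just naive evaluation of the transitivity rule.

```python
# The "free" lifting completion for the transitivity rule, as a chase:
# P = {(a1,a2),(a2,a3)} includes into the body-plus-head instance
# {(a1,a2),(a2,a3),(a1,a3)}, and we add rows to D until every
# match of P lifts along that inclusion.

def saturate_transitivity(edges):
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        # each pair of rows matching P = {(x, y), (y, z)} demands the row (x, z)
        for (x, y) in list(edges):
            for (y2, z) in list(edges):
                if y == y2 and (x, z) not in edges:
                    edges.add((x, z))
                    changed = True
    return edges

closure = saturate_transitivity({(1, 2), (2, 3), (3, 4)})
assert (1, 4) in closure   # needed two lifting steps
assert len(closure) == 6   # all pairs (i, j) with i < j
```

The general construction works rule by rule in the same way; the category theory guarantees that the result is independent of the order in which matches are completed (for plain Datalog rules, i.e. without existentials).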
]]></description><pubDate>Wed, 20 Nov 2024 19:11:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=42197125</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=42197125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42197125</guid></item><item><title><![CDATA[Stop using REST for state synchronization (2024)]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.mbid.me/posts/stop-using-rest-for-state-synchronization/">https://www.mbid.me/posts/stop-using-rest-for-state-synchronization/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41617793">https://news.ycombinator.com/item?id=41617793</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 22 Sep 2024 15:38:30 +0000</pubDate><link>https://www.mbid.me/posts/stop-using-rest-for-state-synchronization/</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=41617793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41617793</guid></item><item><title><![CDATA[New comment by mbid in "I don't always use LaTeX, but when I do, I compile to HTML (2013)"]]></title><description><![CDATA[
<p>For me, the main problem with most tools that render to HTML was that they don't support all the math typesetting libraries that latex supports. I used to work with category theory, where it's common to use the tikz-cd library to typeset commutative diagrams. tikz-cd is based on tikz, which is usually not supported for HTML output.<p>But apart from math typesetting, my latex documents were usually very simple: They just used sections, paragraphs, some theorem environments and references to those, perhaps similar to what the Stacks project uses [3]. Simple latex such as this corresponds relatively directly to HTML (except for the math formulas, of course). But many latex to html tools try to implement a full tex engine, which I believe means that they lower the high-level constructs to something more low-level. This results in very complicated HTML documents from even simple latex input documents.<p>So what I needed was a tool that can (1) render all math that pdflatex can render, but apart from math only needs to (2) support a very limited set of other latex features. In a hacky way, (1) can be accomplished by simply using pdflatex to render each formula of a latex document in isolation to a separate pdf, then converting this pdf to svg, and then including this svg in the output HTML in the appropriate position. And (2) is simply a matter of parsing this limited subset of latex. I've prototyped a tool like that here [1]. An example output can be found here [2].<p>Of course, SVGs are not exactly great for accessibility. But my understanding is that many blind mathematicians are very good at reading latex source code, so perhaps an SVG with alt text set to the latex source for that image is already pretty good.<p>[1] <a href="https://github.com/mbid/latex-to-html">https://github.com/mbid/latex-to-html</a><p>[2] <a href="https://www.mbid.me/lcc-model/" rel="nofollow">https://www.mbid.me/lcc-model/</a><p>[3] <a href="https://stacks.math.columbia.edu/" rel="nofollow">https://stacks.math.columbia.edu/</a></p>
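<p>A hedged sketch of that per-formula pipeline; the tool names (pdflatex, pdftocairo for the PDF-to-SVG step) and the standalone document class are my assumptions about a typical setup, not necessarily what latex-to-html does.

```python
# Render one formula in a standalone latex document, then convert PDF -> SVG.
# Assumes pdflatex and pdftocairo (from poppler-utils) are installed.

import subprocess
from pathlib import Path

STANDALONE = r"""\documentclass[border=1pt]{standalone}
\usepackage{amsmath}
\begin{document}
$%s$
\end{document}
"""

def formula_to_svg_cmds(formula, workdir):
    """Return the tex source and the commands to run, without executing them."""
    tex = STANDALONE % formula
    d = Path(workdir)
    return tex, [
        ["pdflatex", "-interaction=batchmode",
         "-output-directory", str(d), str(d / "formula.tex")],
        ["pdftocairo", "-svg", str(d / "formula.pdf"), str(d / "formula.svg")],
    ]

def render(formula, workdir):
    """Actually run the pipeline (needs the external tools installed)."""
    tex, cmds = formula_to_svg_cmds(formula, workdir)
    Path(workdir).mkdir(parents=True, exist_ok=True)
    (Path(workdir) / "formula.tex").write_text(tex)
    for cmd in cmds:
        subprocess.run(cmd, check=True)

tex, cmds = formula_to_svg_cmds(r"\int_0^1 x^2\,dx", "/tmp/formula-demo")
assert cmds[0][0] == "pdflatex" and cmds[1][0] == "pdftocairo"
assert r"\int_0^1" in tex
```

For tikz-cd you'd add the corresponding \usepackage line to the preamble; the point is that anything pdflatex can typeset comes out correctly, since pdflatex itself does the typesetting.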
]]></description><pubDate>Fri, 26 Jan 2024 10:18:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=39140934</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=39140934</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39140934</guid></item><item><title><![CDATA[New comment by mbid in "MathJax: Beautiful and accessible math in all browsers"]]></title><description><![CDATA[
<p>If you're going to send out math as SVGs anyway, you can also just use your normal latex to PDF renderer (e.g. pdflatex) on each formula, and then convert the output PDFs to SVGs. That way, you get the same output you'd get with latex, and you can also use latex packages that aren't supported by MathJax (e.g. tikz-cd). I've prototyped a latex to html converter [1] based on that approach, but it's probably not ready for serious use. Here's an example: <a href="https://www.mbid.me/lcc-model/" rel="nofollow noreferrer">https://www.mbid.me/lcc-model/</a><p>[1] <a href="https://github.com/mbid/latex-to-html">https://github.com/mbid/latex-to-html</a></p>
]]></description><pubDate>Sat, 14 Oct 2023 09:01:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=37879074</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=37879074</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37879074</guid></item><item><title><![CDATA[New comment by mbid in "MathJax: Beautiful and accessible math in all browsers"]]></title><description><![CDATA[
<p>> using KaTeX [...] switched to server-side rendering with MathJax<p>I've been meaning to look into KaTeX. Could you elaborate on why you switched away from it? KaTeX appears to support server-side rendering already, in the sense that it generates plain HTML.</p>
]]></description><pubDate>Sat, 14 Oct 2023 08:50:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=37879043</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=37879043</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37879043</guid></item><item><title><![CDATA[New comment by mbid in "Things Unix can do atomically (2010)"]]></title><description><![CDATA[
<p>>what does he mean when he says rename isn't atomic if there's a crash?<p>Not sure. One of the papers he cites [1] has this to say about rename atomicity:<p>><i>Directory operations such as rename() and link() are seemingly atomic on all file systems that use techniques like journaling or copy-on-write for consistency</i><p>So it seems to depend on the file system. But the authors claim that not just link but also rename is atomic without qualification, which appears to be false, so I'm not sure how trustworthy this is.<p>There's a stackoverflow comment [2] that suggests that ext4 guarantees the kind of sequential atomicity property (atomic link followed by atomic remove) I mentioned:<p>><i>Kernel implementation depends on filesystem but here's implementation of Linux ext4 filesystem: elixir.bootlin.com/linux/v5.19.3/source/fs/ext4/namei.c#L3887 – it seems to first mark the inode as dirty, then create new link and after that delete the old link. If the kernel crashes in the middle, the end result could be two links and dirty mark. I would guess (but didn't investigate if this true) that journal recovery during the next mount would fix the end result to match atomic behavior of leaving either the old or new state depending on exact crash location.</i><p>In general, I find it rather weird how little authoritative documentation there is about whether or not this property holds given how important it is.<p>[1] <a href="https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-pillai.pdf" rel="nofollow noreferrer">https://www.usenix.org/system/files/conference/osdi14/osdi14...</a><p>[2] <a href="https://stackoverflow.com/questions/7054844/is-rename-atomic#comment129720402_7054851" rel="nofollow noreferrer">https://stackoverflow.com/questions/7054844/is-rename-atomic...</a></p>
]]></description><pubDate>Mon, 14 Aug 2023 08:31:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=37118733</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=37118733</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37118733</guid></item><item><title><![CDATA[New comment by mbid in "Things Unix can do atomically (2010)"]]></title><description><![CDATA[
<p>The rename system call is not quite atomic. From <a href="https://linux.die.net/man/2/rename" rel="nofollow noreferrer">https://linux.die.net/man/2/rename</a>:<p>> <i>However, when overwriting there will probably be a window in which both oldpath and newpath refer to the file being renamed.</i><p>A better way to think about rename is that it consists of two atomic operations in sequence: (1) an atomic link call to make the file available under the new name, and then (2) an atomic remove (unlink) operation to delete the old name. Thus, it's possible to observe the state after (1) but before (2).</p>
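<p>A runnable illustration of the two-step picture (this is the user-space emulation, not how the kernel implements rename, and unlike rename it fails if the new name already exists):

```python
# Emulate rename as link-then-unlink and observe the intermediate state
# in which BOTH names refer to the same file.

import os
import tempfile

def link_then_unlink_rename(old, new, observe=lambda: None):
    os.link(old, new)   # (1) atomically make the file visible under `new`
    observe()           # a crash here would leave both names in place
    os.unlink(old)      # (2) atomically remove the old name

d = tempfile.mkdtemp()
old, new = os.path.join(d, "a"), os.path.join(d, "b")
with open(old, "w") as f:
    f.write("data")

seen = {}
link_then_unlink_rename(
    old, new,
    observe=lambda: seen.update(
        both=os.path.exists(old) and os.path.exists(new)))

assert seen["both"]                                   # intermediate state observed
assert os.path.exists(new) and not os.path.exists(old)  # final state as after rename
```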
]]></description><pubDate>Mon, 14 Aug 2023 06:12:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=37118024</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=37118024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37118024</guid></item><item><title><![CDATA[New comment by mbid in "The AST Typing Problem"]]></title><description><![CDATA[
<p>This is also the natural solution when you're using Datalog to compute with the AST: Datalog operates on relations/tables, so associating types with AST nodes can be accomplished using a two-column table, with one column for the ID representing an AST node and one for the type associated with it. I've recently written about this approach in the series starting here: <a href="https://www.mbid.me/posts/type-checking-with-eqlog-parsing/" rel="nofollow noreferrer">https://www.mbid.me/posts/type-checking-with-eqlog-parsing/</a></p>
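<p>A small sketch of that relational encoding (my own illustration, not Eqlog code): types live in a two-column (node id, type) table rather than in fields on the AST nodes, and inference rules become joins over that table. The toy language with literals and addition is hypothetical.

```python
# AST for `1 + 2` as flat rows: node id -> (constructor, child ids)
nodes = {0: ("num", ()), 1: ("num", ()), 2: ("add", (0, 1))}

node_type = set()  # the two-column relation: (node id, type)

# rule 1: literals are ints
for nid, (kind, _) in nodes.items():
    if kind == "num":
        node_type.add((nid, "int"))

# rule 2: add(l, r) has type int when both children do (a join on node_type)
for nid, (kind, children) in nodes.items():
    if kind == "add":
        l, r = children
        if (l, "int") in node_type and (r, "int") in node_type:
            node_type.add((nid, "int"))

assert (2, "int") in node_type  # the whole expression was typed
```

A real Datalog engine would run rule 2 to a fixpoint instead of in one pass, so that nesting depth doesn't matter.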
]]></description><pubDate>Mon, 14 Aug 2023 05:53:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=37117913</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=37117913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37117913</guid></item><item><title><![CDATA[New comment by mbid in "Show HN: Dependently typed language for proofs that you can implement in one day"]]></title><description><![CDATA[
<p><i>>Pompom is an attractive implementation of an extensional (!) dependently typed language</i><p><i>>Pompom provides [...] a strong normalization system</i><p>How is this possible? Extensional dependent type theory has undecidable term/type equality, so there cannot be a strong normalization algorithm.</p>
]]></description><pubDate>Sat, 02 Oct 2021 16:08:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=28729689</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=28729689</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28729689</guid></item><item><title><![CDATA[New comment by mbid in "Why are monoidal categories interesting?"]]></title><description><![CDATA[
<p>Just to add to the confusion: Monoidal categories are the appropriate notion of "monoid object" in the category of categories; unit and associativity law are replaced by coherent natural isomorphisms (see "categorification"). So that's where the "monoidal" comes from.</p>
]]></description><pubDate>Wed, 27 Mar 2019 18:17:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=19504124</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=19504124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=19504124</guid></item><item><title><![CDATA[New comment by mbid in "If monads are the solution, what is the problem?"]]></title><description><![CDATA[
<p>I didn't mean to criticize all axiomatization, only the concrete axiomatizations arrived at by PL research. In my opinion, one should start with a thorough understanding of the problem domain, and then try to come up with a syntax that allows expression of your intuition as faithfully as possible. Form follows function. Instead, I feel like PL research is often too focused on the concrete syntax considered, thus function follows form. Case in point: Building models for the syntax at hand is often an afterthought, done only to prove consistency. I'd argue that one should start with as precise an understanding of the intended models as possible, and then come up with an axiomatization (thus, syntax) for the intended semantics.<p>For example, the notion of elementary topos was invented because its creators wanted to capture the way Grothendieck toposes behave like constructive sets. This I find very useful, as I do the axiomatization of elementary toposes. On the other hand, Martin-Löf type theory didn't have a formal semantics at first, then an erroneous one, and finally ~20 years later a kind of acceptable one. And its category of models is... not really interesting. Except for categories of assemblies, I don't know of a single model of ML type theory that's not also an elementary topos. And the interesting thing about assemblies is that they can be turned into objects of the associated topos... so yeah.</p>
]]></description><pubDate>Mon, 30 Jul 2018 19:42:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=17647351</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=17647351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17647351</guid></item><item><title><![CDATA[New comment by mbid in "If monads are the solution, what is the problem?"]]></title><description><![CDATA[
<p>Kind of, from what I (complete physics noob) hear.</p>
]]></description><pubDate>Mon, 30 Jul 2018 19:14:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=17647117</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=17647117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17647117</guid></item><item><title><![CDATA[New comment by mbid in "If monads are the solution, what is the problem?"]]></title><description><![CDATA[
<p>You're describing my pet peeve with research in programming languages, or more generally use of abstractions in CS. What you're writing makes sense: Understand the structure of your problem, then think of similar problems you've seen before, finally try to abstract away details that are irrelevant for the solutions in each case. If you're lucky, all of these problems are an instance of one general one, and need to be solved only once. In your example, it's that IO and [] are both monads and thus share some structure.<p>What's often happening in PL theory, though, is that people start off with axioms/syntax and then mutilate the actual problem until it fits the syntax (still badly). That's how you get ridiculous stuff like "everything is an object" or "everything is a file". In PL theory, people will usually first make up some syntax/theory and then search for models. If physicists worked the same way, they'd write down random formulae and would then go out to find phenomena described by them. You'd probably read this comment on a cave wall.</p>
]]></description><pubDate>Mon, 30 Jul 2018 16:56:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=17645926</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=17645926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17645926</guid></item><item><title><![CDATA[New comment by mbid in "Geosharded Recommendations with Hilbert Curve at Tinder"]]></title><description><![CDATA[
<p><i>However, using a Hilbert Curve for sharding doesn't seem like the best approach.</i><p>Yes, that's also what I thought. Searching for "same size k-means" yields a simple postprocessing step to even out the clusters produced by the usual k-means algorithm.<p>EDIT: k-means is adapted directly here: <a href="https://elki-project.github.io/tutorial/same-size_k_means" rel="nofollow">https://elki-project.github.io/tutorial/same-size_k_means</a></p>
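<p>A rough sketch of the balancing idea in that spirit (my own simplification, not the ELKI code): with fixed centroids, assign each point to its nearest cluster, but cap every cluster at ceil(n/k) and let the points with the most to lose choose first.

```python
# Capacity-constrained assignment step for same-size k-means:
# points are ordered by how much worse their second-best cluster is,
# then greedily placed in the nearest cluster that still has room.

import math

def balanced_assign(points, centroids):
    k = len(centroids)
    cap = math.ceil(len(points) / k)
    dist = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))

    def urgency(p):  # gap between best and second-best cluster
        ds = sorted(dist(p, c) for c in centroids)
        return ds[1] - ds[0]

    assignment, sizes = {}, [0] * k
    for i in sorted(range(len(points)), key=lambda i: -urgency(points[i])):
        for j in sorted(range(k), key=lambda j: dist(points[i], centroids[j])):
            if sizes[j] < cap:
                assignment[i] = j
                sizes[j] += 1
                break
    return assignment

pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
cents = [(0, 0.5), (10, 0.5)]
a = balanced_assign(pts, cents)
assert sorted(a.values()) == [0, 0, 1, 1]  # two points per cluster
```

The full algorithm from the linked tutorial additionally iterates swap-based refinement after each such assignment; this only shows the capacity-respecting assignment.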
]]></description><pubDate>Fri, 20 Jul 2018 00:22:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=17571500</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=17571500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17571500</guid></item><item><title><![CDATA[New comment by mbid in "Ask HN: 2018 Summer Reading List?"]]></title><description><![CDATA[
<p><i>Lost in Math</i> by Sabine Hossenfelder, which was published just a few months ago. It deals with how foundational theoretical physics has been led astray by aesthetic ideals, i.e. "lost in Math", because of the lack of new empirical data during the last decades.<p>I read it because I see some parallels to programming/verification language design, where research is usually justified aesthetically because empirical data is hard to come by. It appears doubtful that the book will help you career-wise if you're an academic stuck in a field with similar problems, but it is consoling to know you're not alone.</p>
]]></description><pubDate>Fri, 13 Jul 2018 07:33:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=17521126</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=17521126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17521126</guid></item><item><title><![CDATA[New comment by mbid in "Logic is Metaphysics (2011) [pdf]"]]></title><description><![CDATA[
<p>So you think the key for understanding constructive logic is to understand some peculiar syntax for a fragment of it. Ridiculous, but sadly quite typical among type theory cultists.</p>
]]></description><pubDate>Wed, 06 Jun 2018 22:59:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=17251847</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=17251847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17251847</guid></item><item><title><![CDATA[New comment by mbid in "Thousands of AI researchers are boycotting the new Nature journal"]]></title><description><![CDATA[
<p><i>> In my own field of machine learning, itself an academic descendant of Gauss’s pioneering work, [...].</i><p>Yes, and I'm a descendant of Julius Caesar, Confucius and Charlemagne. I won't tell you this when introducing myself, though, because so is everybody else.<p>It wouldn't have occurred to me that AI researchers have such inferiority complexes that they need to descend to such name-dropping, but I guess I was wrong.</p>
]]></description><pubDate>Tue, 29 May 2018 19:21:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=17182176</link><dc:creator>mbid</dc:creator><comments>https://news.ycombinator.com/item?id=17182176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17182176</guid></item></channel></rss>