<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: michelpp</title><link>https://news.ycombinator.com/user?id=michelpp</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 19:58:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=michelpp" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by michelpp in "Mystery Science Theater 3000: The Definitive Oral History of a TV Masterpiece"]]></title><description><![CDATA[
<p>Cinematic Titanic is also great, with a mix of the original MST3K cast doing studio and live shows.  The live show of The Alien Factor is hilarious!<p><a href="https://en.wikipedia.org/wiki/Cinematic_Titanic" rel="nofollow">https://en.wikipedia.org/wiki/Cinematic_Titanic</a></p>
]]></description><pubDate>Sun, 14 Dec 2025 17:43:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46265010</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=46265010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46265010</guid></item><item><title><![CDATA[New comment by michelpp in "The CRDT Dictionary: A Field Guide to Conflict-Free Replicated Data Types"]]></title><description><![CDATA[
<p>Automerge is an excellent library with a great API, not just in Rust but also in JavaScript and C.<p>> All you need for the backend is key-value storage with range/prefix queries;<p>This is true. I was able to quickly put together a Redis Automerge library that supports the full API, including pub/sub of changes to subscribers, for a full persistent sync server [0].  I was surprised how quickly it came together.  With some LLM assistance (I'm not a frontend specialist) I also built a usable web demo of synchronized documents across multiple browsers, using the Webdis [1] websocket support over pub/sub channels.<p>[0] <a href="https://github.com/michelp/redis-automerge" rel="nofollow">https://github.com/michelp/redis-automerge</a><p>[1] <a href="https://webd.is/" rel="nofollow">https://webd.is/</a></p>
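To illustrate the quoted claim — that a CRDT sync backend only needs key-value storage with range/prefix queries — here's a toy in-memory Python sketch. This is not the redis-automerge code; the `PrefixKV` class and the `changes/&lt;doc&gt;/&lt;seq&gt;` key scheme are made up for illustration:

```python
from bisect import bisect_left, bisect_right

class PrefixKV:
    """Minimal key-value store with prefix range queries, keys kept sorted."""
    def __init__(self):
        self._keys = []   # sorted list of keys
        self._data = {}   # key -> value (e.g. an opaque CRDT change blob)

    def put(self, key: str, value: bytes):
        if key not in self._data:
            self._keys.insert(bisect_left(self._keys, key), key)
        self._data[key] = value

    def prefix(self, prefix: str):
        """All (key, value) pairs whose key starts with `prefix`, in key order."""
        lo = bisect_left(self._keys, prefix)
        hi = bisect_right(self._keys, prefix + "\xff")
        return [(k, self._data[k]) for k in self._keys[lo:hi]]

# Store per-document change blobs under "changes/<doc>/<seq>"; a sync
# server replays a document by scanning its prefix in order.
kv = PrefixKV()
kv.put("changes/doc1/0001", b"<change-1>")
kv.put("changes/doc1/0002", b"<change-2>")
kv.put("changes/doc2/0001", b"<other-doc>")
print([k for k, _ in kv.prefix("changes/doc1/")])
```

A real backend would swap `PrefixKV` for Redis (e.g. `SCAN` with a `MATCH` pattern, or a sorted set) and add pub/sub for live subscribers.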
]]></description><pubDate>Sat, 29 Nov 2025 20:37:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46090572</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=46090572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46090572</guid></item><item><title><![CDATA[New comment by michelpp in "What nicotine does to your brain"]]></title><description><![CDATA[
<p>I caught covid for the first time in January 2024.  The illness itself wasn't that bad for me, like a common cold, but the aftereffects lingered for months.  My ability to smell and taste came back within a few weeks, but the mental brain fog would not let go.  I would sleep over 12 hours a night and still be tired.  According to my fitness tracker, my daily step and energy burn counts were cut in half.  It was so bad I forgot my own phone number at one point, and my gmail password.  Looking at a screen of code was impossible; I couldn't focus for more than a few seconds.  My friends commented on the noticeable changes in my acuity and behavior.<p>Based on some online anecdotal evidence, I decided to try nicotine "therapy".  I bought 4mg smoking cessation mints, cut them in half with a pill cutter, and took 10-12 2mg doses per day at roughly one hour intervals.  The effect was immediate, and the brain fog lifted in less than a week.  It was like coming out of a long dream, or like I had been stoned for six months and then suddenly I was sober again.  My fitness stats now exceed where I was before I got sick.<p>This is just my own anecdotal experience, and there have definitely been some downsides.  The mints are about $50/month.  My dosage has ticked up a bit and I'm certainly addicted; at least once a day I take a full mint instead of a half for an extra kick.  I'd like to taper off, but if I do, I'm not sure how I'd tell whether any effects are withdrawal or the covid brain fog coming back.  I have a light caffeine habit (2 cups every morning) and I don't see the mints being any more harmful than the coffee, so I think I'm just going to stick with it.</p>
]]></description><pubDate>Wed, 19 Nov 2025 09:45:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45977660</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45977660</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45977660</guid></item><item><title><![CDATA[New comment by michelpp in "A graph explorer of the Epstein emails"]]></title><description><![CDATA[
<p>There are open source projects moving toward this scale.  The GraphBLAS, for example, uses an algebraic formulation over compressed sparse matrix representations of graphs that is designed to be portable across many architectures, including CUDA.  It would be nice if companies like Nvidia could get more behind our efforts, as our main bottleneck is development hardware access.<p>To plug my project: I've wrapped the SuiteSparse GraphBLAS library in a Postgres extension [1] that fluidly blends algebraic graph theory with the relational model.  The main flow is to use SQL to structure complex queries for starting points, then use the GraphBLAS to flow through the graph to the endpoints, then join back to tables to get the relevant metadata.  On cheap Hetzner hardware (AMD EPYC, 64 cores) we've achieved 7 billion edges per second BFS over the largest graphs in the SuiteSparse collection (~10B edges).  With our CUDA support we hope to push that kind of performance to graphs with trillions of edges.<p>[1] <a href="https://github.com/OneSparse/OneSparse" rel="nofollow">https://github.com/OneSparse/OneSparse</a></p>
]]></description><pubDate>Tue, 18 Nov 2025 00:19:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45960002</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45960002</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45960002</guid></item><item><title><![CDATA[Show HN: Redis-Automerge: CRDT Documents for Redis]]></title><description><![CDATA[
<p>redis-automerge is a Redis module that adds real-time collaboration to JSON-like documents using the Automerge CRDT library.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45733893">https://news.ycombinator.com/item?id=45733893</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 28 Oct 2025 15:12:00 +0000</pubDate><link>https://github.com/michelp/redis-automerge</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45733893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45733893</guid></item><item><title><![CDATA[New comment by michelpp in "Lace: A New Kind of Cellular Automata Where Links Matter"]]></title><description><![CDATA[
<p>An interesting approach to characterizing graph topology, both locally and globally, is the graphlet transform.  There is some interesting research happening around these types of topology signals; here's one project that takes a very algebraic approach:<p><a href="https://github.com/fcdimitr/fglt" rel="nofollow">https://github.com/fcdimitr/fglt</a></p>
]]></description><pubDate>Fri, 17 Oct 2025 00:56:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45612347</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45612347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45612347</guid></item><item><title><![CDATA[New comment by michelpp in "UUIDv47: Store UUIDv7 in DB, emit UUIDv4 outside (SipHash-masked timestamp)"]]></title><description><![CDATA[
<p>UUIDv8 does not contain a timestamp or counter unless you put them in there; it only specifies a version and variant field.  It's a very broad format that lets you pack in whatever bits you want.<p>This library converts a UUIDv7 into a random-looking but deterministic UUIDv4, recoverable with a shared key.  For all intents and purposes the external view is a UUIDv4; the internal representation is a v7, which has better index block locality and orderability.</p>
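For context on why v7 orders so well in an index, here's an illustrative sketch of the UUIDv7 bit layout from RFC 9562: the leading 48 bits are a Unix millisecond timestamp, so byte order follows time order.  This is not this library's code; the `uuid7_from_ms` helper is invented for the example:

```python
import os
import uuid

def uuid7_from_ms(ts_ms: int) -> uuid.UUID:
    """Build a UUIDv7: 48-bit ms timestamp, then version/variant + random bits."""
    b = bytearray(ts_ms.to_bytes(6, "big") + os.urandom(10))
    b[6] = (b[6] & 0x0F) | 0x70   # set version nibble to 7
    b[8] = (b[8] & 0x3F) | 0x80   # set RFC 4122 variant bits
    return uuid.UUID(bytes=bytes(b))

a = uuid7_from_ms(1_700_000_000_000)
b = uuid7_from_ms(1_700_000_000_001)
assert a.version == 7
assert a.bytes < b.bytes   # later timestamp sorts later, byte-wise
```

Because inserts arrive in nearly sorted key order, a B-tree index sees mostly right-leaning page appends instead of random page writes, which is exactly the locality win described above.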
]]></description><pubDate>Wed, 17 Sep 2025 20:53:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45281285</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45281285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45281285</guid></item><item><title><![CDATA[New comment by michelpp in "UUIDv47: Store UUIDv7 in DB, emit UUIDv4 outside (SipHash-masked timestamp)"]]></title><description><![CDATA[
<p>That still exposes the timestamp, and the shift just drops precision, so I'm not sure what you're going for here.</p>
]]></description><pubDate>Wed, 17 Sep 2025 19:21:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45280246</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45280246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45280246</guid></item><item><title><![CDATA[New comment by michelpp in "UUIDv47: Store UUIDv7 in DB, emit UUIDv4 outside (SipHash-masked timestamp)"]]></title><description><![CDATA[
<p>Because then you leak the timestamp.  The idea is to present what look like random v4 UUIDs externally, while storing them internally as v7, which greatly improves locality and index usability.  The conversion back and forth happens with a secret key.</p>
]]></description><pubDate>Wed, 17 Sep 2025 19:18:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45280204</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45280204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45280204</guid></item><item><title><![CDATA[New comment by michelpp in "Adjacency Matrix and std:mdspan, C++23"]]></title><description><![CDATA[
<p>You're right, I did read the article before commenting, but I see your point that I didn't completely understand the intent.</p>
]]></description><pubDate>Thu, 11 Sep 2025 23:17:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45217098</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45217098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45217098</guid></item><item><title><![CDATA[New comment by michelpp in "Adjacency Matrix and std:mdspan, C++23"]]></title><description><![CDATA[
<p>Fair enough, showing my age with "impossible".<p>But it's still true that dense storage grows not linearly but quadratically in the number of nodes.</p>
]]></description><pubDate>Thu, 11 Sep 2025 22:15:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45216662</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45216662</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45216662</guid></item><item><title><![CDATA[New comment by michelpp in "Adjacency Matrix and std:mdspan, C++23"]]></title><description><![CDATA[
<p>It's an interesting exploration of ideas, but there are some issues with this article.  Worth noting that it does describe its approach as "simple and naive", so take my comments below as corrections and/or pointers into the practical and complex issues on this topic.<p>- The article says adjacency matrices are "usually dense", but that's not true at all; most graphs are sparse to very sparse.  In a social network with billions of people, the average out degree might be 100.  The internet is another example of a very sparse graph: billions of nodes, but most nodes have only one or two direct connections.<p>- Storing a dense matrix means this can only work with very small graphs; a graph with one million nodes would require one million squared memory elements, which is not practical.<p>- Most of the elements in the matrix would be "zero", but you're still storing them, and when you do matrix multiplication (one step in a BFS across the graph) you're still wasting energy moving, caching, and multiplying/adding mostly zeros.  It's very inefficient.<p>- Minor nit: it says the diagonal is empty because nodes are already connected to themselves.  That isn't correct in theory; self edges are definitely a thing.  There's a reason the main diagonal is called "the identity".<p>- Not every graph algebra uses the numeric "zero" to mean zero; for tropical algebras (min/max) the additive identity is positive/negative infinity.  Zero is a valid value in those algebras.<p>I don't mean to diss the idea; it's a good way to dip a toe into the math and computer science behind algebraic graph theory.  But in production, or for anything but the smallest (and densest) graphs, a sparse graph algebra library like SuiteSparse would be most appropriate.<p>SuiteSparse is used in MATLAB (A .* B calls SuiteSparse), FalkorDB, python-graphblas, OneSparse (a Postgres extension), and many other libraries.  The author, Tim Davis of TAMU, is a leading expert in this field of research.<p>(I'm a GraphBLAS contributor and the author of OneSparse)</p>
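A quick back-of-envelope on the dense-vs-sparse storage point above.  The sizes assume 8-byte values and a CSR-style layout (one value plus one 4-byte column index per edge, plus row pointers); the node count and degree are illustrative:

```python
# Memory for a graph with 1M nodes and average out-degree 100.
n = 1_000_000
avg_degree = 100
bytes_per_value = 8  # 64-bit edge weights

dense = n * n * bytes_per_value                  # every cell stored, zeros included
nnz = n * avg_degree                             # actual edges
sparse = nnz * (bytes_per_value + 4) + (n + 1) * 8  # CSR: values + col idx + row ptrs

print(f"dense : {dense / 1e12:.0f} TB")   # 8 TB
print(f"sparse: {sparse / 1e9:.1f} GB")   # 1.2 GB
```

Three to four orders of magnitude, before even counting the wasted multiply-adds on zeros during BFS.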
]]></description><pubDate>Thu, 11 Sep 2025 21:23:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45216254</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45216254</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45216254</guid></item><item><title><![CDATA[New comment by michelpp in "Adjacency Matrix and std:mdspan, C++23"]]></title><description><![CDATA[
<p>Yep, for any decent-sized graph sparse is an absolute necessity, since a dense matrix grows with the square of the number of nodes.  Sparse matrices and sparse matrix multiplication are complex, and there are multiple kernel approaches depending on density and other factors.  SuiteSparse [1] handles these cases, has a kernel JIT compiler for different scenarios and graph operations, and supports CUDA as well.  Worth checking out if you're into algebraic graph theory.<p>Using SuiteSparse and the standard GAP benchmarks, I've loaded graphs with 6 billion edges into 256GB of RAM and can BFS that graph in under a second. [2]<p>[1] <a href="https://github.com/DrTimothyAldenDavis/GraphBLAS" rel="nofollow">https://github.com/DrTimothyAldenDavis/GraphBLAS</a><p>[2] <a href="https://onesparse.com/" rel="nofollow">https://onesparse.com/</a></p>
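The "BFS as matrix multiplication" idea can be sketched in a few lines of plain Python: each step is one sparse matrix-vector product over the boolean semiring (OR as addition, AND as multiplication), with already-visited nodes masked out.  This is a toy illustration of the algebraic formulation, not how SuiteSparse implements it:

```python
def bfs_levels(adj: dict[int, set[int]], source: int) -> dict[int, int]:
    """BFS depths via repeated boolean-semiring 'matvec' on a sparse adjacency."""
    levels = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        level += 1
        # one "matvec": OR together the neighbor sets of the frontier,
        # then mask out nodes we've already visited
        frontier = {v for u in frontier for v in adj.get(u, ())} - levels.keys()
        for v in frontier:
            levels[v] = level
    return levels

adj = {0: {1, 2}, 1: {3}, 2: {3}, 3: {4}}
print(sorted(bfs_levels(adj, 0).items()))  # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3)]
```

SuiteSparse does the same thing conceptually, but over compressed sparse matrices with JIT-compiled semiring kernels, which is where the billions-of-edges-per-second numbers come from.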
]]></description><pubDate>Thu, 11 Sep 2025 21:04:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45216092</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45216092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45216092</guid></item><item><title><![CDATA[New comment by michelpp in "Adjacency Matrix and std:mdspan, C++23"]]></title><description><![CDATA[
<p>For a powerful sparse adjacency matrix C library, check out SuiteSparse GraphBLAS; there are bindings for Python, Julia, and Postgres.<p><a href="https://github.com/DrTimothyAldenDavis/GraphBLAS" rel="nofollow">https://github.com/DrTimothyAldenDavis/GraphBLAS</a></p>
]]></description><pubDate>Thu, 11 Sep 2025 20:58:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45216047</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45216047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45216047</guid></item><item><title><![CDATA[New comment by michelpp in "The Little Book of Linear Algebra"]]></title><description><![CDATA[
<p>Beautiful, a great intro to and reference for core concepts.  Definitely keeping this one around for mental refresh!</p>
]]></description><pubDate>Tue, 02 Sep 2025 16:42:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45105554</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=45105554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45105554</guid></item><item><title><![CDATA[New comment by michelpp in "GPU-rich labs have won: What's left for the rest of us is distillation"]]></title><description><![CDATA[
<p>Not sure why this is being downvoted; it's a thoughtful comment.  I too see this crisis as an opportunity to push boundaries past current architectures.  Sparse models, for example, show a lot of promise and more closely track real biological systems: the human brain has an estimated graph density of 0.0001 to 0.001.  Advances in sparse computing libraries and new hardware architectures could be key to achieving this kind of efficiency.</p>
]]></description><pubDate>Fri, 08 Aug 2025 20:31:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44841398</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=44841398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44841398</guid></item><item><title><![CDATA[New comment by michelpp in "Automerge 3.0"]]></title><description><![CDATA[
<p>There is also a C API wrapper, though I'm not sure of its state with respect to this latest release.</p>
]]></description><pubDate>Thu, 07 Aug 2025 00:30:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44819405</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=44819405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44819405</guid></item><item><title><![CDATA[New comment by michelpp in "Billions of Edges per Second with Postgres"]]></title><description><![CDATA[
<p>Author and CEO here, we're excited to get OneSparse off the ground and show off some high performance graph processing with Postgres.  Feel free to ask any questions!</p>
]]></description><pubDate>Tue, 15 Jul 2025 19:41:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44575045</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=44575045</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44575045</guid></item><item><title><![CDATA[New comment by michelpp in "Thnickels"]]></title><description><![CDATA[
<p>US nickels have smooth edges.</p>
]]></description><pubDate>Wed, 25 Jun 2025 20:15:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44381421</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=44381421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44381421</guid></item><item><title><![CDATA[New comment by michelpp in "Implementing a Forth"]]></title><description><![CDATA[
<p>Mastering Forth is also an excellent book and gets deep into the subject with very clear writing and excellent diagrams.<p><a href="https://archive.org/details/mastering-forth-by-anderson-anita-tracy-martin-z-lib.org" rel="nofollow">https://archive.org/details/mastering-forth-by-anderson-anit...</a></p>
]]></description><pubDate>Tue, 03 Jun 2025 22:56:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44175653</link><dc:creator>michelpp</dc:creator><comments>https://news.ycombinator.com/item?id=44175653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44175653</guid></item></channel></rss>