<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mmyrte</title><link>https://news.ycombinator.com/user?id=mmyrte</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 06 May 2026 15:04:48 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mmyrte" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mmyrte in "Google Chrome silently installs a 4 GB AI model on your device without consent"]]></title><description><![CDATA[
<p>Forgive my ignorance, but it seems that there is more information in the "explicitly inclusive" form than in the "implicitly inclusive" one. Doesn't the existence of the inclusive form allow you to explicitly use a non-inclusive form? So in this case, "Lehrer" would be explicitly male and "Lehrer:innen" explicitly inclusive.<p>I appreciate that this seems to be an emotional topic, but if people choose to use language in a new way, would it not be best not to withhold that information from you as a reader? Someone else wrote that it's like using an ad-blocker, but if I were to read an article, I would want to read it in the exact form someone wrote it, no? It's a bit like Americans auto-replacing "fucking" with "f***g" in their browsers to avoid an annoyance; they lose information in the process.</p>
]]></description><pubDate>Tue, 05 May 2026 10:08:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=48020362</link><dc:creator>mmyrte</dc:creator><comments>https://news.ycombinator.com/item?id=48020362</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48020362</guid></item><item><title><![CDATA[New comment by mmyrte in "Google Chrome silently installs a 4 GB AI model on your device without consent"]]></title><description><![CDATA[
<p>It seems like you would lose meaning by automatically replacing words, no? Why would you want to censor your internet experience, just because you find someone else's use of language awkward?</p>
]]></description><pubDate>Tue, 05 May 2026 09:41:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=48020135</link><dc:creator>mmyrte</dc:creator><comments>https://news.ycombinator.com/item?id=48020135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48020135</guid></item><item><title><![CDATA[New comment by mmyrte in "A tale of two teenagers (2023)"]]></title><description><![CDATA[
<p>I read it not as a self-diagnosis but as "I have tested positive, and moreover I believe in the veracity/adequacy of the test".</p>
]]></description><pubDate>Sun, 07 Jan 2024 08:04:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=38899462</link><dc:creator>mmyrte</dc:creator><comments>https://news.ycombinator.com/item?id=38899462</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38899462</guid></item><item><title><![CDATA[New comment by mmyrte in "Understanding Parquet, Iceberg and Data Lakehouses"]]></title><description><![CDATA[
<p>TL;DR: In climatology, I know people are using zarr. However, I think columnar storage as in parquet also merits consideration.<p>My thinking goes as follows: I'm trying to read chunks from n-dimensional data with a minimum of skips/random reads. For user-facing analytics and drilling down into the data, these chunks tend to be relatively few, and I'd like to have them close to one another. For high-level statistics, however, I only care that the data for each chunk of work be contiguous, since I'm going to read all chunks eventually anyway.<p>You can reach these goals with a partitioning strategy in HDF, zarr, or parquet, but you could also reach them with blob fields in a more traditional DB, be it relational, document-based, or whatever. Since any storage and memory is linear, I don't care whether a row-major or column-major array is populated from a 1d vector from columnar storage with dimensionality metadata, or from an explicitly array-based storage format; I just trust that a table with good columnar compression doesn't waste too much storage on what is implicit in (dense) array storage.<p>Often, I've found that even climatological data _as it pertains to a specific analytic scenario_ is actually a sparse subset of an originally dense nd-array, e.g. only looking at data over land. This has led me to advocate for more tabular approaches, but this is very domain-specific.</p>
]]></description><pubDate>Sat, 30 Dec 2023 17:36:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=38816942</link><dc:creator>mmyrte</dc:creator><comments>https://news.ycombinator.com/item?id=38816942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38816942</guid></item><item><title><![CDATA[New comment by mmyrte in "My uBlock Origin filters to remove distractions"]]></title><description><![CDATA[
<p>"Orion" is developed by the people at Kagi, free, and blocks a lot. As soon as I'm earning a salary again, I'll be supporting them financially; super valuable.</p>
]]></description><pubDate>Wed, 20 Sep 2023 19:08:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=37588625</link><dc:creator>mmyrte</dc:creator><comments>https://news.ycombinator.com/item?id=37588625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37588625</guid></item><item><title><![CDATA[New comment by mmyrte in "Using Lidar to map tree shadows"]]></title><description><![CDATA[
<p>Have you thought of marketing to cities or public-sector consultancies for modelling urban heat islands? It might be handy for prioritizing climate adaptation measures.</p>
]]></description><pubDate>Mon, 10 Jul 2023 06:13:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=36662729</link><dc:creator>mmyrte</dc:creator><comments>https://news.ycombinator.com/item?id=36662729</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36662729</guid></item><item><title><![CDATA[New comment by mmyrte in "Using Lidar to map tree shadows"]]></title><description><![CDATA[
<p>Are you sure you mean topological data science? I know that there are topological methods for classifying high-dimensional data structures, but this discussion is mostly geographical/topographical. Yes, it does describe a surface, but there's a fundamental assumption that all objects are either on a plane or on a sphere.<p>edit: If you mean GIS (geographical information systems/science), there are plenty of undergraduate courses scattered across GitHub. IMO, the R geospatial ecosystem is more mature than its Python counterpart, but both are very usable.</p>
]]></description><pubDate>Mon, 10 Jul 2023 06:02:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=36662656</link><dc:creator>mmyrte</dc:creator><comments>https://news.ycombinator.com/item?id=36662656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36662656</guid></item></channel></rss>