<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: orclev</title><link>https://news.ycombinator.com/user?id=orclev</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 17:09:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=orclev" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by orclev in "3M knew its chemicals were harmful decades ago, but didn't tell the public"]]></title><description><![CDATA[
<p>Can do both: destroy the company, but also go after those responsible. Anyone who knew about the situation but didn't make their supervisor(s) aware is personally liable for damages. Anyone on the board or in the C-suite who was aware and either did nothing or didn't notify regulators is also personally liable. In both cases, if anyone died as a result of the company's actions, they should be tried for manslaughter.</p>
]]></description><pubDate>Sun, 17 Dec 2023 21:19:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=38676560</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=38676560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38676560</guid></item><item><title><![CDATA[New comment by orclev in "3M knew its chemicals were harmful decades ago, but didn't tell the public"]]></title><description><![CDATA[
<p>Corporate capital punishment: the company gets seized, its patents and copyrights are made public domain, and any other assets it has are sold to the highest bidder, with the proceeds used to compensate the victims.</p>
]]></description><pubDate>Sun, 17 Dec 2023 20:13:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=38675975</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=38675975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38675975</guid></item><item><title><![CDATA[New comment by orclev in "Save Open Source /-/ the Impending Tragedy of the Cyber Resilience Act"]]></title><description><![CDATA[
<p>Why not? Nothing in the GPL says you need to certify your code for commercial usage in the EU.<p>Edit: In fact, reading the GPL it looks like it might implicitly already preclude usage in the EU under this law. There's this section right here:<p><pre><code>    11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
    12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
</code></pre>
That seems to suggest that any cost associated with certifying for commercial usage in the EU would fall on the company using the GPL-licensed code, not on the developers of that code. Certifying the code for commercial usage under the EU law would, I'd argue, constitute a warranty, one explicitly disclaimed by the GPL.</p>
]]></description><pubDate>Wed, 19 Jul 2023 20:17:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=36792564</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=36792564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36792564</guid></item><item><title><![CDATA[New comment by orclev in "Blizzard’s bringing its PC games to Steam, starting with Overwatch 2"]]></title><description><![CDATA[
<p>Does OW2 even run on the Steam Deck? ProtonDB lists its compatibility as unknown, but competitive PvP games generally struggle there due to their highly intrusive DRM/anti-cheat, which wants rootkit-level access.</p>
]]></description><pubDate>Wed, 19 Jul 2023 20:11:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=36792492</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=36792492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36792492</guid></item><item><title><![CDATA[New comment by orclev in "Save Open Source /-/ the Impending Tragedy of the Cyber Resilience Act"]]></title><description><![CDATA[
<p>There's no need to actually block EU downloads; just state that no open source software is certified for commercial usage in the EU. Any company that ignores that is now in violation of this law, and that's their problem. The EU will need to decide whether it wants to allow its businesses to continue benefiting from open source software, or to fix this law.<p>The really interesting question is what happens when commercial software companies outside the EU that use open source libraries decide they don't want to deal with this headache, <i>also</i> start refusing to certify their software for use in the EU, and stop doing business there.</p>
]]></description><pubDate>Wed, 19 Jul 2023 20:00:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=36792335</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=36792335</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36792335</guid></item><item><title><![CDATA[New comment by orclev in "Ask HN: Which query builders (SQL) inspire you and why?"]]></title><description><![CDATA[
<p>The best one I've seen is jOOQ, although there are some caveats. There are a number of advantages and a few disadvantages to using a query builder. On the positive side:<p>1) Implementation-agnostic queries. For better or worse, SQL is a VERY loose standard with tons of little quirks in each underlying DB. A query builder lets you write a standardized syntax that then gets translated to match the quirks of the actual SQL implementation for you. This <i>somewhat</i>, although not entirely, insulates you from the underlying implementation's details.<p>2) It works with existing language tooling. Because you're writing in the host language, you can take advantage of things like IntelliSense and syntax highlighting to write code and spot typos. With a more traditional SQL client you'll be embedding your queries inside strings, and in the worst case constructing ad-hoc queries by concatenating strings.<p>3) Related to point 2, you get compile-time checks that your queries are well formed and that your types make sense. This is also where the first major problems come in, but more on that later.<p>4) Possible compile-time optimization of your queries. I'm not aware of any builder that does this, but just as language compilers run optimization passes over the AST generated from your code, a builder could in theory optimize the query produced from its AST (possibly even at compile time).<p>And now for the cons.<p>1) You're dependent on the query builder to support every weird quirk and advanced feature, which raises the problem of what to do when a query uses a feature that isn't supported by the underlying implementation. If you want some slightly obscure feature supported by your particular DB but not by the query builder, you might just be SOL.<p>2) Compile-time headaches from generating or validating code. Often these tools work best when, at compile time, they can connect to your actual DB to read its schema and either generate code (such as enums of tables and columns) or validate queries (e.g. checking that a varchar column is being treated as a string and not an int). If you have a conveniently accessible local instance or dev environment this might not be a problem, but then you often also need a solution for your CI/CD server. That says nothing of the generated code itself, which is its own set of headaches.<p>3) Yet another DSL to learn. You're no longer writing SQL, but something SQL-adjacent that has been projected onto another language in a no doubt imperfect fashion. You now need knowledge of both SQL and your language of choice simultaneously in order to write queries. In theory the compile-time checks and language support might make this a wash, but it can become relevant when you hit some weird edge case and need to debug.<p>As for what I'd wish for in an ideal query builder: I think it's VERY important to provide options that avoid needing a DB connection at compile time, while still providing the advantages that connection usually brings. Being able to, e.g., point the tools at files checked into version control containing your schema DDL, and validate queries or generate code based on that, would be a great feature.<p>Beyond that, providing adequate escape hatches for unusual features when building queries is also important. There should be a way to invoke arbitrary functions or chunks of raw SQL outside the confines of the query DSL (with the understandable restriction that checking of such chunks will be minimal at best).</p>
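To make the compile-time-check point concrete, here's a minimal toy query builder in Python — a hypothetical sketch for illustration, not jOOQ, with all names invented — showing how a typed DSL can reject a malformed query before it ever reaches the database:

```python
# Minimal toy query builder (hypothetical sketch, not jOOQ).
# Columns carry their types, so a mismatched comparison fails
# when the query is built, not when it executes against the DB.

class Column:
    def __init__(self, table, name, py_type):
        self.table, self.name, self.py_type = table, name, py_type

    def eq(self, value):
        # Reject values of the wrong type up front.
        if not isinstance(value, self.py_type):
            raise TypeError(
                f"{self.table}.{self.name} expects {self.py_type.__name__}, "
                f"got {type(value).__name__}")
        return f"{self.table}.{self.name} = {value!r}"

class Query:
    def __init__(self, table):
        self.table, self.columns, self.conditions = table, [], []

    def select(self, *cols):
        self.columns = [f"{c.table}.{c.name}" for c in cols]
        return self

    def where(self, condition):
        self.conditions.append(condition)
        return self

    def to_sql(self):
        sql = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.conditions:
            sql += " WHERE " + " AND ".join(self.conditions)
        return sql

# Hypothetical schema constants -- in a real tool these would be
# generated from the DB schema (or, ideally, from DDL files in
# version control) at build time.
USERS_ID = Column("users", "id", int)
USERS_NAME = Column("users", "name", str)

q = Query("users").select(USERS_ID, USERS_NAME).where(USERS_ID.eq(42))
print(q.to_sql())  # SELECT users.id, users.name FROM users WHERE users.id = 42
```

Passing `USERS_ID.eq("forty-two")` raises a `TypeError` immediately — the toy equivalent of the varchar-vs-int validation described above.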
]]></description><pubDate>Fri, 17 Feb 2023 20:36:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=34840183</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=34840183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34840183</guid></item><item><title><![CDATA[New comment by orclev in "Best practices can slow your application down"]]></title><description><![CDATA[
<p>That's interesting; it didn't use to be linear, particularly on the very large instances. Oddly, the RHEL prices are much closer to what all the prices used to look like (I hadn't actually looked at AWS pricing in a few years). I wonder if AWS's virtualization tech has now reached the point where all processing is effectively fungible, and it's all really just executing on clusters of mid-range CPUs no matter how large your virtual server is.</p>
]]></description><pubDate>Tue, 09 Mar 2021 00:13:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=26393485</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=26393485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26393485</guid></item><item><title><![CDATA[New comment by orclev in "Best practices can slow your application down"]]></title><description><![CDATA[
<p>It's because Erlang abstracts away the difference between distributed and centralized services. In Erlang everything is message passing, and Erlang doesn't much care whether the components passing messages are running on the same server or on multiple servers in a DC; it will route the messages either way. In many ways Erlang processes are the ultimate microservices, to the point where the entire language is built around them.</p>
]]></description><pubDate>Mon, 08 Mar 2021 17:50:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=26388957</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=26388957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26388957</guid></item><item><title><![CDATA[New comment by orclev in "Best practices can slow your application down"]]></title><description><![CDATA[
<p>Perhaps I should have said he was a manager on the Excel team.</p>
]]></description><pubDate>Mon, 08 Mar 2021 17:36:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=26388713</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=26388713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26388713</guid></item><item><title><![CDATA[New comment by orclev in "Best practices can slow your application down"]]></title><description><![CDATA[
<p>Pretty sure Amazon's entire backend is C++. Hard to get much bigger than that.</p>
]]></description><pubDate>Mon, 08 Mar 2021 16:58:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=26388046</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=26388046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26388046</guid></item><item><title><![CDATA[New comment by orclev in "Best practices can slow your application down"]]></title><description><![CDATA[
<p>Eh, not sure that follows. Here's the thing: costs aren't linear. If you do what AWS does and create artificial "compute units" as a fungible measure of processing power, what you'll find is that the sweet spot for price per compute unit is a medium-power system. Current mid-range processors tend to be slightly more expensive than low-end processors, but significantly cheaper than high-end ones.<p>So, hypothetically, let's say you can get one of 4 processors: a low-end one that gives you 75 units for $80, a mid-range processor that gives you 100 units for $100, a high-end processor that gives you 125 units for $150, and the top-of-the-line processor that gives you 150 units for $300. If you normalize those costs, the 4 processors come out to roughly $1.07, $1.00, $1.20, and $2.00 per compute unit. The best value is the $1-per-unit price point of the $100 processor. So if you need 150 units of compute power you have 2 choices: 2 of the $100 processors, or 1 $300 processor. Clearly the better option is the 2 $100 processors. That's scaling out. In SO's case though, they took that off the table, because their formula isn't just the cost of the processor (ignoring related things like RAM and storage) but also includes a per-instance license cost. Their math ends up looking more like ($100 CPU + $150 Windows license) × 2 = $500, vs. ($300 CPU + $150 Windows license) × 1 = $450, which makes the more expensive processor the cheaper option in terms of total cost.</p>
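The arithmetic, worked in a few lines of Python (all prices are the hypothetical ones from the example above):

```python
# Price per compute unit for four hypothetical processors.
cpus = {  # name: (compute units, dollar cost)
    "low":  (75, 80),
    "mid":  (100, 100),
    "high": (125, 150),
    "top":  (150, 300),
}
for name, (units, cost) in cpus.items():
    print(f"{name}: ${cost / units:.2f} per compute unit")
# low: $1.07, mid: $1.00, high: $1.20, top: $2.00

LICENSE = 150  # hypothetical per-instance OS license cost

# Reaching 150 units: two mid-range boxes vs one top-end box.
scale_out = 2 * (100 + LICENSE)  # 2 x ($100 CPU + $150 license)
scale_up = 1 * (300 + LICENSE)   # 1 x ($300 CPU + $150 license)
print(scale_out, scale_up)  # 500 450 -- the license flips the winner
```

Without the per-instance license the two mid-range boxes win ($200 vs $300); with it, scaling up is cheaper.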
]]></description><pubDate>Mon, 08 Mar 2021 16:38:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=26387711</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=26387711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26387711</guid></item><item><title><![CDATA[New comment by orclev in "Best practices can slow your application down"]]></title><description><![CDATA[
<p>It's important to remember that Joel Spolsky, one of the founders of Stack Overflow, is an old-school Microsoft guy who managed the Excel team before starting his own company. He has always been a bit of a Microsoft fanboy, and used to advocate heavily for VB as a serious language, back before it got swallowed by .NET and turned into a less powerful syntax for C#. Given that, the fact that they chose C# isn't surprising at all; it actually would have been far more surprising to see them pick anything but C#.<p>I think the previous poster was a bit off the mark though. The language's performance wasn't <i>really</i> the issue; rather, it's the fact that they picked a language that at the time really only ran on Windows, and as a consequence they were forced into running their web servers on Windows. That choice then forces you to scale up rather than out, since each instance has license costs attached to it. For most companies running on Linux it's trivial to scale out, since your only costs are the compute cost (or the hardware cost in a non-cloud model), whereas it tends to be far more expensive to scale up, as more powerful hardware tends toward geometric price increases rather than linear ones. These days the choice of C# wouldn't be such a big issue, since .NET Core runs easily on Linux servers, but back in the 2000s using C# meant hanging a pretty big albatross around your neck by way of Windows licenses.</p>
]]></description><pubDate>Mon, 08 Mar 2021 15:47:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=26386934</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=26386934</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26386934</guid></item><item><title><![CDATA[New comment by orclev in "Lord of the Ring(s): Side Channel Attacks on the CPU On-Chip Ring Interconnect"]]></title><description><![CDATA[
<p>Considering the shift from Intel to AMD over the last couple of years, it will be interesting to see whether AMD becomes more of a priority target in the future. AMD has certainly been leading in sales in most demographics for the last year or two, and there's little sign of Intel closing that gap. AMD has even made some moves recently that indicate a desire to chase Intel out of the lead in the few niches it still controls, like the low-power/low-cost x86 device market that has been dominated by Atom/Celeron.</p>
]]></description><pubDate>Mon, 08 Mar 2021 14:58:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=26386413</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=26386413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26386413</guid></item><item><title><![CDATA[New comment by orclev in "The Code It Yourself Manifesto (2016)"]]></title><description><![CDATA[
<p>There's an important difference between using something because it's popular and using something because it's a standard designed for your problem. For something simple like sending a few numbers across the wire, XML is massive overkill (as an aside, there's actually very little that XML is a good solution to). CSV, TSV, JSON arrays, one of dozens of serialization formats, or even just a simple EOL-separated list of values like you proposed are all both standardized and very simple solutions to the problem. On the other hand, had they proposed inventing some new binary serialization protocol and using that to transmit the numbers, that would have been even worse than using XML.<p>You should always pick the simplest solution that meets all your requirements, but when weighing solutions you should favor standards-compliant ones. A common example is date formats. Lots of places roll their own date format string when sending dates, but using ISO-8601 will save you (and your clients) so many headaches in the long run.<p>Honestly, for your example, not knowing all the details I can't say for sure whether an EOL-separated list is a good solution, but based on just the description I probably would have gone with CSV, or possibly a JSON array. I definitely would not have used XML (dear god, why would anyone pick XML in this day and age?), although if they were concerned about needing to add more data down the line I could maybe see an argument for something a bit more involved than CSV.</p>
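A quick Python illustration of the date-format point — an ad-hoc format next to its ISO-8601 equivalent:

```python
from datetime import datetime, timezone

now = datetime(2020, 11, 27, 20, 49, 34, tzinfo=timezone.utc)

# Ad-hoc format: ambiguous (is 03/04 March 4th or April 3rd?),
# carries no timezone, and doesn't sort as a plain string.
adhoc = now.strftime("%m/%d/%y %I:%M %p")

# ISO-8601: unambiguous, timezone-aware, lexicographically
# sortable, with parsers available in effectively every language.
iso = now.isoformat()

print(adhoc)  # 11/27/20 08:49 PM
print(iso)    # 2020-11-27T20:49:34+00:00
```

The round trip back through `datetime.fromisoformat(iso)` recovers the exact original instant, which is precisely what an ad-hoc string can't guarantee a foreign client.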
]]></description><pubDate>Fri, 27 Nov 2020 20:49:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=25232455</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=25232455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25232455</guid></item><item><title><![CDATA[New comment by orclev in "How io_uring and eBPF Will Revolutionize Programming in Linux"]]></title><description><![CDATA[
<p>L4 uses a similar model, and the last ~20 years of research around L4 have mostly focused on improving IPC performance and security. The core abstraction is a mechanism to control message passing between apps, routed through lightweight kernel invocations (which is indeed practically the only thing the kernel does, it being a microkernel architecture).<p>Memory access is enforced, although not technically by the kernel. Rather, at boot time the kernel owns all memory; during init it slices off all the memory it doesn't need for itself and passes the rest to a user-space memory service, and thereafter all memory requests get routed through that process. L4 uses a security model in which permissions (including resource access) and their derivatives can be passed from one process to another. Using that system, the memory manager process can slice off chunks of its memory and delegate access to those chunks to other processes.</p>
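A rough sketch of that delegation chain in Python — a toy illustration only (real L4 capabilities are kernel objects, and every name here is invented):

```python
# Toy model of L4-style memory-capability delegation.
# Not real L4 code; all names are invented for illustration.

class MemoryCapability:
    """Grants access to a contiguous range of page frames."""
    def __init__(self, start, length):
        self.start, self.length = start, length

    def split(self, pages):
        # Carve off `pages` frames and delegate them as a new,
        # smaller capability; the grantor keeps the remainder.
        if pages > self.length:
            raise ValueError("cannot delegate more than is held")
        child = MemoryCapability(self.start, pages)
        self.start += pages
        self.length -= pages
        return child

# At boot the kernel holds everything...
kernel_cap = MemoryCapability(start=0, length=1_000_000)
# ...keeps what it needs, and hands the rest to a user-space manager.
manager_cap = kernel_cap.split(pages=990_000)
# The manager then delegates slices to ordinary processes.
app_cap = manager_cap.split(pages=4_096)
print(app_cap.start, app_cap.length)  # 0 4096
```

The point of the model is that each holder can only delegate what it already holds, so access control falls out of the delegation chain rather than out of kernel policy checks on every request.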
]]></description><pubDate>Fri, 27 Nov 2020 18:15:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=25231203</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=25231203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25231203</guid></item><item><title><![CDATA[New comment by orclev in "The Code It Yourself Manifesto (2016)"]]></title><description><![CDATA[
<p>A large, complicated, well-maintained, and widely used library is infinitely preferable to a large, complicated library you need to maintain yourself, used only by you. In a similar vein, a well-known standard format (or encoding) will always be a better choice than some ad-hoc format you create yourself, because not only will that standard have encountered and dealt with problems you haven't even considered, but there are also likely to be a plethora of libraries, frameworks, and tools that support it, whereas if you create something yourself <i>you</i> end up needing to create everything you need.<p>Your time is generally better spent solving your core problem rather than the dozens of ancillary problems that end up needing to be solved along the way (particularly when a whole bunch of other people have already spent a whole bunch of time solving them).</p>
]]></description><pubDate>Fri, 27 Nov 2020 17:45:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=25230818</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=25230818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25230818</guid></item><item><title><![CDATA[New comment by orclev in "CAPTCHAs don’t prove you’re human – they prove you’re American (2017)"]]></title><description><![CDATA[
<p>The problem is that security questions are fundamentally flawed. Most of them are easily guessable with a little research, and because they can often be used to bypass your password, they're effectively a backdoor into your account. You're generally better off using them as either a backup password (that is, something not guessable even given knowledge about you) or simply not using them at all. If you forget your password, reset it via your e-mail account. In short: don't use security questions, they're fundamentally broken.</p>
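A minimal sketch of the backup-password approach in Python: generate a random "answer" and store it in a password manager instead of answering truthfully (the function name is just for illustration):

```python
import secrets

# Treat each "security question" as a second random password:
# the stored answer reveals nothing about you and can't be
# researched, only looked up in your password manager.
def random_answer(length=20):
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print("Mother's maiden name:", random_answer())
```

`secrets` (rather than `random`) matters here: it uses the OS CSPRNG, so the answer is as unguessable as a real password.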
]]></description><pubDate>Fri, 27 Nov 2020 17:23:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=25230568</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=25230568</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25230568</guid></item><item><title><![CDATA[New comment by orclev in "Big Trouble in a Deep Void"]]></title><description><![CDATA[
<p>The term dark matter itself is kind of misleading in the first place. The math in general relativity just doesn't add up when applied to certain observations. They've essentially gotten 2 + 2 = 5. In order to fix that they just assume that one of those 2's was actually a 3 somehow, and the extra 1 it picked up got labelled as dark matter. In other words, dark matter is just a term for some missing numbers somewhere in the calculation. Based on the different places where the extra numbers might be included they <i>think</i> it's something with mass, but really that's just a guess based on the existing formula and the changes that would be necessary to make it match the observation.</p>
]]></description><pubDate>Mon, 26 Oct 2020 17:43:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=24898843</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=24898843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24898843</guid></item><item><title><![CDATA[New comment by orclev in "Big Trouble in a Deep Void"]]></title><description><![CDATA[
<p>So, first thing: I'm not an astrophysicist, nor even a physicist of any type, so I might be misunderstanding things here, but this is how I interpret all this. Hopefully if I've gotten something significantly wrong someone will correct it.<p>General relativity matches observations up to a point. The issue is that it stops matching once you reach galactic scales. In order to explain why, you need to start hand waving, and the start of that is dark matter. MOND was thought up not so much as an alternative to general relativity as an alternative to dark matter. It tweaks some of the math used in general relativity so that gravity behaves differently at different strengths. Basically, once a gravitational field is strong enough it behaves like the gravity we know, but below that point its effects diminish at a different rate. Doing that explains why galaxies behave the way they do: through the bulk of a galaxy, gravity is strong enough that it behaves exactly as general relativity says it should, but out near the edges gravity has grown weak enough that it behaves differently. It's sort of hand wavy and leaves a bit of a bad taste in the mouth, since there's no real explanation of <i>why</i> gravity should behave that way. On the other hand, it doesn't require some phantom matter that we have no observational data to back up.<p>Either theory falls far short, and both of them require a lot of fudging around the edges to align with galactic-scale observations, although MOND, once you get past the arbitrary change to gravity, seems to require less hand waving. Importantly for the linked paper, it also seems to line up with the proposed theory and predict the kind of void the paper is predicated on, which would be a strong point in favor of MOND.<p>Key to the proposed theory: general relativity predicts that in the first moments after the big bang the universe was essentially uniform, that everything spread out more or less evenly, and that it wasn't until much later, when things started to form the likes of planets and stars, that we began seeing significant variation in the matter distribution of the universe. MOND, on the other hand, allows for variation in that initial expansion. That matters for the paper because there simply isn't enough time in the general relativity model for a void of the size their theory predicts would be necessary to form. MOND, by allowing more variability early on, does allow enough time for a void of the necessary size to exist.<p>Basically: general relativity on its own doesn't work for things galaxy-sized and bigger. MOND on its own doesn't work at galactic-cluster scales and above. The theory proposed in the paper could explain the discrepancy we see in the rate of expansion of the universe, but it doesn't seem to be possible under general relativity, while it is possible under MOND. Both general relativity and MOND rely on the presence of things not yet observed in order to match our observations once you scale things up far enough, and neither on its own can explain why the universe seems to be expanding faster than they predict it should. The paper proposes one theory for that, but it's only possible with MOND.</p>
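For reference, the standard way MOND formalizes "gravity behaves differently below a threshold" is an interpolating function acting on the acceleration, with a characteristic scale a_0:

```latex
% MOND modifies the Newtonian relation below a characteristic
% acceleration a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}:
\mu\!\left(\frac{a}{a_0}\right) a = g_N,
\qquad
\mu(x) \approx
\begin{cases}
  1 & x \gg 1 \quad \text{(Newtonian regime)} \\
  x & x \ll 1 \quad \text{(deep-MOND regime)}
\end{cases}
```

In the deep-MOND regime this gives $a = \sqrt{g_N a_0}$, which falls off as $1/r$ rather than $1/r^2$ — exactly the behavior that reproduces flat galactic rotation curves without extra mass.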
]]></description><pubDate>Mon, 26 Oct 2020 16:43:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=24898139</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=24898139</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24898139</guid></item><item><title><![CDATA[New comment by orclev in "Big Trouble in a Deep Void"]]></title><description><![CDATA[
<p>Not sure, honestly. That might have something to do with the νHDM theory mentioned in the paper. I didn't really follow a lot of it, but I <i>think</i> (major grain of salt here) it's saying that that theory predicts some very light neutrinos exist and tend to collect inside galactic clusters, but not inside galaxies themselves, and that it's those neutrinos that make up the missing mass at universal scales that general relativity explains using dark matter.<p>Either case seems pretty hand wavy, honestly. When it comes to galaxy- and universe-level physics, it all seems pretty weak compared to the sort of particle physics and classical physics we can actually measure and test on Earth. It's all just a bunch of theoretical math with relatively few actual measurements to pin it down. I don't think we're anywhere near having a solid theory of the universe, so at this point it's mostly an exercise in proving which theory is the least wrong, rather than which one is correct.</p>
]]></description><pubDate>Mon, 26 Oct 2020 14:55:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=24896805</link><dc:creator>orclev</dc:creator><comments>https://news.ycombinator.com/item?id=24896805</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24896805</guid></item></channel></rss>