<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: BadInformatics</title><link>https://news.ycombinator.com/user?id=BadInformatics</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 01:48:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=BadInformatics" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by BadInformatics in "State of Machine Learning in Julia"]]></title><description><![CDATA[
<p>I think it's more complicated than that. The projects that are getting the funding are usually the hard, technical ones, but that funding also supports better docs + more time for API design. This doesn't apply to bleeding-edge stuff, but look back through the core SciML libraries and there's no shortage of effort directed towards "dull" stuff like docs + improving compile times. Likewise for the core language: a lot of recent work is bread-and-butter engineering like (again) improving compile times, filing rough edges off APIs and (gradually) tackling the deployment story.<p>Now, one area where this dull-problem work isn't as noticeable is the "core" deep learning libraries (Flux and Zygote). AFAICT those two haven't received any significant funding for a couple of years, and there is at most one full-time, active contributor across both of them. Compare with JAX or even higher-level wrapper libraries like Flax, Haiku or PyTorch Lightning, which have 5-10+ full-time core devs. Given this, is it surprising that progress on anything (including docs + interface design) is slow?</p>
]]></description><pubDate>Wed, 12 Jan 2022 15:36:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=29907799</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=29907799</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29907799</guid></item><item><title><![CDATA[New comment by BadInformatics in "CT Scan of a Pumpkin"]]></title><description><![CDATA[
<p>You're in luck, because (assuming the scans are in a compatible format), this is exactly what 3D Slicer was designed for.</p>
]]></description><pubDate>Sun, 31 Oct 2021 21:37:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=29060704</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=29060704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29060704</guid></item><item><title><![CDATA[New comment by BadInformatics in "Senior developers are leading the great resignation movement"]]></title><description><![CDATA[
<p>Who is doing more than a couple rounds of interviews outside of (pardon the scare quotes) "tech companies"? Has anyone run into, say, a bank pulling 4+ rounds of interviews, or is this limited to FAANG(M), SV companies and startups that seek to emulate them?</p>
]]></description><pubDate>Fri, 01 Oct 2021 16:47:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=28720702</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28720702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28720702</guid></item><item><title><![CDATA[New comment by BadInformatics in "Why is a Dollar like a Neanderthal"]]></title><description><![CDATA[
<p>Bronson and Pigott were a nice read, but it's unclear where Föll is getting most of his conjecture from, because they certainly don't talk about it. Frankly, the whole piece smacks of the same Diamond-esque "they made fireworks, we made guns" trope that has been thoroughly torn apart since. Now, to his credit, he does claim ignorance at the top of the page, but that seems to be quickly forgotten given how much hyperbole is spewed later on.<p>[1] is a better lay overview of medieval-era steel-making. And for a great breakdown of the forces behind European success in the modern era, see Bret Devereaux's series on EU4 [2].<p>[1] <a href="https://www.youtube.com/watch?v=5djVkOgu8vs" rel="nofollow">https://www.youtube.com/watch?v=5djVkOgu8vs</a><p>[2] <a href="https://acoup.blog/2021/05/28/collections-teaching-paradox-europa-universalis-iv-part-iv-why-europe/" rel="nofollow">https://acoup.blog/2021/05/28/collections-teaching-paradox-e...</a></p>
]]></description><pubDate>Sun, 05 Sep 2021 18:15:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=28426404</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28426404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28426404</guid></item><item><title><![CDATA[New comment by BadInformatics in "Why is a Dollar like a Neanderthal"]]></title><description><![CDATA[
<p>Trying to interpret individual radicals of a character as standalone components using their original meaning is enticing, but more often than not incorrect. For example, the character for maternal aunt uses the same radical. Phonetic-semantic compound characters are very, very common. The standalone pronunciation of 夷 doesn't appear to have Turkic/steppe origins either [1].<p>Moreover, we know Mongolian writing (because of the geopolitics of the time and its status as a younger written tradition) borrowed quite liberally from its southern neighbours, including, but not limited to, China [2]. So while Wagner's point about the proliferation of ironmaking techniques from outside the (nominal) Chinese state at the time makes sense, the whole phonetic angle doesn't.<p>As for the points about centralization and family-name elitism, the first lasted less than 200 years, by which time many formerly aristocratic family names had become _so_ diluted as to be almost meaningless. One of the main conceits of a major character in RoTK is that he's an average Joe who only gets a modicum of respect for having the same surname as the dynastic family. It also completely ignores the existence of profession-based surnames like 匠 ("artisan", notably 1/2 of 铁匠/blacksmith).<p>[1] <a href="https://en.wikipedia.org/wiki/Dongyi#Yi" rel="nofollow">https://en.wikipedia.org/wiki/Dongyi#Yi</a>
[2] <a href="https://en.wikipedia.org/wiki/Mongolian_writing_systems" rel="nofollow">https://en.wikipedia.org/wiki/Mongolian_writing_systems</a></p>
]]></description><pubDate>Sun, 05 Sep 2021 17:52:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=28426193</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28426193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28426193</guid></item><item><title><![CDATA[New comment by BadInformatics in "Canada, April: Under-65 excess mortality exceeds under-65 Covid-19 deaths"]]></title><description><![CDATA[
<p>It does not, and any discussion of whether certain public health measures should've been implemented should take that into consideration. Toronto-area hospitals were literally sending ICU patients to smaller cities because their own wards were overflowing. Moreover, attrition rates among clinicians (nurses especially) have been atrocious over the past year or so. People are only willing to put up with so much shit for so long, and most provincial systems have zero slack at the moment.<p>That said, measures like GP described were/are in play in many cities. Seniors' time was a fixture in the first few months of the pandemic, especially in smaller areas that did not experience a large caseload.<p>That's another point too: I think a lot of HN commenters are unaware of just how fragmented and regional the Canadian healthcare system is. No two provinces implemented the same restrictions or policies at the same time, and only a couple put in strict stay-at-home-style lockdowns. Note how the article mentions large increases in both Ontario (lax policies, then sudden strict lockdowns) and Alberta (very few restrictions). Even in Ontario, walking outside the few biggest cities would result in an immediate drop of most of the strict measures present in, say, the GTA. I know it's hard to capture this nuance discussing with strangers on some random online forum, but it's essential if we are to properly discuss cause and effect.</p>
]]></description><pubDate>Sun, 29 Aug 2021 02:17:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=28343496</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28343496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28343496</guid></item><item><title><![CDATA[New comment by BadInformatics in "Rent control isn’t working in Sweden"]]></title><description><![CDATA[
<p>Canadaland did a great series on the pathology of Vancouver real estate recently [1]. TL;DL there is no consensus on the root cause, but the usual suspects of bureaucracy, NIMBYism and foreign investment all make an appearance.<p>[1] <a href="https://www.canadaland.com/podcast/real-estate-3-terminal-city/" rel="nofollow">https://www.canadaland.com/podcast/real-estate-3-terminal-ci...</a></p>
]]></description><pubDate>Fri, 27 Aug 2021 02:05:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=28323529</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28323529</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28323529</guid></item><item><title><![CDATA[New comment by BadInformatics in "Rent control isn’t working in Sweden"]]></title><description><![CDATA[
<p>Vancouver may have "no industry" relative to SV, but it is a veritable black hole for tech on the west coast of Canada. The same rat race of high-skilled, well-paying jobs only being available in HCoL cities is just as much of an issue north of the border. The even smaller gap between compensation and CoL in Vancouver, Toronto, etc. just serves to make things more miserable.</p>
]]></description><pubDate>Fri, 27 Aug 2021 01:54:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=28323463</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28323463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28323463</guid></item><item><title><![CDATA[The Trouble with SPIR-V]]></title><description><![CDATA[
<p>Article URL: <a href="https://xol.io/blah/the-trouble-with-spirv/">https://xol.io/blah/the-trouble-with-spirv/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=28308508">https://news.ycombinator.com/item?id=28308508</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 25 Aug 2021 23:26:48 +0000</pubDate><link>https://xol.io/blah/the-trouble-with-spirv/</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28308508</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28308508</guid></item><item><title><![CDATA[New comment by BadInformatics in "Intel’s Arc GPUs will compete with GeForce and Radeon in early 2022"]]></title><description><![CDATA[
<p>Saying all the underdog competitors should team up is a nice idea, but as anyone who has seen how the standards sausage is made (or, indeed, has tried something similar) will tell you, it is often more difficult than everyone going their own way. It might be unintuitive, but coordination is <i>hard</i> even when you're not jockeying for position with your collaborators. This is why I mentioned the silver bullet part: a surface level analysis leads one to believe collaboration is the optimal path, but that starts to show cracks real quickly once one starts actually digging into the details.<p>To end things on a somewhat brighter note, there will be no sea change unless people put in the time and effort to get stuff like Vulkan compute working. As-is, most ML people (somewhat rightfully) expect accelerator support to be handed to them on a silver platter. That's fine, but I'd argue by doing so we lose the right to complain about big libraries and hardware vendors doing what's best for their own interests instead of for the ecosystem as a whole.</p>
]]></description><pubDate>Tue, 17 Aug 2021 05:17:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=28206455</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28206455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28206455</guid></item><item><title><![CDATA[New comment by BadInformatics in "Intel’s Arc GPUs will compete with GeForce and Radeon in early 2022"]]></title><description><![CDATA[
<p>> Do you think AMD should solve every problem CUDA solves for their customers too?<p>They had no choice. Getting a bunch of HPC people to completely rewrite their code for a different API is a tough pill to swallow when you're trying to win supercomputer contracts. Would they have preferred to spend development resources elsewhere? Probably; they've even got their own standards and SDKs from days past.<p>> everyone else using GPU's is running fast as they can towards Vulkan<p>I'm not qualified to comment on the entirety of it, but I can say that basically no claim in this statement is true:<p>1. Not everyone doing compute is using GPUs. Companies are increasingly designing and releasing their own custom hardware (TPUs, IPUs, NPUs, etc.)<p>2. Not everyone using GPUs cares about Vulkan. Certainly many folks doing graphics stuff don't, and DirectX is as healthy as ever. There have been bits and pieces of work around Vulkan compute for mobile ML model deployment, but it's a tiny niche and doesn't involve discrete GPUs at all.<p>> Is it just too soon too early in the adoption curve<p>Yes. Vulkan compute is still missing many of the niceties of more developed compute APIs. Tooling is one big part of that: writing shaders using GLSL is a pretty big step down from using whatever language you were using before (C++, Fortran, Python, etc.).<p>> do ya'll think there are more serious obstructions long term to building a more Vulkan centric AI/ML toolkit<p>You could probably write a whole page about this, but TL;DR yes. It would take <i>at least</i> as much effort as AMD and Intel put into their respective compute stacks to get Vulkan ML anywhere near ready for prime time. You need inference, training, cross-device communication, headless GPU usage, reasonably wide compatibility, not-garbage performance, framework integration, passable tooling and more.<p>Sure, these are all feasible, but who has the incentive to put in the time to do it? 
The big 3 vendors have their supercomputer contracts already, so all they need to do is keep maintaining their 1st-party compute stacks. Interop also requires going through Khronos, which is its own political quagmire when it comes to standardization. Nvidia already managed to obstruct OpenCL into obscurity; why would they do anything different here? Downstream libraries have also poured untold millions into existing compute stacks, or rely on the vendors to implement that functionality for them. This is before we even get into custom hardware like TPUs that don't behave like a GPU at all.<p>So in short, there is little inevitable about this at all. The reason people may have been frustrated by your comment is that Vulkan compute comes up all the time as some silver bullet that will save us from the walled gardens of CUDA and co. (especially for ML, arguably the most complex and expensive subdomain of them all). We'd all like it to come true, but until all of the aforementioned points are addressed, this will remain primarily in pipe-dream territory.</p>
]]></description><pubDate>Mon, 16 Aug 2021 21:50:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=28203391</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28203391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28203391</guid></item><item><title><![CDATA[New comment by BadInformatics in "Intel’s Arc GPUs will compete with GeForce and Radeon in early 2022"]]></title><description><![CDATA[
<p>I've mentioned this on other forums, but it would help to have some kind of easily visible, public tracker for this progress. Even a text file, set of GitHub issues or project board would do.<p>Why? Because as-is, most people still believe support for gfx1000 cards is non-existent in any ROCm library. Of course that's not the case as you've pointed out here, but without any good sign of forward progress, your average user is going to assume close to zero support. Vague comments like <a href="https://github.com/RadeonOpenCompute/ROCm/issues/1542" rel="nofollow">https://github.com/RadeonOpenCompute/ROCm/issues/1542</a> are better than nothing, but don't inspire that much confidence without some more detail.</p>
]]></description><pubDate>Mon, 16 Aug 2021 21:24:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=28203183</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28203183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28203183</guid></item><item><title><![CDATA[New comment by BadInformatics in "A future for SQL on the web"]]></title><description><![CDATA[
<p>Google may have coined the term (not sure about this), but it's far from their own thing [1]. PWA should've been a blanket term for a set of standards and guidelines for developing web apps. Those include progressive enhancement, which I don't think most people would expect.<p>Unfortunately, the term has been co-opted to mean "website I can install/pin as an app". Again, Google is probably to blame for this, but AFAICT that was never the intended meaning of the term. What it does do is create misunderstandings, like a sibling thread claiming that (desktop) Firefox doesn't support PWAs because you can't install anything.<p>[1] <a href="https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/Progressive_web...</a></p>
]]></description><pubDate>Thu, 12 Aug 2021 20:36:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=28161107</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=28161107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28161107</guid></item><item><title><![CDATA[New comment by BadInformatics in "FastAI.jl: FastAI for Julia"]]></title><description><![CDATA[
<p>Transformers.jl and TextAnalysis.jl already provide quite a bit of functionality for NLP, though to my knowledge neither makes use of RNNs. You may be interested in commenting on <a href="https://github.com/FluxML/Flux.jl/issues/1678" rel="nofollow">https://github.com/FluxML/Flux.jl/issues/1678</a>.</p>
]]></description><pubDate>Wed, 28 Jul 2021 14:58:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=27984589</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=27984589</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27984589</guid></item><item><title><![CDATA[New comment by BadInformatics in "In defense of hard counters in real time strategy games"]]></title><description><![CDATA[
<p>The community staying very active has been the biggest factor here, IMO. The predecessor to the current Definitive Edition, the rather disastrous (engine-wise, not content-wise) HD Edition, was created in part by co-opting community-made mod content and hiring on some of the creators. This has continued for both the Definitive Edition and AOE 4.<p>Likewise, most of the biggest pros and casters started their careers 5-10+ years ago working on community tournaments and other grassroots events. Even though there's a lot more money now with investment from Microsoft, Red Bull and others, that grassroots core has stuck around and feels (at least to me) fresher than the very corporate machinery around Blizzard RTSes. It's funny to think that the most anticipated LAN tournament is literally held in someone's apartment (<a href="https://www.ageofempires.com/news/nac3-tournament/" rel="nofollow">https://www.ageofempires.com/news/nac3-tournament/</a>)!</p>
]]></description><pubDate>Tue, 27 Jul 2021 22:23:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=27978397</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=27978397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27978397</guid></item><item><title><![CDATA[New comment by BadInformatics in "FastAI.jl: FastAI for Julia"]]></title><description><![CDATA[
<p>From TFA:<p>> We’ll also be hosting a Q&A session 02.08., 10PM UTC (03.08., 12AM CEST | 8AM AEST). Jeremy will be there, too. Meeting link will follow soon.<p>My understanding is that he's at least been aware of this since early development (just over a year), so make of that what you will.</p>
]]></description><pubDate>Tue, 27 Jul 2021 22:06:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=27978268</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=27978268</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27978268</guid></item><item><title><![CDATA[New comment by BadInformatics in "FastAI.jl: FastAI for Julia"]]></title><description><![CDATA[
<p>Having tried fastai for a "serious" research project and helped (just a bit) with FastAI.jl development, here's my take:<p>> motivation behind this is unclear.<p>Julia currently has two main DL libraries: Flux, which is somewhere between PyTorch and (tf.)Keras abstraction-wise, and Knet, which is a little lower level (think just below PyTorch/around where MXNet Gluon sits). Frameworks like fastai, PyTorch Lightning and Keras demonstrate that there's a desire for higher-level, more batteries-included libraries. FastAI.jl is looking to fill that gap in Julia.<p>> Since FastAI.jl uses Flux, and not PyTorch, functionality has to be reimplemented. FastAI.jl has vision support but no text support yet.<p>This is correct. That said, FastAI.jl is not and does not plan to be a copy of the Python API (hence "inspired by"). One consequence of this is that integration with other libraries is much easier, e.g. <a href="https://github.com/chengchingwen/Transformers.jl" rel="nofollow">https://github.com/chengchingwen/Transformers.jl</a> for NLP tasks.<p>> What is the timeline for FastAI.jl to achieve parity?<p>> When should I choose FastAI.jl vs fastai?<p>This depends on your use cases and how comfortable you are with a) Julia and b) having to roll some of your own code. For the first, I'd recommend poking around with the language beforehand, as well as using the linked dev channel in TFA, to get an informed opinion.<p>FastAI.jl itself is composed of multiple constituent packages that can be (and are) used independently, so there's also the option of mixing and matching. For example, <a href="https://github.com/lorenzoh/DataLoaders.jl" rel="nofollow">https://github.com/lorenzoh/DataLoaders.jl</a> is completely library agnostic.</p>
]]></description><pubDate>Tue, 27 Jul 2021 22:04:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=27978256</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=27978256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27978256</guid></item><item><title><![CDATA[New comment by BadInformatics in "What's bad about Julia?"]]></title><description><![CDATA[
<p><a href="https://github.com/jkrumbiegel/Chain.jl" rel="nofollow">https://github.com/jkrumbiegel/Chain.jl</a> would probably work there, but you're not the only one who would like to see this in the language/stdlib itself.</p>
]]></description><pubDate>Mon, 26 Jul 2021 23:13:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=27966653</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=27966653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27966653</guid></item><item><title><![CDATA[New comment by BadInformatics in "What's bad about Julia?"]]></title><description><![CDATA[
<p>For sure, I didn't mean to imply they weren't looking at compute too! <a href="https://github.com/apache/arrow-datafusion" rel="nofollow">https://github.com/apache/arrow-datafusion</a> is another example of the shared compute vision. What I was trying to point out is that (at least for Arrow core) they seem to eschew FFI and generating shared libraries in favour of from-scratch implementations in other compiled languages and direct bindings in interpreted ones.</p>
]]></description><pubDate>Mon, 26 Jul 2021 23:10:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=27966620</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=27966620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27966620</guid></item><item><title><![CDATA[New comment by BadInformatics in "What's bad about Julia?"]]></title><description><![CDATA[
<p>As much as many of us would like it to be, the kind of data science work you see Scala used for is a pretty small part of what Julia is used for.<p>I think a big part of that is because DS rarely involves writing fast numeric kernels or hot inner loops, i.e. user code that needs to do numeric stuff quickly. This is in large part because very large organizations have poured untold millions into libraries that already handle this (e.g. Spark).<p>In domains where this has not happened, or which have more bespoke requirements (e.g. modelling and simulation), something like Julia is far more compelling. That's not to say Julia isn't viable for DS, but unless more practitioners start feeling stuck in a rut [1], I don't see the mindshare changing dramatically.<p>[1] <a href="https://dl.acm.org/doi/10.1145/3317550.3321441" rel="nofollow">https://dl.acm.org/doi/10.1145/3317550.3321441</a></p>
]]></description><pubDate>Mon, 26 Jul 2021 22:40:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=27966366</link><dc:creator>BadInformatics</dc:creator><comments>https://news.ycombinator.com/item?id=27966366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27966366</guid></item></channel></rss>