<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: chas</title><link>https://news.ycombinator.com/user?id=chas</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 01 May 2026 01:39:06 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=chas" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by chas in "Gradients are not all you need"]]></title><description><![CDATA[
<p>The major ML conferences all have pretty tight page limits, so expository sentences usually get cut. Papers therefore tend to explain only how their work differs from previous work, assuming you are familiar with the papers they cite or are willing to read them.<p>This means that people with up-to-date knowledge of a given subfield can quickly get a lot out of a new paper. Unfortunately, it also means it usually takes a pretty decent stack of papers to get up to speed on a new subfield, since you have to read the important segments of the commonly cited papers in order to gain the common knowledge that new papers are being diffed against.<p>Traditionally, this issue is solved by textbooks, since the base set of ideas in a given field or subfield is pretty stable. ML has been moving so fast in recent years, though, that there is still a sizable gap between the base knowledge required for productive paper reading and what you can get out of a textbook. For example, Goodfellow et al. [1] is a great intro to the core ideas of deep learning, but it was published before transformers were invented, so it doesn’t mention them at all.<p>[1] <a href="https://www.deeplearningbook.org/" rel="nofollow">https://www.deeplearningbook.org/</a></p>
]]></description><pubDate>Sun, 23 Apr 2023 21:08:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=35680641</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=35680641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35680641</guid></item><item><title><![CDATA[New comment by chas in "Are your memory-bound benchmarking timings normally distributed?"]]></title><description><![CDATA[
<p>I think it’s worthwhile to distinguish between throughput and latency in these sorts of discussions, rather than just talking about “performance”: scratchpads are usually better for latency (even best-case latency), while caches are usually better for throughput. Though of course, in this as in any discussion of computer performance, caveats abound.</p>
]]></description><pubDate>Fri, 07 Apr 2023 22:51:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=35488101</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=35488101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35488101</guid></item><item><title><![CDATA[New comment by chas in "Walking 8k Steps Twice a Week Could Prolong Your Life, Study Finds"]]></title><description><![CDATA[
<p>I think it’s important to note that you are assuming a gradual increase in activity over time, as well as adequate food and rest between sessions. In the limit case, things like rhabdomyolysis definitely exist, and in less-extreme cases you can have problems with connective tissue injuries from increasing the load or volume of exercise too fast. Over the long term, you can also have health problems and injuries caused by doing too much exercise relative to how much food and rest you are getting. All of these are real possibilities if someone started an intense exercise program after 20 years of a completely sedentary adult life.<p>That said, we are discussing this in the context of walking a bit more, which should be very well-tolerated by pretty much everyone, as long as they don’t have other significant health problems and work up gradually. For example, if you currently take 1000 steps a day, increase your daily count by 500-1000 steps each week until you reach 10000 steps a day over the course of a few months (temporarily pause or reverse the increases if you notice anything other than passing muscular discomfort).<p>This also explains how ultramarathons (and other “extreme” physical activities) can be tolerated without much issue—bodies are very good at adapting to the loads that are placed on them, as long as the increase is gradual and they have enough food and rest to rebuild after particularly intense bouts of activity.<p>A further corollary is that, over the long term, being sedentary is not well-tolerated at all, so maintaining a certain baseline level of physical activity is definitely a better idea. And as long as your body isn’t giving you any feedback to the contrary, you shouldn’t avoid high loads or intense activity just out of abstract fear of injury.</p>
]]></description><pubDate>Sun, 02 Apr 2023 20:31:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=35414867</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=35414867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35414867</guid></item><item><title><![CDATA[New comment by chas in "An Update on Dianna's Health (Physics Girl) [video]"]]></title><description><![CDATA[
<p>He developed ME/CFS in 2018 without a known cause (to the best of my knowledge).</p>
]]></description><pubDate>Fri, 10 Mar 2023 04:15:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=35090303</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=35090303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35090303</guid></item><item><title><![CDATA[New comment by chas in "Training Deep Networks with Data Parallelism in Jax"]]></title><description><![CDATA[
<p>Jax is a great tool, but it’s really best for training and experimentation. The transformations outlined in this post (amongst others) make it easy to turn simple and straightforward code into high-performance parallel code. While this is changing, inference hasn’t been a historical area of emphasis for the project, so it wouldn’t be my first choice if that were your primary goal.</p>
]]></description><pubDate>Fri, 24 Feb 2023 21:13:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=34930216</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=34930216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34930216</guid></item><item><title><![CDATA[New comment by chas in "Facts about State Machines"]]></title><description><![CDATA[
<p>It really depends on what you want in terms of implementation. On one hand, a big chunk of digital logic is about implementing large, high-speed state machines using circuits.<p>On the other hand, regular expressions and things like lexers are largely based on finite-state machines, so there is a ton of information related to those, but they often implement things more complex than finite-state machines in order to be more expressive.<p>In terms of manually implementing them yourself, I think the naive implementation based on the mathematical definition of a deterministic finite automaton is pretty good for a small number of states, for things like tracking program state. In particular: explicitly list the total set of expected inputs/transition criteria, explicitly list the states, and have a single function that takes the current state and the current transition-relevant data and produces a new state.<p>This representation is nice because it makes the behavior inspectable in one location. That makes it easier to notice the edge cases and prevents the state representation and possible transitions from getting spread all over the code. The transition function can be a switch statement or a table; really, any way of writing a two-input function with a finite number of inputs and outputs will work. Many people have also explored strongly-typed versions of this, which are worth a look as well.</p>
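To make the table-driven version concrete, here's a minimal Python sketch; the turnstile states and event names are my own illustrative choices, not anything from a particular library:

```python
# Table-driven DFA: states and inputs are listed explicitly, and a single
# transition function (here a dict lookup) maps (state, input) -> new state.
# Because the table is total over STATES x INPUTS, every edge case is
# visible in one place.
STATES = {"locked", "unlocked"}
INPUTS = {"coin", "push"}

TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def step(state, event):
    # Raises KeyError on an unexpected (state, event) pair, which surfaces
    # missing cases instead of silently ignoring them.
    return TRANSITIONS[(state, event)]

state = "locked"
for event in ["coin", "push", "push"]:
    state = step(state, event)
# state ends up back at "locked"
```

Swapping the dict for a switch statement (or a 2D array indexed by enums) gives the same behavior; the point is that the whole transition relation lives in one inspectable place.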
]]></description><pubDate>Fri, 30 Sep 2022 23:24:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=33041998</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=33041998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33041998</guid></item><item><title><![CDATA[New comment by chas in "Avoiding space leaks at all costs"]]></title><description><![CDATA[
<p>The combination of lazy evaluation and state mutation/side effects can be pretty difficult to reason about. For example, if you have a function that changes a global variable as part of a lazy computation, once that function could have been called, you have no way of knowing if or when that global variable will change in the future. If you have other functions that depend on the value of that variable, their future behavior is now much more challenging to reason about than in a strict language. You can also imagine something akin to a race condition, in which there are multiple lazy computations that could eventually set that variable to different values, and the actual sequence of state transitions depends entirely on the dependency order of a possibly unrelated piece of code. In practice, this means that in languages that are strict by default, lazy computations are often forced to run in order to reason about the code, rather than because the actual results of the computation are required.<p>Since pure functions compute the same results under lazy or strict evaluation and require that any data dependencies they have are explicitly provided as inputs, they interact with lazy computations in a much more tractable way. This means that adding a strictness operator to a lazy language is much easier than adding a laziness operator to a strict language.<p>An alternate approach is what Python did with generators: there is a data type for lazy computation, but it lives apart from the rest of the language, so it is mostly used for e.g. stream processing, where a default-lazy approach is conceptually straightforward and is less likely to lead to extremely non-trivial control flow. This approach does, however, basically give up on having a laziness operator that will turn a strict computation into a lazy one.</p>
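A tiny illustration of the point, using Python generators (the example itself is mine, purely for demonstration): when a lazy computation mutates shared state, the mutation happens when the consumer forces the value, not where the computation is written down.

```python
# Side effects inside a lazy computation run only when the value is forced,
# so the order of mutations depends on the consumer, not the definition site.
log = []

def lazy_writes():
    for i in range(3):
        log.append(i)   # side effect buried inside a lazy computation
        yield i

g = lazy_writes()
assert log == []         # defining the generator performed no side effects

next(g)                  # forcing one step runs exactly one mutation
assert log == [0]

list(g)                  # draining the rest runs the remaining mutations
assert log == [0, 1, 2]
```

If two such generators wrote to the same variable, the final value would depend entirely on which one the surrounding code happened to drain last, which is the race-condition-like behavior described above.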
]]></description><pubDate>Thu, 01 Sep 2022 15:53:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=32678240</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=32678240</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32678240</guid></item><item><title><![CDATA[New comment by chas in "Introduction to Haskell Typeclasses"]]></title><description><![CDATA[
<p>Funny enough, monoids/semigroups are one of my favorite parts of Haskell, and I really miss having an explicit abstraction for them in other languages.<p>A lot of the code I write is looping over all of the values of a data structure to either look for a certain condition being true (any/all boolean monoid), look for an extremal value (min/max semigroup), or combine all of the values in the data structure (sum/product/average monoid). The product of monoids is also a monoid, so if you want to do two or more of those operations at once, the code remains simple and orthogonal while only traversing the structure once.<p>In particular, all of those operations are nicely expressed as one-liners using foldMap (<a href="https://hackage.haskell.org/package/base-4.9.1.0/docs/Data-Foldable.html#v:foldMap" rel="nofollow">https://hackage.haskell.org/package/base-4.9.1.0/docs/Data-F...</a>). If you have a data structure called `xs` and each element in the data structure has an integer field called `size`, you can do the following by using foldMap with different Monoid instances.<p>Sum the sizes:<p><pre><code>  let (Sum total_size) = foldMap (\x -> Sum (size x)) xs
</code></pre>
Check if any size is greater than 5:<p><pre><code>  let (Any over_threshold) = foldMap (\x -> Any ((size x) > 5)) xs
</code></pre>
Do both in one pass:<p><pre><code>  let (Sum total_size, Any over_threshold) = foldMap (\x -> (Sum (size x), Any ((size x) > 5))) xs 
</code></pre>
Get the first entry with a size greater than 5:<p><pre><code>  let (First over_threshold) = foldMap (\x -> First (if (size x) > 5 then Just x else Nothing)) xs
</code></pre>
Get all entries with a size greater than 5:<p><pre><code>  let over_threshold = foldMap (\x -> if (size x) > 5 then [x] else []) xs
</code></pre>
Since all of these operations are associative, we can completely change the data structure or run the operations in parallel or concurrently, and the code will still behave exactly the same. These nice properties mean that when I'm thinking about these sorts of tasks in any language, I think about what monoid or semigroup captures a given operation, rather than any of the mechanics of writing the loop. I find it clarifies my thinking a lot.</p>
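For readers who don't write Haskell, here's a rough Python translation of the "product of monoids" trick (my own sketch, with made-up data): one pass computes the sum and the any-over-threshold flag at the same time by pairing the two combine functions.

```python
# The (Sum, Any) product monoid, hand-rolled: the accumulator is a pair,
# and the combine function applies each monoid's operation pointwise.
from functools import reduce

xs = [3, 7, 2, 6]  # stand-in for a structure of elements with a `size` field

def combine(acc, x):
    total, over = acc
    return (total + x, over or x > 5)   # Sum and Any, combined in one step

total_size, over_threshold = reduce(combine, xs, (0, False))
# total_size == 18, over_threshold == True
```

Because `combine` is associative (each component is), the list could be split into chunks, folded in parallel, and the chunk results merged with the same function, exactly the property the Haskell version gets for free.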
]]></description><pubDate>Wed, 27 Apr 2022 17:50:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=31183382</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=31183382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31183382</guid></item><item><title><![CDATA[New comment by chas in "Functional programming with bananas, lenses, envelopes and barbed wire [pdf] (1991)"]]></title><description><![CDATA[
<p>But lenses as functional references are very common in the Haskell world, whereas the only place I've heard of anamorphisms (unfolds) being referred to as lenses is in this paper. It also confused me quite a bit when I first ran into it, but the name just comes from the notation used in the paper. It is super hard to google for other references to anamorphisms being called lenses, though, because of yet another unrelated concept: <a href="https://en.wikipedia.org/wiki/Anamorphic_format" rel="nofollow">https://en.wikipedia.org/wiki/Anamorphic_format</a></p>
]]></description><pubDate>Tue, 26 Apr 2022 07:06:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=31164823</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=31164823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31164823</guid></item><item><title><![CDATA[New comment by chas in "Functional programming with bananas, lenses, envelopes and barbed wire [pdf] (1991)"]]></title><description><![CDATA[
<p>While the first author of this paper went on to be heavily involved in Microsoft's Reactive Extensions (among many other things), I think it's better to think of it as making common recursive patterns explicit and introducing abstractions for those patterns. For example, a common use of goto is to run a certain block of code until a condition becomes true. This is abstracted into a pattern called "while", which takes a condition and a block of code to run. A common use of recursion is the reduce (also called fold) operation, which, in the case of a list, computes a single value from the list by combining all of its values with a binary function. As an example, this code sums all of the values in an integer list:<p><pre><code>  def sum(xs):
    if xs == []:
      return 0
    else:
      # take the head without mutating the caller's list
      return xs[0] + sum(xs[1:])
</code></pre>
This sort of reduction operation is very broadly useful, so it has been abstracted in frameworks like MapReduce as well as library functions like functools.reduce (<a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow">https://docs.python.org/3/library/functools.html#functools.r...</a>). Recursion schemes build on explicit reduce functions by being strongly typed (which, in addition to reducing bugs, enables something like a super-powered visitor pattern), very orthogonal (which reduces redundancy and code duplication as a user of the abstraction), and very general (which lets you solve a lot of problems with the same small set of programming tools without needing to remember special cases). In particular, the way that they look at data structures lets you interleave the recursion across nested data structures in a way that would be a huge pain in the butt with e.g. Python's __iter__ interface. There are some other nifty things that the approach brings to the table as well, but I think those are the major wins from the perspective of someone not already interested in strongly-typed functional programming.<p>While I covered the case of things that are sort of like reduce (called catamorphisms in the language of recursion schemes), this paper also has analogous abstractions for things that are sort of like itertools.accumulate (<a href="https://docs.python.org/3/library/itertools.html#itertools.accumulate" rel="nofollow">https://docs.python.org/3/library/itertools.html#itertools.a...</a>, called anamorphisms in recursion schemes), as well as combinations thereof (called hylomorphisms). They all use a relatively small number of building blocks and combine in a quite beautiful and useful way, but it's hard to describe precisely without leaning quite heavily on the language of strongly-typed functional programming.</p>
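To make the reduce/accumulate pairing concrete, here's the recursive sum from above rewritten with the two stdlib functions mentioned (the data is mine, just for illustration):

```python
# functools.reduce captures the fold pattern once; we only supply the
# combining function and the base case.
from functools import reduce
from itertools import accumulate

xs = [1, 2, 3, 4]

total = reduce(lambda acc, x: acc + x, xs, 0)
# total == 10, same result as the hand-written recursion

# itertools.accumulate is the unfold-flavored counterpart: it yields every
# intermediate value instead of only the final one.
running = list(accumulate(xs))
# running == [1, 3, 6, 10]
```

The recursion-schemes version of this idea generalizes both functions from lists to arbitrary nested data structures, which is where the payoff described above comes from.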
]]></description><pubDate>Tue, 26 Apr 2022 06:58:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=31164781</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=31164781</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31164781</guid></item><item><title><![CDATA[New comment by chas in "Functional programming with bananas, lenses, envelopes and barbed wire [pdf] (1991)"]]></title><description><![CDATA[
<p>This is a great paper, but I found it very challenging to read when I first ran into it, even though I had some experience programming in Haskell. As a one-sentence pitch: this paper is to recursion what if and while are to goto. For a more detailed intro to the concepts, see <a href="https://blog.sumtypeofway.com/posts/introduction-to-recursion-schemes.html" rel="nofollow">https://blog.sumtypeofway.com/posts/introduction-to-recursio...</a>. If you are comfortable with Haskell, it should be relatively accessible. If you aren’t, but want a bit more flavor than my one sentence, the intro section of that blog post is still worth a read.</p>
]]></description><pubDate>Tue, 26 Apr 2022 05:03:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=31164110</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=31164110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31164110</guid></item><item><title><![CDATA[New comment by chas in "The Principles of Deep Learning Theory"]]></title><description><![CDATA[
<p>Where are the uses of the calculus of variations with neural nets?</p>
]]></description><pubDate>Sun, 17 Apr 2022 02:06:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=31057811</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=31057811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31057811</guid></item><item><title><![CDATA[New comment by chas in "Ask HN: Beating depression with or without anti-depressants?"]]></title><description><![CDATA[
<p>This isn’t an accurate description of the possible side-effects. Anti-depressants can cause meaningful side-effects including changes to appetite, difficulty regulating body temperature, nausea, changes to sleep, and akathisia. They are also very, very not fun to quit abruptly if you run into an intolerable side-effect. That said, nothing that you said about depression is wrong and the effects of untreated depression can be far worse than the side-effects of anti-depressants. It can just be hard for people to tell what is normal and what is abnormal and potentially caused by their medication, so they should be informed so that they can switch medication if they run into side-effects. (And know if their doctor is not providing them with good information.)</p>
]]></description><pubDate>Tue, 05 Apr 2022 23:03:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=30926263</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=30926263</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30926263</guid></item><item><title><![CDATA[New comment by chas in "An Intuitive Guide to Linear Algebra"]]></title><description><![CDATA[
<p>A semi-decidable problem is still pretty bad news from a computational perspective, but I agree that it's not the best example of what I was trying to illustrate. I was aiming for something dramatic and (somewhat) approachable, but ended up emphasizing properties of vector spaces as free abelian groups, rather than as vector spaces per se (which undermines my emphasis of the specialness of vector spaces in comparison to other algebraic structures). That said, to the best of my knowledge, the algorithms for computing whether two finitely-generated* abelian groups are isomorphic take advantage of the close relationship between finitely-generated abelian groups and vector spaces to compute the Smith normal form of matrices associated with the groups and then compare the normal forms. This takes roughly O(n·m·(sublinear factors)) for n × m matrices [0]. So to revise my example, vector spaces with a finite basis (and any finitely-generated free abelian group) can be compared for isomorphism in constant time, and finitely-generated non-free abelian groups take time roughly quadratic in the number of generators, so there is a huge win there still.<p>Do you have a favorite example that highlights the unique computational properties of vector spaces?<p>*I don't know how this changes in the finitely-presented case, but I assume the extra constraint can be used to improve the performance of the algorithms. It's a lot easier to find asymptotic analysis of the finitely-generated case, though, and I don't see a way around dealing with the fact that it's still not free.<p>[0] - I'm basing this on Chapter 8 of <a href="https://cs.uwaterloo.ca/~astorjoh/diss2up.pdf" rel="nofollow">https://cs.uwaterloo.ca/~astorjoh/diss2up.pdf</a>, but this is a deep field in which I am not an expert, so if you are, I'd love to hear more.</p>
]]></description><pubDate>Sat, 02 Apr 2022 21:42:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=30891938</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=30891938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30891938</guid></item><item><title><![CDATA[New comment by chas in "An Intuitive Guide to Linear Algebra"]]></title><description><![CDATA[
<p>Excel has the built-in MMULT function, but I’m not aware of any built-in support for eigenvalues or eigenvectors. Many people have written such functions, though.<p>That said, I would be surprised if Excel spreadsheets were implemented as matrices. Since you can update one cell and have it automatically update any computation that uses that cell, I would expect spreadsheets to be implemented with some sort of dependency graph, so it’s easy to traverse and update the values that need to be changed. (This could be implemented as an adjacency matrix, but I haven’t seen that representation used before for programming-language dataflow analysis.)</p>
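Here's a toy model of the dependency-graph idea; to be clear, this is my speculation about how such engines work in general, not Excel's actual implementation, and the cell names and formulas are made up:

```python
# A spreadsheet as a dependency graph: each cell has a formula over other
# cells, and setting a cell re-evaluates only the cells that (transitively)
# depend on it, rather than recomputing the whole sheet.
deps = {"B1": ["A1"], "C1": ["B1"]}            # cell -> cells it reads
formula = {
    "B1": lambda v: v["A1"] * 3,
    "C1": lambda v: v["B1"] + 1,
}
values = {"A1": 2, "B1": 6, "C1": 7}           # initial consistent state

def dependents(cell):
    # cells that directly read `cell`
    return [c for c, ds in deps.items() if cell in ds]

def set_cell(cell, value):
    values[cell] = value
    # walk dependents breadth-first, re-evaluating as we go
    queue = dependents(cell)
    while queue:
        c = queue.pop(0)
        values[c] = formula[c](values)
        queue.extend(dependents(c))

set_cell("A1", 10)
# values is now {"A1": 10, "B1": 30, "C1": 31}
```

A real engine would topologically sort the dirty set so diamond-shaped dependencies aren't evaluated twice, but the data structure is the same: a graph, not a matrix.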
]]></description><pubDate>Thu, 31 Mar 2022 21:09:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=30872231</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=30872231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30872231</guid></item><item><title><![CDATA[New comment by chas in "An Intuitive Guide to Linear Algebra"]]></title><description><![CDATA[
<p>And for extra magic, since every vector space has a basis, every linear transform between vector spaces with a finite basis can be represented by a finite matrix (<a href="https://en.m.wikipedia.org/wiki/Transformation_matrix" rel="nofollow">https://en.m.wikipedia.org/wiki/Transformation_matrix</a>). While this might feel obvious if you haven’t explored structure-preserving transforms between other types of algebraic objects (e.g. groups, rings), it is in fact very special. Learning this made me a lot more interested in linear algebra. It unifies the algebraic viewpoint that emphasizes things like the superposition property (T(x+y) = T(x) + T(y) and T(ax) = aT(x)) with the computational viewpoint that emphasizes calculations using matrices.<p>Since all linear transforms between vector spaces with a finite basis are finite matrices, the computational tools make it tractable to calculate properties of vector spaces that aren’t even decidable for e.g. groups. For a simple but remarkable example: all vector spaces of the same finite dimension (over the same field) are isomorphic, but in general it’s undecidable to compute whether two finitely-presented groups are isomorphic.</p>
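The superposition property is easy to check concretely; here's a small self-contained example (my own, with an arbitrary matrix) showing that applying a matrix commutes with vector addition and scalar multiplication:

```python
# Verify T(x+y) = T(x) + T(y) and T(a*x) = a*T(x) for T(v) = M v,
# using plain lists so nothing beyond the stdlib is needed.
def apply(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(a, v):
    return [a * x for x in v]

M = [[1, 2], [3, 4]]       # any matrix works; this one is arbitrary
x, y = [1, 0], [2, 5]

assert apply(M, add(x, y)) == add(apply(M, x), apply(M, y))
assert apply(M, scale(7, x)) == scale(7, apply(M, x))
```

The converse is the special part: every map satisfying those two identities on a finite-dimensional space is `apply(M, ...)` for some matrix M, which is what makes the matrix machinery fully general.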
]]></description><pubDate>Thu, 31 Mar 2022 19:57:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=30871596</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=30871596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30871596</guid></item><item><title><![CDATA[New comment by chas in "Koenigsegg's Tiny Electric Motor Makes 335 HP and 443 LB-FT of Torque"]]></title><description><![CDATA[
<p>For most brushless motors--though not axial flux motors like the one under consideration here--it's much easier to get heat out by passing air axially across the stator. This means that as a motor gets longer, while the surface area is increasing proportionally with length, your ability to flow air across that surface area is not keeping up. For the same motor volume, a shorter and fatter motor will have better airflow and cooling than a long, skinny motor.</p>
]]></description><pubDate>Wed, 02 Feb 2022 20:41:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=30183730</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=30183730</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30183730</guid></item><item><title><![CDATA[New comment by chas in "Ask HN: Burnout at 20, How to Recover?"]]></title><description><![CDATA[
<p>It sounds like both your work and your hobby involve pretty intense coding. If you are coding for 16-18 hours a day, that doesn't leave enough time for sleep, exercise, and socializing. I've found I need a regular amount of all three of those (and enough high-quality food) in order to have my mind working well for long periods of time. Even if you really love coding, these things ebb and flow, so it's okay for your job to just be a job instead of this all-consuming thing. You'll have to more aggressively prioritize which tasks you work on instead of trying to do everything, but that's an important part of maturing as a programmer and a leader. Maybe confine work to a fixed period of the day and spend the rest of your time on some hobbies, ideally ones that are physical, outdoors, or with other people. (I understand that these are harder to find right now than they were a few years ago.)<p>While having a more restorative life outside your job will help, I also agree that a therapist is a good idea, both to help with your anxiety and to help while navigating the transition from coding being your entire life to it being part of your life (at least for now).</p>
]]></description><pubDate>Wed, 02 Feb 2022 18:18:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=30181584</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=30181584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30181584</guid></item><item><title><![CDATA[New comment by chas in "Koenigsegg's Tiny Electric Motor Makes 335 HP and 443 LB-FT of Torque"]]></title><description><![CDATA[
<p>Heat dissipation, mostly. The square-cube law is brutal for getting the heat out.</p>
]]></description><pubDate>Tue, 01 Feb 2022 09:21:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=30160286</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=30160286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30160286</guid></item><item><title><![CDATA[New comment by chas in "CPM MagnaCut"]]></title><description><![CDATA[
<p>Folks are certainly working on it from many directions, but it’s a pretty hard problem, since getting ground-truth data involves making and testing materials, which is a very different problem to automate than training machine learning models. You need a fairly cross-disciplinary team to make progress. As an example of folks doing good work: <a href="https://a3md.utoronto.ca/" rel="nofollow">https://a3md.utoronto.ca/</a></p>
]]></description><pubDate>Mon, 27 Dec 2021 00:44:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=29697831</link><dc:creator>chas</dc:creator><comments>https://news.ycombinator.com/item?id=29697831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29697831</guid></item></channel></rss>