<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: brosco</title><link>https://news.ycombinator.com/user?id=brosco</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 05:59:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=brosco" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by brosco in "What is a manifold?"]]></title><description><![CDATA[
<p>One reason is that it would be like hanging a picture using a sledgehammer. If you're just studying various ways of unwrapping a sphere, the (very deep) theory of manifolds is not necessary. I'm not a cartographer but I would assume they care mostly about how space is distorted in the projection, and have developed appropriate ways of dealing with that already.<p>Another is that when working with manifolds, you usually don't get a set of <i>global</i> coordinates. Manifolds are defined by various <i>local</i> coordinate charts. A smooth manifold just means that you can change coordinates in a smooth (differentiable) way, but that doesn't mean two people on opposite sides of the manifold will agree on their coordinate system. On a sphere or circle, you can get an "almost global" coordinate system by removing the line or point where the coordinates would be ambiguous.<p>I'm not very well versed in the history, but the study of cartography certainly predates the modern idea of an abstract manifold. In fact, the modern view was born in an effort to unify a lot of classical ideas from the study of calculus on spheres etc.</p>
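<p>To make the "almost global" remark concrete, here is the standard chart on the sphere, written out (a minimal sketch in my own notation):</p><pre><code>% One coordinate chart on S^2:
(\theta, \phi) \in (0, \pi) \times (-\pi, \pi)
    \;\mapsto\; (\sin\theta \cos\phi,\ \sin\theta \sin\phi,\ \cos\theta)
</code></pre><p>This covers the whole sphere except a half great circle (the meridian at \phi = \pm\pi, together with the two poles). A second, rotated copy of the same chart covers what's left, and on the overlap the change of coordinates is smooth, which is exactly the smooth-manifold condition.</p>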
]]></description><pubDate>Tue, 04 Nov 2025 16:34:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45812871</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=45812871</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45812871</guid></item><item><title><![CDATA[New comment by brosco in "Doing well in your courses: Andrej's advice for success (2013)"]]></title><description><![CDATA[
<p>Good point! I used to be guilty of this myself, so now I'm pretty sensitive about other people doing it. I am now one of the more senior students in an academic research group, and some of the younger members would benefit from this advice. I think it's a symptom of sophomorism, and hopefully most will grow out of it.<p>I agree it's especially frustrating when they don't even get it right. That crosses the line for me, and I will admonish them to let me finish what I am saying.</p>
]]></description><pubDate>Mon, 20 Oct 2025 06:55:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45640710</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=45640710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45640710</guid></item><item><title><![CDATA[New comment by brosco in "Doing well in your courses: Andrej's advice for success (2013)"]]></title><description><![CDATA[
<p>I'm not saying it's a learning method. And I don't see how anyone could mistake this for science, so why would it be pseudoscience? It's not really about effort either.<p>It's just a trick that helps me pay attention in lectures, which a lot of people struggle with. Certainly you have to put in the work outside of the classroom as well.</p>
]]></description><pubDate>Sun, 19 Oct 2025 17:57:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45636321</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=45636321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45636321</guid></item><item><title><![CDATA[New comment by brosco in "Doing well in your courses: Andrej's advice for success (2013)"]]></title><description><![CDATA[
<p>I have a tip for following lectures (or any technical talk, really) that I've been meaning to write about for a while.<p>As you follow along with the speaker, try to predict what they will say next. These can be either local or global predictions. Guess what they will write next, or what will be on the next slide. With some practice (and exposure to the subject area) you can usually get it right. Also try to keep track of how things fit into the big picture. For example in a math class, there may be a big theorem that they're working towards using lots of smaller lemmas. How will it all come together?<p>When you get it right, it will feel like you are figuring out the material on your own, rather than having it explained to you. This is the most important part.<p>If you can manage to stay one step ahead of the lecturer, it will keep you way more engaged than trying to write everything down. Writing puts you one step behind what the speaker is saying. Because of this, I usually don't take any notes at all. It obviously works better when lecture notes are made available, but you can always look at the textbook.<p>People often assume that I have read the material or otherwise prepared for lectures, seminars, etc., because of how closely I follow what the speaker is saying. But really most talks are quite logical, and if you stay engaged it's easy to follow along. The key is to not zone out or break your concentration, and I find this method helps me immensely.</p>
]]></description><pubDate>Sun, 19 Oct 2025 17:41:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45636152</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=45636152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45636152</guid></item><item><title><![CDATA[New comment by brosco in "Researchers Discover the Optimal Way to Optimize"]]></title><description><![CDATA[
<p>One of OpenAI's founding team members developed Adam [0] well before it was flashy and profitable. It's not like nobody is out there trying to develop new algorithms.<p>The reality is that there are some great, mature solvers out there that work well enough for most cases. And while it might be possible to eke out more performance on specific problems, it would be very hard to beat existing solvers in general.<p>Theoretical developments like this, while interesting on their own, don't really contribute much to day-to-day users of linear programming. A lot of smart people have worked very hard to "optimize the optimizers" from a practical standpoint.<p>[0] <a href="https://arxiv.org/abs/1412.6980" rel="nofollow">https://arxiv.org/abs/1412.6980</a></p>
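<p>For reference, the core of the Adam update from [0] fits in a few lines. A minimal numpy sketch (function and argument names are mine, and this omits the paper's convergence analysis):</p><pre><code>import numpy as np

def adam_step(theta, g, m, v, t, alpha=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient g at step t >= 1."""
    m = b1 * m + (1 - b1) * g        # update biased first-moment estimate
    v = b2 * v + (1 - b2) * g * g    # update biased second-moment estimate
    m_hat = m / (1 - b1 ** t)        # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
</code></pre>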
]]></description><pubDate>Sat, 18 Oct 2025 06:40:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45625410</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=45625410</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45625410</guid></item><item><title><![CDATA[New comment by brosco in "Who Invented Backpropagation?"]]></title><description><![CDATA[
<p>There is indeed a lot of crossover, and a lot of neural networks can be written in state-space form. The optimal control problem should be equivalent to training the weights, as you mention.<p>However, from what I have seen, this isn't really a useful way of reframing the problem. The optimal control problem is at least as hard as, if not harder than, the original problem of training the neural network, and the latter has mature and performant software for doing it efficiently. That's not to say there isn't good software for optimal control, but it's a more general problem and therefore off-the-shelf solvers can't leverage the network structure very well.<p>Some researchers have made interesting theoretical connections, as in neural ODEs, but even there the practicality is limited.</p>
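<p>To spell out the analogy (my own toy formulation, not from any particular paper): a residual network is a controlled discrete-time system whose "controls" are the layer weights, so training is an optimal control problem with a terminal cost.</p><pre><code>import numpy as np

# State-space reading of a residual network:
#   x_{k+1} = x_k + f(x_k, theta_k),   k = 0, ..., K-1
# Choosing the controls theta_k to minimize a loss on x_K is the
# optimal-control formulation of training.

def layer(x, W, b):
    return np.tanh(W @ x + b)

def forward(x0, thetas):
    x = x0
    for W, b in thetas:            # each layer plays the role of one time step
        x = x + layer(x, W, b)     # residual update = explicit Euler step of an ODE
    return x
</code></pre>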
]]></description><pubDate>Mon, 18 Aug 2025 20:29:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44944944</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=44944944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44944944</guid></item><item><title><![CDATA[New comment by brosco in "A Lean companion to Analysis I"]]></title><description><![CDATA[
<p>Very cool. Analysis I was the first "real" math textbook that I (an engineer, not a mathematician) felt like I could completely follow and work through, after a few attempts to get through others like Rudin. Hopefully the Lean companion will help make it even more accessible to people who are familiar with math and programming and looking to learn things rigorously.</p>
]]></description><pubDate>Sun, 01 Jun 2025 01:56:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44148146</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=44148146</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44148146</guid></item><item><title><![CDATA[New comment by brosco in "I Don't Like NumPy"]]></title><description><![CDATA[
<p>Have you heard of JIT libraries like numba (<a href="https://github.com/numba/numba">https://github.com/numba/numba</a>)? It doesn't work for all Python code, but it can be helpful for the type of function you gave as an example. There's no need to rewrite anything; just add a decorator to the function. I don't really know how performance compares to C, for example.</p>
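<p>A minimal sketch of what that looks like (the loop body here is a made-up example, not the one from the article):</p><pre><code>import numpy as np
from numba import njit

@njit  # compiled to machine code on first call; later calls bypass the interpreter
def pairwise_dist(X):
    n = X.shape[0]
    D = np.empty((n, n))
    for i in range(n):             # explicit loops are fine under numba
        for j in range(n):
            D[i, j] = np.sqrt(np.sum((X[i] - X[j]) ** 2))
    return D

D = pairwise_dist(np.random.rand(100, 3))
</code></pre>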
]]></description><pubDate>Thu, 15 May 2025 19:05:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43998235</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=43998235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43998235</guid></item><item><title><![CDATA[New comment by brosco in "I don't like NumPy"]]></title><description><![CDATA[
<p>Compared to Matlab (and to some extent Julia), my complaints about numpy are summed up in these two paragraphs:<p>> Some functions have axes arguments. Some have different versions with different names. Some have Conventions. Some have Conventions and axes arguments. And some don’t provide any vectorized version at all.<p>> But the biggest flaw of NumPy is this: Say you create a function that solves some problem with arrays of some given shape. Now, how do you apply it to particular dimensions of some larger arrays? The answer is: You re-write your function from scratch in a much more complex way. The basic principle of programming is abstraction—solving simple problems and then using the solutions as building blocks for more complex problems. NumPy doesn’t let you do that.<p>Usually when I write Matlab code, the vectorized version just works, and if there are any changes needed, they're pretty minor and intuitive. With numpy I feel like I have to look up the documentation for every single function, transposing and reshaping the array into whatever shape that particular function expects. It's not very consistent.</p>
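<p>A small example of the kind of friction I mean (a made-up case, but representative): applying a function written for 1-D vectors along one axis of a larger array.</p><pre><code>import numpy as np

A = np.random.rand(4, 5, 6)

# Built-in reductions take an axis argument directly:
s = A.sum(axis=1)                          # shape (4, 6)

# But a function you wrote for a single 1-D vector...
def normalize(v):
    return v / np.linalg.norm(v)

# ...needs a helper to apply it along axis 1 (and apply_along_axis is slow):
B = np.apply_along_axis(normalize, 1, A)

# or a manual move-axis / compute / move-back dance:
C = np.moveaxis(A, 1, -1)
C = C / np.linalg.norm(C, axis=-1, keepdims=True)
C = np.moveaxis(C, -1, 1)
assert np.allclose(B, C)
</code></pre>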
]]></description><pubDate>Thu, 15 May 2025 18:54:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=43998126</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=43998126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43998126</guid></item><item><title><![CDATA[New comment by brosco in "Show HN: Single-Header Profiler for C++17"]]></title><description><![CDATA[
<p>This looks great! I've been needing something like this for a while, for a project which is quite compute-heavy and uses lots of threads and recursion. I've been using valgrind to profile small test examples, but that's definitely the nuclear option since it slows down the execution so much. I'm going to try this out right away.</p>
]]></description><pubDate>Mon, 14 Apr 2025 16:51:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=43683295</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=43683295</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43683295</guid></item><item><title><![CDATA[New comment by brosco in "Markov Chains Explained Visually (2014)"]]></title><description><![CDATA[
<p>That's a good observation, and it is indeed true for many Markov chains. But your counterexample of the identity matrix is not quite right; every vector is an eigenvector of the identity, so there is no "realignment" needed.<p>More generally speaking, you're asking when the iteration `x_+ = Ax` converges to a fixed point which is an eigenvector of A. This can happen a few different ways. The obvious way is that A has an eigenvector `v` with eigenvalue 1, and all other eigenvalues with magnitude < 1. Then those other components will die out with repeated application of A, leaving only `v` in the limit.<p>For Markov chains, we can get this exact property from the Perron-Frobenius theorem, which applies to non-negative irreducible matrices. Irreducible means that the transition graph of the Markov chain is strongly connected. If that's the case, then there is a unique eigenvector called the stationary distribution (with eigenvalue 1), and, provided the chain is also aperiodic, all initial conditions will converge to it. (Without aperiodicity the chain can cycle forever; think of a two-state chain that swaps states at every step.)<p>If A is not irreducible, you may have different connected components, and the stationary distribution may depend on which component your initial condition is in. Going back to the n x n identity matrix, it has n connected components (it's a completely disconnected graph with all the self-transition probabilities = 1). So every initial condition is stationary, because the state never changes.</p>
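<p>A quick numerical illustration of the irreducible (and aperiodic) case, with a toy chain of my own:</p><pre><code>import numpy as np

# Column-stochastic transition matrix of a strongly connected 3-state chain,
# so x_+ = A x maps distributions to distributions (columns sum to 1).
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

x = np.array([1.0, 0.0, 0.0])      # start deterministically in state 0
for _ in range(100):
    x = A @ x                      # repeated application of A

# x has converged to the stationary distribution: a fixed point of A.
assert np.allclose(A @ x, x)
print(x)
</code></pre>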
]]></description><pubDate>Fri, 28 Feb 2025 16:49:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=43207700</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=43207700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43207700</guid></item><item><title><![CDATA[New comment by brosco in "Challenging projects every programmer should try (2019)"]]></title><description><![CDATA[
<p>It's absolutely being explored. There is a lot of active research into using ML to learn solutions of PDEs (Navier-Stokes in this case). It's not my field so I don't know much about the specifics.<p>The works that I've read train an NN on numerical solutions for different geometries and boundary conditions. Then they try to infer the solutions for configurations outside the training set, which should be much faster than recomputing the numerical solution.</p>
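<p>In its simplest form, that surrogate-model workflow looks something like the sketch below. This is schematic: I'm standing in a generic regressor for the actual architectures used, and <code>params</code>/<code>fields</code> are placeholders for whatever parametrizes the geometry/BCs and the discretized solutions.</p><pre><code>import numpy as np
from sklearn.neural_network import MLPRegressor

# Pretend training data produced offline by a conventional PDE solver:
#   params: geometry / boundary-condition parameters, one row per case
#   fields: corresponding discretized solutions, flattened to vectors
params = np.random.rand(200, 4)      # placeholder inputs
fields = np.random.rand(200, 500)    # placeholder solver outputs

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000)
model.fit(params, fields)            # train on the numerical solutions

new_case = np.random.rand(1, 4)
field_hat = model.predict(new_case)  # fast inference, no PDE solve needed
</code></pre>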
]]></description><pubDate>Tue, 26 Dec 2023 20:05:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=38775517</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=38775517</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38775517</guid></item><item><title><![CDATA[New comment by brosco in "Challenging projects every programmer should try (2019)"]]></title><description><![CDATA[
<p>If you're interested in learning more about aerodynamics, I would highly suggest starting with a bit of classical aerodynamics. It won't be software-oriented, since most of the theory deals with approximating very complicated behavior with simple analytical models.<p>It could be interesting to do a comparison with finite volume methods to see when/how those approximations break down.</p>
]]></description><pubDate>Tue, 26 Dec 2023 08:42:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=38769967</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=38769967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38769967</guid></item><item><title><![CDATA[New comment by brosco in "Can a transformer represent a Kalman filter?"]]></title><description><![CDATA[
<p>Thanks for clarifying the motivation, that makes a lot of sense.</p>
]]></description><pubDate>Thu, 14 Dec 2023 03:00:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=38637271</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=38637271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38637271</guid></item><item><title><![CDATA[New comment by brosco in "Can a transformer represent a Kalman filter?"]]></title><description><![CDATA[
<p>I guess this will probably come up in the reviews, but the presentation of the Kalman filter is lacking. I know it's not the point of the paper, but getting these details wrong in a paper about Kalman filters is not encouraging.<p>The statement that the Kalman filter is mean-square optimal because it generates a correct estimate in expectation is false. In fact, any gain L will generate an estimate whose expected value is x_k, as long as w_k and v_k are zero mean (and the initial estimate is unbiased). The Kalman gain is a specific choice of L that is mean-square optimal among linear estimators; only when the disturbances are Gaussian is it optimal over all estimators. The Kalman gain is also time-varying and depends on the evolution of the estimate covariance, although it will converge to a steady-state value.<p>What's being described here is more properly called a Luenberger observer, but I guess that name doesn't get the same recognition outside the control community.<p>I'm also wondering why they chose to include H past estimates and measurements in the transformer. They're already embedding the Kalman gain into the weights of the transformer, so taking just one past estimate/measurement should exactly recover the Kalman filter. Going further into the past just makes the estimate worse, because of the softmax.</p>
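<p>For concreteness, here is the time-varying recursion I mean, in standard textbook form (my notation; inputs omitted). The gain is recomputed from the covariance at every step, which a single constant gain baked into the weights can't reproduce during the transient:</p><pre><code>import numpy as np

def kalman_step(x_hat, P, y, A, C, Q, R):
    """One predict/update cycle; the gain K is recomputed from P each step."""
    # Predict
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Update: the gain depends on the current covariance
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_hat = x_pred + K @ (y - C @ x_pred)
    P = (np.eye(P.shape[0]) - K @ C) @ P_pred
    return x_hat, P

# A Luenberger observer is the same correction with a fixed gain L:
#   x_hat = A @ x_hat + L @ (y - C @ A @ x_hat)
</code></pre>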
]]></description><pubDate>Thu, 14 Dec 2023 01:27:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=38636682</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=38636682</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38636682</guid></item><item><title><![CDATA[New comment by brosco in "Ask HN: What are you passionate about at the moment?"]]></title><description><![CDATA[
<p>What? If you start light like Wendler recommends, the program is completely manageable. In fact, most people I know think there is too little training volume at first. I used it for several months in a row a few years ago and it led to great strength gains in every lift.</p>
]]></description><pubDate>Tue, 07 Nov 2023 02:34:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=38172471</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=38172471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38172471</guid></item><item><title><![CDATA[New comment by brosco in "Framework 13 AMD 7040 Series: A Developer's First Impressions"]]></title><description><![CDATA[
<p>I've had my Framework 12 for over a year now, so maybe I can give some perspective. I had a lot of similar issues (mostly with wifi) running Fedora when I first got it. I also felt like I wasted a lot of time getting it set up and fixing little bugs here and there.<p>But I'm happy to say that after the first two weeks or so, it's been rock solid. The issues I had with wifi were patched pretty quickly, and everything else pretty much just works with the default configuration. The only thing is that battery life is still bad, partly because I have way too much RAM...</p>
]]></description><pubDate>Fri, 13 Oct 2023 02:01:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=37865675</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=37865675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37865675</guid></item><item><title><![CDATA[New comment by brosco in "Chess and solution pool with linear programming (2018)"]]></title><description><![CDATA[
<p>More specifically, using mixed integer linear programming.<p>I've never seen an MILP used this way, to characterize the entire feasible set (or "solution pool"). Is this one of the fastest ways to do so? The usual branch-and-bound pruning doesn't help here, since the solver has to enumerate every feasible solution rather than just find one optimum.<p>The CPLEX docs (<a href="https://www.ibm.com/docs/en/icos/22.1.1?topic=solutions-how-enumerate-all" rel="nofollow noreferrer">https://www.ibm.com/docs/en/icos/22.1.1?topic=solutions-how-...</a>) mention the potential slowness and also the numerical issues the author faces in the article.</p>
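<p>The brute-force version of a solution pool, for binary programs, is the classic "no-good cut" loop: solve, record the solution, add a constraint excluding exactly that 0/1 assignment, and repeat until infeasible. A minimal sketch (using PuLP with its default CBC solver, which is my choice, not the article's or CPLEX's):</p><pre><code>from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value, LpStatusOptimal

# Tiny feasibility problem: enumerate every binary x with x0 + x1 + x2 >= 2.
prob = LpProblem("enumerate", LpMinimize)
xs = [LpVariable(f"x{i}", cat="Binary") for i in range(3)]
prob += 0 * xs[0]                  # dummy objective: we only care about feasibility
prob += lpSum(xs) >= 2

solutions = []
while prob.solve() == LpStatusOptimal:
    sol = [int(value(x)) for x in xs]
    solutions.append(sol)
    # "No-good" cut: force at least one variable away from this assignment
    prob += (lpSum(x for x, s in zip(xs, sol) if s == 0)
             + lpSum(1 - x for x, s in zip(xs, sol) if s == 1)) >= 1

print(solutions)                   # the four assignments with two or more ones
</code></pre>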
]]></description><pubDate>Mon, 02 Oct 2023 19:57:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=37743633</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=37743633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37743633</guid></item><item><title><![CDATA[New comment by brosco in "Ask HN: What's the best book you read in 2021?"]]></title><description><![CDATA[
<p>My favorite of the year was Anna Karenina. If you like Brothers Karamazov it would be right up your alley.</p>
]]></description><pubDate>Fri, 24 Dec 2021 16:25:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=29674857</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=29674857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29674857</guid></item><item><title><![CDATA[New comment by brosco in "Ask HN: Literature for mathematical optimization?"]]></title><description><![CDATA[
<p>In addition to Boyd and Vandenberghe, I like "Lectures on Modern Convex Optimization" by Ben-Tal and Nemirovski. Particularly the section comparing linear and conic optimization problems.</p>
]]></description><pubDate>Sun, 04 Jul 2021 18:09:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=27731512</link><dc:creator>brosco</dc:creator><comments>https://news.ycombinator.com/item?id=27731512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27731512</guid></item></channel></rss>