<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tel</title><link>https://news.ycombinator.com/user?id=tel</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 09:24:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tel" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tel in "Gaussian Splatting – A$AP Rocky "Helicopter" music video"]]></title><description><![CDATA[
<p>I think, yes, with greater splat density (and, critically, more and better inputs to train on; others have stated that these performances were captured with 56 RealSense D455fs), splats will more accurately estimate light at more angles and distances. I think it's likely that during capture they had to make some choices about lighting and bake those in, so you might still run into issues matching lighting to your shots, but still.<p><a href="https://www.realsenseai.com/products/real-sense-depth-camera-d455f/" rel="nofollow">https://www.realsenseai.com/products/real-sense-depth-camera...</a><p>That said, I don't think splats are to voxels what pixels are to vector graphics. A closer analogy might be that pixels are to vector graphics as voxels are to 3d mesh modeling. You might imagine a sophisticated animated character being created and then animated using motion-capture techniques.<p>But notice where these things fall apart, too. SVG shines when it's not just estimating the true form but literally is it (fonts, simplified graphics made from simple strokes). If you try to approximate a photo with SVG, it tends to get messy. Similar problems arise when reconstructing a 3d mesh from real-world data.<p>I agree that splats are a bit like pixels, though. They're samples of color and light in 3d space, as pixels are in 2d. They represent the source more faithfully when they're more densely sampled.<p>The difference is that a splat is sampled irregularly, just where it's needed within the scene. That makes it more efficient at representing most useful 3d scenes (i.e., ones with a few subjects and objects in mostly empty space). It spends data only where that data has an impact.</p>
]]></description><pubDate>Mon, 19 Jan 2026 11:25:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46677754</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=46677754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46677754</guid></item><item><title><![CDATA[New comment by tel in "Gaussian Splatting – A$AP Rocky "Helicopter" music video"]]></title><description><![CDATA[
<p>Gaussian splatting is a way to record 3-dimensional video. You capture a scene from many angles simultaneously and then combine all of those into a single representation. Ideally, that representation is good enough that you can then, in post-production, simulate camera angles you didn't originally record.<p>For example, the camera orbits around the performers in this music video would be difficult to achieve in real space. Even if you could pull it off using robotic motion-control arms, it would require that the entire choreography be fixed in place before filming. This video clearly takes advantage of being able to direct whatever camera motion the artist wanted in the 3d virtual space of the final composed scene.<p>To do this, the representation needs to estimate the radiance field, i.e. the amount and color of light visible at every point in your 3d volume, viewed from every angle. It's not practical to do this at high resolution by breaking the space up into voxels; voxel grids scale badly, O(n^3) in resolution. You could attempt to guess at some mesh geometry and paint textures onto it that are compatible with the camera views, but that's difficult to automate.<p>Gaussian splatting estimates these radiance fields by assuming that the radiance is built from millions of fuzzy, colored balls positioned, stretched, and rotated in space. These are the Gaussian splats.<p>Once you have that representation, constructing a novel camera angle is as simple as positioning and angling your virtual camera and then recording the colors and positions of all the splats that are visible.<p>It turns out that this approach is quite amenable to techniques similar to modern deep learning: you basically train the positions/shapes/rotations of the splats via gradient descent. It's mostly been explored in research labs, but lately production-oriented tools have been built for popular 3d motion-graphics packages like Houdini, making it more widely available.</p>
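To make "fuzzy, colored balls" concrete, here is a toy Python sketch (my own illustration, not the real pipeline: actual implementations use anisotropic 3d Gaussians, view-dependent color, and a differentiable rasterizer). It treats each splat as an isotropic 2d Gaussian already projected to the screen and composites them front to back:

```python
import math

def splat_weight(px, py, cx, cy, sigma):
    """Gaussian falloff of a splat centered at (cx, cy), evaluated at (px, py)."""
    d2 = (px - cx) ** 2 + (py - cy) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def composite(pixel, splats):
    """Front-to-back alpha compositing of splats sorted nearest-first.

    Each splat is (cx, cy, sigma, opacity, color); color is a scalar here
    just to keep the sketch short.
    """
    color, transmittance = 0.0, 1.0
    for cx, cy, sigma, opacity, c in splats:
        alpha = opacity * splat_weight(pixel[0], pixel[1], cx, cy, sigma)
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color
```

Training then amounts to nudging each splat's parameters by gradient descent until images rendered this way match the captured views.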
]]></description><pubDate>Sun, 18 Jan 2026 18:49:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46670863</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=46670863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46670863</guid></item><item><title><![CDATA[New comment by tel in "Org Mode syntax is one of the most reasonable markup languages for text (2017)"]]></title><description><![CDATA[
<p>Definitely not common! Nice to hear I'm not alone either.<p>And yeah, I agree. Practically, it's the thing that annoys me the most day-to-day. I've mostly got wrapping set up to handle it now, but it remains a little finicky.</p>
]]></description><pubDate>Sat, 10 Jan 2026 16:55:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46567369</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=46567369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46567369</guid></item><item><title><![CDATA[New comment by tel in "Org Mode syntax is one of the most reasonable markup languages for text (2017)"]]></title><description><![CDATA[
<p>I've recently begun replacing Markdown with Gemini's .gmi/gemtext format. It is Markdown with fewer features. I appreciate the simplicity, and it's tremendously easy for custom tools to parse.<p>It has no inline formatting, only 3 levels of ATX headers (without trailing #s), one level of bullet points using only asterisk and not dash as the delimiter, does not merge adjacent non-blank lines (thus expecting one line per paragraph), and supports only triple-backtick fenced preformatted text areas that just flip on and off.<p>Maybe the biggest change is that links necessarily appear on their own line, preceded by a `=>` and optionally followed by alt-text.<p>My gemtext parser is maybe 70 lines, and it is arguably 95% of what one needs from Markdown.</p>
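To show how little parsing is involved, here's a sketch in Python of the line-oriented scheme described above (an illustration, not my actual parser; node names are my own):

```python
def parse_gemtext(text):
    """Parse gemtext into a flat list of (kind, payload) nodes."""
    FENCE = "`" * 3  # triple backtick, spelled out to avoid a literal fence here
    nodes, pre = [], False
    for line in text.splitlines():
        if line.startswith(FENCE):
            pre = not pre                       # fences just toggle mode
        elif pre:
            nodes.append(("pre", line))         # verbatim inside fences
        elif line.startswith("=>"):
            parts = line[2:].strip().split(maxsplit=1)
            url = parts[0] if parts else ""
            label = parts[1] if len(parts) > 1 else url
            nodes.append(("link", (url, label)))
        elif line.startswith("###"):
            nodes.append(("h3", line[3:].strip()))
        elif line.startswith("##"):
            nodes.append(("h2", line[2:].strip()))
        elif line.startswith("#"):
            nodes.append(("h1", line[1:].strip()))
        elif line.startswith("* "):
            nodes.append(("li", line[2:]))
        elif line:
            nodes.append(("p", line))
    return nodes
```

Every line type is decided by its first few characters, so there's no backtracking and no inline grammar at all.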
]]></description><pubDate>Sat, 10 Jan 2026 15:10:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46566325</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=46566325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46566325</guid></item><item><title><![CDATA[New comment by tel in "Shaders: How to draw high fidelity graphics with just x and y coordinates"]]></title><description><![CDATA[
<p>SDFs still scale with geometry complexity, though: it costs instructions to evaluate each SDF component. You could still use something like a BVH (or Matt Keeter’s interval-arithmetic trick) to speed things up.</p>
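A small sketch of why (Python, names my own): the naive scene SDF is a min over every component, so each sample pays for all the geometry, which is exactly the cost a BVH or interval pruning cuts down.

```python
import math

def circle_sdf(cx, cy, r):
    """Signed distance to a circle: negative inside, zero on the boundary."""
    return lambda x, y: math.hypot(x - cx, y - cy) - r

def scene_sdf(shapes):
    """Naive union of SDF components: every evaluation touches every shape."""
    return lambda x, y: min(s(x, y) for s in shapes)

# Two circles; each query below evaluates both of them.
scene = scene_sdf([circle_sdf(0.0, 0.0, 1.0), circle_sdf(3.0, 0.0, 1.0)])
```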
]]></description><pubDate>Sun, 23 Nov 2025 20:24:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46027007</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=46027007</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46027007</guid></item><item><title><![CDATA[New comment by tel in "Boring is what we wanted"]]></title><description><![CDATA[
<p>Genuine question, how does SPIR-V compare with CUDA? Why is SPIR-V in a trench coat less desirable? What is it about Metal that makes it SPIR-V in a trench coat (assuming that's what you meant)?</p>
]]></description><pubDate>Tue, 28 Oct 2025 21:07:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45739204</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=45739204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45739204</guid></item><item><title><![CDATA[New comment by tel in "I'm too dumb for Zig's new IO interface"]]></title><description><![CDATA[
<p>At the same time, if you want to use Claude to read the source and narrate how it works to you that’s trivial to do as a user.</p>
]]></description><pubDate>Sat, 23 Aug 2025 13:55:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44996012</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44996012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44996012</guid></item><item><title><![CDATA[New comment by tel in "Functions Are Vectors (2023)"]]></title><description><![CDATA[
<p>If you're familiar with Zorn's Lemma, the construction is to order the linearly independent subsets by inclusion. Every chain in that order has an upper bound, namely the union of its members, which is still linearly independent (any finite dependence would already appear in some member of the chain). Zorn's Lemma then gives a maximal linearly independent set, and if an element existed outside that set's span, adjoining it would yield a strictly larger independent set, contradicting maximality.</p>
]]></description><pubDate>Mon, 07 Jul 2025 14:32:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44490787</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44490787</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44490787</guid></item><item><title><![CDATA[New comment by tel in "A list is a monad"]]></title><description><![CDATA[
<p>Yeah, that's correct. You also often see it phrased as: for any method `X -> T<Y>` there's a corresponding method `T<X> -> T<Y>`. Or: for any two arrows `X -> T<Y>` and `Y -> T<Z>` there's a composed arrow `X -> T<Z>`. All of these formulations are equivalent.</p>
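A sketch in Python with lists standing in for T (my own illustrative names): `bind` is the `X -> T<Y>` to `T<X> -> T<Y>` form, and Kleisli composition falls straight out of it.

```python
def bind(f):
    """Lift f: X -> list[Y] into a function list[X] -> list[Y]."""
    return lambda xs: [y for x in xs for y in f(x)]

def kleisli(f, g):
    """Compose f: X -> list[Y] and g: Y -> list[Z] into X -> list[Z]."""
    return lambda x: bind(g)(f(x))

# Two arbitrary arrows into the monad:
f = lambda n: [n, n + 1]   # X -> T<Y>
g = lambda n: [n * 10]     # Y -> T<Z>
```

For example, `bind(f)([1, 2])` gives `[1, 2, 2, 3]` and `kleisli(f, g)(1)` gives `[10, 20]`.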
]]></description><pubDate>Thu, 03 Jul 2025 18:38:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44457984</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44457984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44457984</guid></item><item><title><![CDATA[New comment by tel in "A list is a monad"]]></title><description><![CDATA[
<p>Every monad is also an applicative, and liftA2 is the same operation as liftM2. The only reason both exist is that Monad was popularized in Haskell earlier than Applicative and thus didn't have it as a superclass until the Functor-Applicative-Monad Proposal was implemented (in GHC 7.10). The change was obviously correct, but it was a major breaking change that also got pork-barreled a bit and so took a while to land.</p>
]]></description><pubDate>Wed, 02 Jul 2025 23:43:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44450039</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44450039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44450039</guid></item><item><title><![CDATA[New comment by tel in "A list is a monad"]]></title><description><![CDATA[
<p>Monad tutorials are on the rise again.<p>Let's start with function composition. We know that for any two types A and B we can consider functions from A to B, written A -> B. We can also <i>compose</i> them; composition is the heart of sequentiality. If f: A -> B and g: B -> C then we might write (f;g) or (g . f) as two different, equivalent syntaxes for doing one thing and then the other: f and then g.<p>I'll posit this is an extremely fundamental idea of "sequence". Sure, something like [a, b, c] is also a sequence, but (f;g) really shows us the idea of piping, of one <i>operation</i> following another. This is because composition is only defined for things with compatible input and output types. It's a little implicit promise that we're feeding the output of f into g, not just putting them side by side on the shelf to admire.<p>Anyway, we characterize composition in two ways. First, composition <i>only</i> cares about the order in which the pipes are plugged together, not how you group them. Specifically, for three functions f: A->B, g: B->C, h: C->D, we have (f;g);h = f;(g;h). The parentheses don't matter.<p>Second, we know that for any type A there's the "do nothing" identity function id_A: A->A. This doesn't <i>have</i> to exist, but it does and it's useful. It helps characterize composition again by saying that f;id = id;f = f. If you're playing along by metaphor to lists, id is the empty list.<p>Together, composition and identity, with the rules of associativity (parentheses don't matter) and the way identity can be omitted, really show what "sequences of pipes" means. This is a super popular structure (technically, a category), and whenever you see it you can build a strong intuition that some kind of sequencing might be happening.<p>Now, let's consider a slightly different sort of function. Given any two types, what about the functions A -> F B for some fixed type constructor F?
F here exists to somehow "modulate" B, annotate it with additional meaning. Having a value of F B is kind of like having a value of type B, but maybe seen through some kind of lens.<p>Presumably, we care about that particular sort of lens, and you can go look up dozens of useful choices of F later, but for now we can just focus on how functions A -> F B still look like little machines that we might want to pipe together. Maybe we'd like there to be composition and identity here as well.<p>It should be obvious that we can't use identity or composition from normal function spaces. They don't type-check (id_A: A -> A, not A -> F A) and they don't semantically make sense (we don't offhand have a way to get Bs out of an F B, which would be the obvious way to "pipe" the result onward in composition).<p>But let's say that for <i>some</i> type constructors F, they did make sense. We'd have for any type A a function pure_A: A -> F A as well as a kind of composition such that f: A -> F B and g: B -> F C become f >=> g : A -> F C. These operations might only exist for <i>some</i> kinds of F, but whenever they do exist we'd again capture this very primal form of sequencing that we had with functions above.<p>We'd again capture the idea of little A -> F B machines which can be plugged into one another as long as their input and output types align, and built into larger and larger sequences of piped machines. It's a very pleasant kind of structure, easy to work with.<p>And those F which support these operations (and follow the associativity and identity rules) are exactly the things we call monads. They're type constructors which allow for sequential piping, very similar to how we can compose normal functions.</p>
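As a concrete, hedged illustration in Python, take F = list: pure wraps a value, >=> pipes one list-producing machine into the next, and the identity and associativity rules can be checked directly (all names below are mine):

```python
def pure(x):
    """pure_A: wrap a plain value into the monad (here, a one-element list)."""
    return [x]

def kleisli(f, g):
    """(f >=> g)(x): run f, then feed each result through g, concatenating."""
    return lambda x: [z for y in f(x) for z in g(y)]

# Three little A -> F B machines:
f = lambda n: [n, -n]
g = lambda n: [n * 2]
h = lambda n: [n + 1]

# Identity: pure is the unit, f >=> pure = pure >=> f = f.
assert kleisli(pure, f)(3) == f(3) == kleisli(f, pure)(3)
# Associativity: (f >=> g) >=> h = f >=> (g >=> h).
assert kleisli(kleisli(f, g), h)(3) == kleisli(f, kleisli(g, h))(3)
```

The same two checks are exactly the monad laws; any F whose pure and >=> pass them is a monad.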
]]></description><pubDate>Wed, 02 Jul 2025 22:26:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44449498</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44449498</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44449498</guid></item><item><title><![CDATA[New comment by tel in "A list is a monad"]]></title><description><![CDATA[
<p>The more constrained your theory is, the fewer models you have of it and the more structure you can exploit.<p>Monads, I think, offer enough structure that we can exploit things like monad composition (as fraught as it is), monadic do/for syntax, and abstracted "traversals" (over data structures most concretely, but also other sorts of traversals) with monadic accumulators.<p>There's at least one other practical advantage as well: "chunking".<p>A chess master is more capable than an amateur of quickly memorizing realistic board states (and equally good at memorizing randomized ones). When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly. People familiar with monads can often hand-wave away a set of unknowns in a problem by recognizing it as a monad-shaped problem that can be independently solved later.</p>
]]></description><pubDate>Wed, 02 Jul 2025 18:43:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44447360</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44447360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44447360</guid></item><item><title><![CDATA[New comment by tel in "Proofs Without Words"]]></title><description><![CDATA[
<p>I’m not a huge fan of these, but this time I noticed that the best ones feel a lot like naturality arguments. As in, moving structural bits in a way that makes it clear that we’re not touching anything that ought to be universally quantifiable.<p>I still don’t love this sort of thing being presented as “proof”, but I thought that idea is interesting. Is there a way to formalize naturality into technical diagrams? Probably!</p>
]]></description><pubDate>Wed, 18 Jun 2025 15:40:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44310854</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44310854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44310854</guid></item><item><title><![CDATA[New comment by tel in "100 years of Zermelo's axiom of choice: What was the problem with it? (2006)"]]></title><description><![CDATA[
<p>Often it's easy to construct a family of sets representing something of interest. For example, we like to define integration initially as a finite process: break the integrand's domain into pieces, compute each piece's contribution, and sum.<p>To compute the contribution of the piece indexed i, we measure the size of its domain, call it the area Ai, and then evaluate the integrand f at some point xi within that domain; the contribution is Ai * f(xi).<p>Summing these across i produces a finite approximation of the integral. Then we take a limit on this process, breaking the domain into larger and larger families of sets with smaller and smaller areas. At the limit, we have the integral.<p>This process seems intuitive, but it contains an application of the axiom of choice: in the limit, we have an infinite number of subsets of our domain and we still have to pick a representative xi from each one at which to evaluate the integrand.<p>It's quite obvious how to pick an arbitrary representative from each set in a finite family of sets: you just go through one by one, picking an element.<p>But this argument breaks down for an infinite family. Going one by one will never complete. We need to be able to select these representative xi "all at once", and the Axiom of Choice asserts that this is possible.<p>(Note: I'm being fast and loose, but the nature of the argument is correct. This doesn't prove integration demands AoC or anything like that, just shows how this one sketch of an argument would. Specifically, integration normally avoids AoC because we can constructively specify our choice function, for example by picking the lexicographically smallest point within each axis-aligned rectangular cell. Generalize to something like Monte Carlo integration, however...)</p>
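The finite stage of that process fits in a few lines of Python, with the choice of representatives made explicit as a `choose` argument (illustrative names; in the finite case the choice is unproblematic):

```python
def riemann_sum(f, partition, choose):
    """Finite approximation of an integral: sum (cell width) * f(representative)
    over each cell, where `choose` picks the representative point per cell."""
    total = 0.0
    for left, right in zip(partition, partition[1:]):
        total += (right - left) * f(choose(left, right))
    return total

# Integrate f(x) = x^2 over [0, 1] with 100 cells, midpoint representatives.
partition = [i / 100 for i in range(101)]
approx = riemann_sum(lambda x: x * x, partition, lambda a, b: (a + b) / 2)
```

AoC only enters when the family of cells becomes infinite and no explicit rule like the midpoint above is available.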
]]></description><pubDate>Sat, 14 Jun 2025 00:04:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44273297</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44273297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44273297</guid></item><item><title><![CDATA[New comment by tel in "What does “Undecidable” mean, anyway"]]></title><description><![CDATA[
<p>The quantification over T is still kind of weird, though. A formulation like `for all T, (T and P consistent and T and neg P consistent)` is trivially false: just take `T = {neg P}` and now `{P, neg P}` is inconsistent.<p>We're never trying to show P is independent of all theories, just of some specific one.</p>
]]></description><pubDate>Sat, 31 May 2025 21:39:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44147094</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44147094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44147094</guid></item><item><title><![CDATA[New comment by tel in "What does “Undecidable” mean, anyway"]]></title><description><![CDATA[
<p>Yeah, I agree. "Independence" is fundamentally a property of the formal system you're working within (or really, it's a property of the system you're using <i>and</i> of the axiomatic system under test, the system a proposition would be independent from). I'm holding out a bit to unify that with "undecidability" because undecidability takes on a particular character in constructive systems that happens to align with Turing's notion.<p>So at some level, this was just an acknowledgement that "undecidability" in this form is well represented in formal logic. In that sense, at least in constructive logics, it's not just a synonym for "independence".</p>
]]></description><pubDate>Thu, 29 May 2025 16:00:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44127363</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44127363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44127363</guid></item><item><title><![CDATA[New comment by tel in "What does “Undecidable” mean, anyway"]]></title><description><![CDATA[
<p>Quantifying over T is probably not going to work. In informal terms that reads like "No logic exists where P is independent", which probably wasn't quite what you wanted, but also we can trivially disprove that with T = {}. As long as P is self-consistent, then "not P" should be too.<p>We're interested in a proposition's status with respect to some theory that we enjoy (i.e. Zermelo–Fraenkel set theory).</p>
]]></description><pubDate>Thu, 29 May 2025 14:42:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44126603</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44126603</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44126603</guid></item><item><title><![CDATA[New comment by tel in "What does “Undecidable” mean, anyway"]]></title><description><![CDATA[
<p>Independent and undecidable aren't quite the same, even in formal logic. Or rather, sometimes they are, but it’s worth being specific.<p>A proposition P being independent of a theory T means that both (T and P) and (T and not P) are consistent. T has nothing to say about P. This may very well be what Gödel was indicating in his paper.<p>On the other hand, undecidable has a sharper meaning in computational contexts as well as in constructive logics without excluded middle. In those settings we can comprehend the “reachability” of propositions: a proposition is not simply true or false, but may instead be “constructively true”, “constructively false”, or “undecidable”.<p>So in a formal logic without excluded middle we have a new, more specific way of discussing undecidability. And this turns out to correspond to the computational idea, too.</p>
]]></description><pubDate>Wed, 28 May 2025 23:35:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44121559</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44121559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44121559</guid></item><item><title><![CDATA[New comment by tel in "What does “Undecidable” mean, anyway"]]></title><description><![CDATA[
<p>I really like TAPL but would recommend Harper’s Practical Foundations of Programming Languages (PFPL) first (though skip the first 2 chapters I think?).<p><a href="https://www.cs.cmu.edu/~rwh/pfpl.html" rel="nofollow">https://www.cs.cmu.edu/~rwh/pfpl.html</a><p>It’s far more directed than TAPL, so I think it’s easier to read from start to finish. TAPL feels better as a reference.</p>
]]></description><pubDate>Wed, 28 May 2025 23:15:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44121453</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44121453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44121453</guid></item><item><title><![CDATA[New comment by tel in "Why Algebraic Effects?"]]></title><description><![CDATA[
<p>Sorry, Haskell’s “monad transformer library” (mtl). One of the earliest approaches to composing multiple monadic effects. It’s pretty similar to an algebraic effect system, letting you write effectful computations with types like `(Error m, Nondet m, WithState App m) => m ()` to indicate a computation that returns nothing but must be executed with access to error handling, nondeterminism, and the App type as state.<p>There are a few drawbacks to it, but it is a pretty simple way to get 80% of the ergonomics of algebraic effects (in Haskell).</p>
]]></description><pubDate>Sat, 24 May 2025 21:19:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44083867</link><dc:creator>tel</dc:creator><comments>https://news.ycombinator.com/item?id=44083867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44083867</guid></item></channel></rss>