<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: justin_</title><link>https://news.ycombinator.com/user?id=justin_</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 11:56:02 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=justin_" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by justin_ in "How to create an OS from scratch"]]></title><description><![CDATA[
<p>The program in the boot sector is run like other software: the code is loaded into memory, and then the processor jumps to its first instruction. This loading and jumping is done by the firmware, which ships with your hardware, separate from your disks.<p>Let's back up to the start. When you switch on a computer, the power rails on a bunch of the chips come up. As this happens, the chips enter a "reset" state with initial data baked into the circuitry; this is the job of the power-on reset (PoR) circuitry. Once the power is up and stable, the reset ends and the processor starts executing. The initial value of the program counter / instruction pointer is called the reset vector, and this is where software execution begins. On an x86 PC, this is something like 0xFFFFFFF0. The system's memory controller is configured so that reads from this address go NOT to main RAM, but to a flash memory chip on the board that holds the firmware. From there the firmware finds your bootable disks, loads the bootloader, and passes control to it.<p>In practice, systems vary wildly.</p>
]]></description><pubDate>Tue, 30 Sep 2025 07:23:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45422825</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=45422825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45422825</guid></item><item><title><![CDATA[New comment by justin_ in "Pixel is a unit of length and area"]]></title><description><![CDATA[
<p>> Light is integrated over a finite area to form a single color sample. During Bayer mosaicking, contributions from neighbouring pixels are integrated to form samples of complementary color channels.<p>Integrated into a single color sample indeed. After all the integration, mosaicking, and filtering, a single sample is calculated. That’s the pixel. I think that’s where the confusion is coming from. To Smith, the “pixel” is the sample that lives in the computer.<p>The actual realization of the image sensors and their filters is not encoded in a typical image format, nor used in typical <i>high level</i> image processing pipelines. For abstract representations of images, the “pixel” abstraction is used.<p>The initial reply to this chain focused on how camera sensors capture information about light, and yes, those sensors take up space and operate over time. But the pixel, the piece of data in the computer, is just a point among many.</p>
]]></description><pubDate>Wed, 23 Apr 2025 13:49:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43772195</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=43772195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43772195</guid></item><item><title><![CDATA[New comment by justin_ in "A Pixel Is Not a Little Square (1995) [pdf]"]]></title><description><![CDATA[
<p>> Usually people choose nearest neighbor in scenarios like that to be faithful to the original<p>Perhaps I should have chosen a higher resolution. AIUI, in many modern systems, such as your OS, it’s usually bilinear or Lanczos resampling.<p>You say that the resize should be faithful to the “100x100 display”, but we don’t know whether the image came from such a display, from a camera, or was generated by software.<p>> I'm almost certain this process didn't preserve sharp pixels<p>Sure, but modern image processing pipelines work the same way. They aim to capture the original signal, ideally as a representation of the <i>continuous</i> signal, not just a grid of squares.<p>I suppose this is different for a “pixel art” situation, where resampling has to be explicitly set to nearest neighbor. Even so, images like that have problems in modern video codecs, which model samples of a continuous signal.<p>And yes, I am aware that the “pixel” in “pixel art” means a little square :). The terminology being overloaded is what makes these discussions so confusing.</p>
]]></description><pubDate>Wed, 23 Apr 2025 12:01:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43771103</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=43771103</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43771103</guid></item><item><title><![CDATA[New comment by justin_ in "Pixel is a unit of length and area"]]></title><description><![CDATA[
<p>> The camera integrates incoming light into a tiny square [...] giving a brightness (and with the Bayer filter in front of the sensor, a color) for the pixel<p>This is where I was trying to go. The pixel, the result at the end of all that, is the single value (which may be a color with multiple components, sure). The physical reality of the sensor having an area and generating a charge is not relevant to the signal processing that happens after that. Smith is saying that this sample is best understood as a point, rather than a rectangle. That view makes sense for Smith, who was working on image processing within software, unrelated to displays and sensors.</p>
]]></description><pubDate>Wed, 23 Apr 2025 10:56:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43770639</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=43770639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43770639</guid></item><item><title><![CDATA[New comment by justin_ in "A Pixel Is Not a Little Square (1995) [pdf]"]]></title><description><![CDATA[
<p>> Audio samples are point samples (usually). This is nice, because there's a whole theory on how to upsample point samples without loss of information.<p>This signal processing applies to images as well. Resampling is very often used for upscaling, for example: <a href="https://en.wikipedia.org/wiki/Lanczos_resampling" rel="nofollow">https://en.wikipedia.org/wiki/Lanczos_resampling</a><p>> It was already wrong in 1995 when monitors were CRTs, and it's way wrong in 2025 in the LCD/OLED era where pixels are truly discrete.<p>I don't think it has anything to do with display technologies though. Imagine this: there is a computer that is dedicated to image processing. It has no display, no CRT, no LCD, nothing. The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?<p>Alvy Ray Smith, the author of this paper, was coming from the background of developing Renderman for Pixar. In that case, there were render farms doing all sorts of graphics processing before the final image was displayed anywhere.</p>
]]></description><pubDate>Wed, 23 Apr 2025 10:43:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=43770555</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=43770555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43770555</guid></item><item><title><![CDATA[New comment by justin_ in "Pixel is a unit of length and area"]]></title><description><![CDATA[
<p>> A camera does not take point sample snapshots, it integrates lightfall over little rectangular areas.<p>Integrates this information into what? :)<p>> A modern display does not reconstruct an image the way a DAC reconstructs sounds<p>Sure, but some software may apply resampling over the original signal for the purposes of upscaling, for example. "Pixels as samples" makes more sense in that context.<p>> It is pretty reasonable in the modern day to say that an idealized pixel is a little square.<p>I do agree with this actually. A "pixel" in popular terminology is a rectangular subdivision of an image, leading us right back to TFA. The term "pixel art" makes sense with this definition.<p>Perhaps we need better names for these things. Is the "pixel" the name for the sample, or is it the name of the square-ish thing that you reconstruct from image data when you're ready to send to a display?</p>
]]></description><pubDate>Wed, 23 Apr 2025 10:22:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43770452</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=43770452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43770452</guid></item><item><title><![CDATA[New comment by justin_ in "Pixel is a unit of length and area"]]></title><description><![CDATA[
<p>> A Pixel Is Not A Little Square!<p>> This is an issue that strikes right at the root of correct image (sprite) computing and the ability to correctly integrate (converge) the discrete and the continuous. The little square model is simply incorrect. It harms. It gets in the way. If you find yourself thinking that a pixel is a little square, please read this paper.<p>> A pixel is a point sample. It exists only at a point. For a color picture, a pixel might actually contain three samples, one for each primary color contributing to the picture at the sampling point. We can still think of this as a point sample of a color. But we cannot think of a pixel as a square—or anything other than a point.<p>Alvy Ray Smith, 1995
<a href="http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf" rel="nofollow">http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf</a></p>
]]></description><pubDate>Wed, 23 Apr 2025 09:09:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43770049</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=43770049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43770049</guid></item><item><title><![CDATA[A Pixel Is Not a Little Square (1995) [pdf]]]></title><description><![CDATA[
<p>Article URL: <a href="http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf">http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43769959">https://news.ycombinator.com/item?id=43769959</a></p>
<p>Points: 30</p>
<p># Comments: 29</p>
]]></description><pubDate>Wed, 23 Apr 2025 08:52:32 +0000</pubDate><link>http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=43769959</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43769959</guid></item><item><title><![CDATA[New comment by justin_ in "Magic/tragic email links: don't make them the only option"]]></title><description><![CDATA[
<p>Related thread from September 2024:<p><pre><code>    The "email is authentication" pattern
    https://news.ycombinator.com/item?id=41475218
</code></pre>
Some users rely on email flows, such as "magic links", instead of bothering with passwords at all.</p>
]]></description><pubDate>Wed, 08 Jan 2025 07:11:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=42631837</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=42631837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42631837</guid></item><item><title><![CDATA[New comment by justin_ in "I thought I found a bug"]]></title><description><![CDATA[
<p>I believe it is a bug in the emulator's implementation of COMMAND.COM. Often, these DOS "emulators" re-implement the standard commands of DOS, including the shell[1]. This is in addition to emulating weird 16-bit environment stuff and the BIOS.<p>The bug can pop up in any C program using stdio that assumes it's fine to do `fread` followed immediately by `fwrite`. The spec forbids this. To make matters more confusing, this behavior does _not_ seem to occur in modern libc implementations. Or at least, it works on my machine. I bet modern implementations are able to be more sane about managing different buffers for reading and writing.<p>The original COMMAND.COM from MS-DOS probably did not have this problem, since at least in some versions it was written in assembly[2]. Even for a shell written in C, the fix is pretty easy: seek the file before switching between reading/writing.<p>The title of this post is confusing, since it clearly _is_ a bug somewhere. But I think the author was excited about possibly finding a bug in libc:<p>> Sitting down with a debugger, I could just see how the C run-time library (Open Watcom) could be fixed to avoid this problem.<p>[1] Here's DOSBox, for example: <a href="https://github.com/dosbox-staging/dosbox-staging/blob/main/src/shell/shell_cmds.cpp">https://github.com/dosbox-staging/dosbox-staging/blob/main/s...</a><p>[2] MS-DOS 4.0: <a href="https://github.com/microsoft/MS-DOS/tree/main/v4.0/src/CMD/COMMAND">https://github.com/microsoft/MS-DOS/tree/main/v4.0/src/CMD/C...</a></p>
]]></description><pubDate>Thu, 26 Dec 2024 07:03:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42513593</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=42513593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42513593</guid></item><item><title><![CDATA[New comment by justin_ in "25 Years of Dillo"]]></title><description><![CDATA[
<p>For those curious about Dillo, you can try it right now in your browser. On the JSLinux site, the graphical VMs come with Dillo 3.0.5:<p><a href="https://bellard.org/jslinux/vm.html?url=alpine-x86-xwin.cfg&mem=256&graphic=1" rel="nofollow">https://bellard.org/jslinux/vm.html?url=alpine-x86-xwin.cfg&...</a><p>(Warning: this will download 30+ MB)<p>After it starts up, right click, and then choose Browser - Dillo. There's no HTTPS support, but Google and httpbin.org work at least.</p>
]]></description><pubDate>Mon, 16 Dec 2024 06:58:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=42428637</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=42428637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42428637</guid></item><item><title><![CDATA[New comment by justin_ in "UTC, Tai, and Unix Time (2001)"]]></title><description><![CDATA[
<p>> UNIX time counts the number of seconds since an ``epoch.'' This is very convenient for programs that work with time intervals: the difference between two UNIX time values is a real-time difference measured in seconds, within the accuracy of the local clock. Thousands of programmers rely on this fact.<p>Contrary to this, since at least 2008[0], the POSIX standard (which is just paper, not necessarily how real systems worked at the time) has said that "every day shall be accounted for by exactly 86400 seconds." That means that in modern systems using NTP, your Unix timestamps will be off from the expected number of TAI seconds. And yes, it means that a Unix timestamp _can repeat_ on a leap second day.<p>There's really no perfect way of doing things though. Should Unix time - an integer - represent the number of physical seconds since some epoch moment, or a packed encoding of a "date time" that can be quickly mapped to a calendar day? "The answer is obvious" say both sides simultaneously :^)<p>EDIT: I know DJB is calling out POSIX's choices in this article, but it seems like his "definition" does diverge from what the count actually meant to a lot of people.<p>[0] Also: "The relationship between the actual time of day and the current value for seconds since the Epoch is unspecified." <a href="https://pubs.opengroup.org/onlinepubs/9699919799.2008edition/basedefs/V1_chap04.html" rel="nofollow">https://pubs.opengroup.org/onlinepubs/9699919799.2008edition...</a></p>
]]></description><pubDate>Thu, 09 May 2024 10:23:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=40306768</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=40306768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40306768</guid></item><item><title><![CDATA[New comment by justin_ in "The state of merging technology"]]></title><description><![CDATA[
<p>Linus himself has credited Monotone with the content-addressing by SHA1:
<a href="https://marc.info/?l=git&m=114685143200012" rel="nofollow noreferrer">https://marc.info/?l=git&m=114685143200012</a><p>I think the main issue with Monotone was the performance. Linus also hates databases and C++.<p>--<p>Hoare didn't come up with this idea either, but he did apply it to version control. He may have been influenced by his earlier work on distributed file systems and object systems. Here's his 1999 project making use of hashes: <a href="https://web.archive.org/web/20010420023937/venge.net/graydon/SFS.html" rel="nofollow noreferrer">https://web.archive.org/web/20010420023937/venge.net/graydon...</a><p>He was in contact with Ian Clarke of Freenet fame (also 1999). There seems to have been a rise in distributed and encrypted communications around the time, as kragen mentions in his other post.<p>BitTorrent would also come to use hashes for identifying torrents in a tracker, and would come out in 2001, created by Bram Cohen, the author of the post here :)</p>
]]></description><pubDate>Thu, 14 Dec 2023 06:48:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=38638539</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=38638539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38638539</guid></item><item><title><![CDATA[New comment by justin_ in "Not setting up Find My bricked my MacBook"]]></title><description><![CDATA[
<p>This isn’t about signing the OS updates. This is about the device contacting Apple’s activation servers during initial setup.<p>In the linked thread, the users have a “working” device with software already installed, but the activation fails with a cryptic “Activation Error” without any reference to the fact that they must update.<p>Typically, Apple continues to allow activation on older OSes, as long as they were installed previously. I’ve intentionally kept some devices on older versions, and personally have never had a problem activating after a reset. But I suppose that’s only possible because Apple allowed it.<p>EDIT: I was wrong about the "Activation Error" not mentioning a software upgrade. It does mention it. But my point about Apple controlling access to the software still stands. It's something weird that Apple did specifically for iOS 9 and the iPhone 6s. Naturally, who really cares about one random version combination? I don't in this particular case, but the fact that Apple can control it is weird to me.<p>Even more concerning is that some users report that the device locked them out _even during normal use of the phone_: <a href="https://www.reddit.com/r/iphone/comments/acmytg/iphone_6s_plus_verizon_activation_error/" rel="nofollow noreferrer">https://www.reddit.com/r/iphone/comments/acmytg/iphone_6s_pl...</a></p>
]]></description><pubDate>Fri, 13 Oct 2023 16:37:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=37872485</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=37872485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37872485</guid></item><item><title><![CDATA[New comment by justin_ in "Not setting up Find My bricked my MacBook"]]></title><description><![CDATA[
<p>Activation Lock is a weird feature. Users love it when it helps prevent device theft. But really, at the end of the day, you're giving up control of the device to Apple.<p>Apple has already stopped supporting activation of certain versions of iOS[0]. Also, just as a user can request locking down a phone at any time, Apple could technically lock down your device when they see fit. They're not going to do that, but the fact that it's possible is a bit freaky.<p>Someday, far in the future, their activation servers will go down, and no unactivated Apple device will be usable. Already, it is not possible to set up a new device without an Internet connection. Hopefully the jailbreak community figures something out by then, or maybe Apple would release a tool...<p>[0] <a href="https://forums.macrumors.com/threads/apple-no-longer-activating-iphones-on-ios-9.2113888/" rel="nofollow noreferrer">https://forums.macrumors.com/threads/apple-no-longer-activat...</a></p>
]]></description><pubDate>Fri, 13 Oct 2023 05:46:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=37867130</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=37867130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37867130</guid></item><item><title><![CDATA[New comment by justin_ in "A 286 running like a 386? (2021)"]]></title><description><![CDATA[
<p>AIUI, another big problem with the 80286 was that it did not support returning to real mode after switching to protected mode. This made backward compatibility a huge issue, particularly for Microsoft. The 80386, besides marking the switch to 32-bit, added virtual 8086 mode, which allowed for emulating real mode after already having switched to protected mode.<p><a href="https://web.archive.org/web/20021003235610/http://osdev.berlios.de/v86.html" rel="nofollow noreferrer">https://web.archive.org/web/20021003235610/http://osdev.berl...</a><p>This "virtual 8086" mode is what was used for the VMM kernel of Windows 9x, and later the NTVDM system on 32-bit versions of Windows NT. I remember being able to run some DOS software on Windows XP (but it wasn't perfect).</p>
]]></description><pubDate>Mon, 10 Jul 2023 11:35:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=36664927</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=36664927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36664927</guid></item><item><title><![CDATA[New comment by justin_ in "How to think about async/await in Rust"]]></title><description><![CDATA[
<p>I was referring to the fact that interaction with the disk itself is asynchronous. Indeed, the interface provided by a kernel for files is synchronous, and for most cases, that's what programmers probably want.<p>But I also think the interest in things like io_uring in Linux reflects that people are open to asynchronous file IO, since the kernel is doing asynchronous work internally. To be honest, I don't know much about io_uring though - I haven't used it for anything serious.<p>There's no perfect choice (as always). After all, for extremely high-performance scenarios, people avoid the async nature of IO entirely, and dedicate a thread to busy-looping and polling for readiness. That's what DPDK does for networking. And I think io_uring and other Linux disk interfaces have options to use polling internally.</p>
]]></description><pubDate>Wed, 05 Jul 2023 09:19:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=36597814</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=36597814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36597814</guid></item><item><title><![CDATA[New comment by justin_ in "How to think about async/await in Rust"]]></title><description><![CDATA[
<p>Asynchronous programming is a great fit for IO-driven programs, because modern IO is inherently asynchronous. This is clearly true for networking, but even for disk IO, generally commands are sent to the disks and results come back later. Another thing that’s asynchronous is user input, and that’s why JS has it.<p>As for threading vs. explicit yielding (e.g. coroutines), I’d say it’s a matter of taste. I generally prefer to see where code is going to yield. Something like gevent can make control flow confusing, since it’s unclear what will yield, and you need to implement explicit yielding for CPU-bound tasks anyway. Its green threads are based on greenlet, which provides cooperative coroutines.<p>Cooperative multitasking was a big problem in operating systems, where you can’t tell whether other processes are looking for CPU time or not. But within your own code, you can control it however you want!</p>
]]></description><pubDate>Wed, 05 Jul 2023 07:36:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=36596947</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=36596947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36596947</guid></item><item><title><![CDATA[New comment by justin_ in "Are You Sure You Want to Use MMAP in Your Database Management System? (2022)"]]></title><description><![CDATA[
<p>This is the kind of debate that has been going on surrounding virtual memory forever[0][1]. If you can keep everything in memory, then you're golden. But eventually you won't be able to, and you'll need to rely on secondary storage.<p>Is there a performance benefit to be had by managing the memory and paging yourself? Yes. But eventually you will also consider running processes next to your database, for logging, auditing, ingesting data, running backups, etc. Virtual memory across the whole system helps with that, especially if other people will be using your database in ways you can't predict. As for the efficiency of MMUs and the OS, it seems like for almost all cases it's "satisfactory" enough[1].<p>[0] <a href="http://denninginstitute.com/pjd/PUBS/bvm.pdf" rel="nofollow noreferrer">http://denninginstitute.com/pjd/PUBS/bvm.pdf</a><p>[1] From 1969! <a href="https://dl.acm.org/doi/pdf/10.1145/363626.363629" rel="nofollow noreferrer">https://dl.acm.org/doi/pdf/10.1145/363626.363629</a></p>
]]></description><pubDate>Mon, 03 Jul 2023 04:25:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=36568826</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=36568826</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36568826</guid></item><item><title><![CDATA[New comment by justin_ in "Why is the volume of a cone one third of the volume of a cylinder? (2010)"]]></title><description><![CDATA[
<p>This is an excellent explanation. If we imagine the triangle being swept about in a circle, the points in the "outer" section (forming the circle at the edge of the base of our cone) will cover more distance than the points in the "inner" section (the points near the center line of the cone that we're revolving around). The distance moved by the centroid, then, is the average distance a point travels.</p>
]]></description><pubDate>Sun, 02 Jul 2023 16:01:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=36562728</link><dc:creator>justin_</dc:creator><comments>https://news.ycombinator.com/item?id=36562728</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36562728</guid></item></channel></rss>