<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: kruador</title><link>https://news.ycombinator.com/user?id=kruador</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 04:11:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=kruador" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by kruador in "Treasures found on HS2 route"]]></title><description><![CDATA[
<p>To be clear, it is <i>currently</i> really easy to find because major earthworks are being done, and that requires space to move in the equipment, along with new roads to reach points that were previously inaccessible, this being the middle of nowhere.<p>To see what it will look like <i>afterwards</i>, try to find High Speed 1, aka the Channel Tunnel Rail Link, now that it's had nearly 20 years for the landscaping to settle and vegetation to grow back. If you don't know what you're looking for, you won't see it.</p>
]]></description><pubDate>Tue, 03 Feb 2026 10:22:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46869118</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=46869118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46869118</guid></item><item><title><![CDATA[New comment by kruador in "Treasures found on HS2 route"]]></title><description><![CDATA[
<p>The international platforms are not deleted! They were brought back into use from 2018-2019 to serve the Windsor Lines, which includes the service to Reading - platforms 20-24. That somewhat reduces the congestion at Waterloo; the station throat limits adding more services.<p>The extension to Euston was supposed to have 11 platforms. Even the reduced scope now being implemented is 6 platforms, I believe. All 11 were required to handle the eastern leg of HS2 [providing bypass capacity for the East Coast Main Line out of King's Cross and the Midland Main Line out of St Pancras], and services to Scotland and Manchester [bypassing the West Coast Main Line from Euston's classic platforms].</p>
]]></description><pubDate>Tue, 03 Feb 2026 10:05:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46868970</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=46868970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46868970</guid></item><item><title><![CDATA[New comment by kruador in "Transfering Files with gRPC"]]></title><description><![CDATA[
<p>I would add a further advantage of plain HTTP (REST) compared to gRPC. Splitting the response into blocks and having the client request the next block, as in the gRPC solution, causes round-trip delays. The server can't send the second block of data until the client requests it, so the server is essentially idle until the client has received all packets of the first block, parsed them and generated the next request.<p>In contrast, while HTTP/2 does impose framing of streams, <i>that framing is done entirely server-side</i>. If all one end has to send to the other is a single stream, it'll be DATA frame after DATA frame for the same stream. The client is not required to acknowledge anything. (At least, nothing above the TCP layer!)<p>It probably wasn't noticeable in this experiment as, if I'm reading it correctly, the server and client were on the same box, but if you were separated by any significant distance, plain HTTP should be noticeably faster.</p>
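To put rough numbers on the round-trip cost, here is a toy latency model (illustrative figures only, not a benchmark):

```python
# Toy model: compare request-per-block transfers (the gRPC approach described
# above) with a single streamed response. All numbers are made up for illustration.
def blockwise_time(file_mb, block_mb, rtt_s, bandwidth_mbps):
    """Client requests each block and waits for it before asking for the next."""
    blocks = -(-file_mb // block_mb)  # ceiling division
    return blocks * (rtt_s + block_mb * 8 / bandwidth_mbps)

def streamed_time(file_mb, rtt_s, bandwidth_mbps):
    """One request; the server sends DATA frames back-to-back."""
    return rtt_s + file_mb * 8 / bandwidth_mbps

# 100 MB file, 1 MB blocks, 30 ms round trip, 100 Mbit/s link.
print(blockwise_time(100, 1, 0.03, 100))  # ~11 s
print(streamed_time(100, 0.03, 100))      # ~8 s
```

With those (made-up) figures, a 30 ms round trip per block adds around three seconds to the transfer; on the same box the RTT is near zero, which is why the experiment wouldn't show it.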
]]></description><pubDate>Mon, 26 Jan 2026 16:43:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46767870</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=46767870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46767870</guid></item><item><title><![CDATA[New comment by kruador in "Uninitialized garbage on ia64 can be deadly (2004)"]]></title><description><![CDATA[
<p>The 8086 was a stop-gap solution until iAPX432 was ready.<p>The 80286 was a stop-gap solution until iAPX432 was ready.<p>The 80386 started as a stop-gap solution until iAPX432 was ready, until someone higher up finally decided to kill that one.</p>
]]></description><pubDate>Mon, 08 Dec 2025 09:56:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46190446</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=46190446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46190446</guid></item><item><title><![CDATA[New comment by kruador in "Why xor eax, eax?"]]></title><description><![CDATA[
<p>ARM64 also has fixed-length 32-bit instructions. Yes, immediates are normally small, and how many bits are available is not particularly orthogonal across instructions.<p>The largest MOV immediate is 16 bits, but those 16 bits can be shifted by 0, 16, 32 or 48 bits, so the worst case for a 64-bit immediate is 4 instructions. Or the compiler can decide to put the data in a PC-relative literal pool and use ADR or ADRP to calculate its address.<p>ADD immediate is 12 bits but can optionally apply a 12-bit left-shift to that immediate, so immediates up to 24 bits can be done in two instructions.<p>ARM64 decoding is also pretty complex, <i>far</i> less orthogonal than ARM32. Then again, ARM32 was designed to be decodable on a chip with 25,000 transistors, not one where you can spend thousands of transistors decoding a single instruction.</p>
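The MOVZ/MOVK chunking is mechanical enough to sketch. This is not an assembler - real compilers also use MOVN for mostly-ones values and ORR bitmask immediates - just the basic 16-bits-per-instruction split:

```python
# Sketch: split a 64-bit immediate into the MOVZ/MOVK sequence ARM64 needs -
# one 16-bit chunk per instruction, shifted by 0/16/32/48. Zero chunks are
# skipped; the worst case is 4 instructions. (Real assemblers also consider
# MOVN and bitmask-immediate ORR, which this ignores.)
def mov_sequence(imm):
    seq = []
    for shift in (0, 16, 32, 48):
        chunk = (imm >> shift) & 0xFFFF
        if chunk == 0:
            continue  # a MOVK of zero would be a no-op
        op = "MOVZ" if not seq else "MOVK"  # first instruction zeroes the rest
        seq.append(f"{op} X0, #0x{chunk:04X}, LSL #{shift}")
    return seq or ["MOVZ X0, #0x0000, LSL #0"]

print(mov_sequence(0x1234))              # one instruction
print(mov_sequence(0xDEADBEEFCAFEF00D))  # worst case: four instructions
```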
]]></description><pubDate>Mon, 01 Dec 2025 17:51:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46110512</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=46110512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46110512</guid></item><item><title><![CDATA[New comment by kruador in "Why xor eax, eax?"]]></title><description><![CDATA[
<p>ARM64 assembly has a MOV instruction, but for most of the ways it's used, it's an assembler alias for something else. For example, MOV between two registers actually generates ORR Xd, XZR, Xm, i.e. Xd := (zero register) OR Xm. A MOV with an immediate becomes MOVZ or, for patterns expressible as a logical bitmask immediate, ORR Xd, XZR, #imm.<p>If setting or copying the stack pointer, the underlying instruction is instead ADD SP, Xn, #0, i.e. SP := Xn + 0. This is because the stack pointer and zero register are both encoded as register 31 (11111): some instructions interpret register 31 as the zero register, others as the stack pointer. ORR uses the zero register and ADD the stack pointer.<p>NOP maps to HINT #0. There are 128 HINT values available; any not implemented on a given processor execute as NOPs.<p>Other operations are aliased too: CMP Xm, Xn is really SUBS XZR, Xm, Xn - subtract Xn from Xm, store the result in the zero register [i.e. discard it], and set the flags. RISC-V doesn't have flags, of course; ARM Ltd clearly considered them still useful.<p>There are other oddities, like 'rotate right' being encoded as 'extract register from pair of registers' with the same source register specified twice.<p>Disassemblers do their best to hide this from you. ARM lists a 'preferred decoding' for any instruction that has aliases, to map back to a more meaningful alias wherever possible.</p>
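To make the register-MOV alias concrete, here is the shifted-register ORR encoding sketched in Python. The field layout is from memory (check the Arm ARM before relying on it), and only plain 64-bit register numbers are handled - no validation, no shifts:

```python
# Sketch: encode "MOV Xd, Xm" the way the assembler does - as
# ORR Xd, XZR, Xm (shifted-register form, shift amount 0).
def mov_reg(rd, rm):
    sf = 1            # 64-bit operation
    base = 0b0101010  # opc=01 (ORR) plus the shifted-register class bits
    rn = 31           # register 31 read as XZR here, not SP
    # sf | opc/class (bits 30-24) | shift=0 | N=0 | Rm | imm6=0 | Rn | Rd
    return (sf << 31) | (base << 24) | (rm << 16) | (rn << 5) | rd

print(hex(mov_reg(0, 1)))  # encoding of MOV X0, X1
```

Feeding the result (0xAA0103E0) to a disassembler gives back `mov x0, x1`, the 'preferred decoding' rather than the raw ORR.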
]]></description><pubDate>Mon, 01 Dec 2025 17:27:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46110174</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=46110174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46110174</guid></item><item><title><![CDATA[New comment by kruador in "Launch HN: Flywheel (YC S25) – Waymo for Excavators"]]></title><description><![CDATA[
<p>Toyota's hybrids, at least, have valves in the hydraulic system. If everything is working, the driver's pedal <i>is</i> isolated from the physical pistons. Pressing the pedal instead moves a 'stroke simulator' (a cylinder with a spring in it), and the pressure is measured with a transducer. The brake ECU tries to satisfy as much of the braking demand as possible through regenerative braking, applying the rear brakes to keep the car balanced, and the front brakes if you request more braking than regeneration can provide or the battery can absorb.<p>If there's a failure of the electrical supply to the brake ECU, or another fault condition occurs, various valves then revert to their normally-open or normally-closed positions to allow hydraulic pressure from the pedal through to the brake cylinders, and isolate the stroke simulator.<p>Because the engine isn't constantly running and providing a vacuum that can be used to assist with brake force, the system also includes a 'brake accumulator' and pump to boost the brake pressure.<p>Reference: <a href="https://pmmonline.co.uk/technical/blue-prints-insight-into-the-toyota-brake-system/" rel="nofollow">https://pmmonline.co.uk/technical/blue-prints-insight-into-t...</a><p>I don't know for certain, but I would assume that other hybrids and EVs have similar systems to maximise regenerative braking.</p>
]]></description><pubDate>Thu, 25 Sep 2025 09:04:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45370707</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=45370707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45370707</guid></item><item><title><![CDATA[New comment by kruador in "My new Git utility `what-changed-twice` needs a new name"]]></title><description><![CDATA[
<p>It's a pain in the backside to run on Windows, for two reasons. Firstly, Windows doesn't have (by default) a lot of the tools that are preinstalled in most *nix environments. Git for Windows ships half a Cygwin distribution (MSYS2) including Bash, Perl, and Tcl.<p>Second, Windows doesn't really have a 'fork' API. Creating a new process on Windows is a heavyweight operation compared to *nix. As such, scripts that repeatedly invoke other commands are sluggish. Converting them to C and calling plumbing commands in-process has a radical effect on performance.<p>Git for Windows is more of a maintained fork than a real first-class platform.<p>Also, I believe it's a goal to make it possible to use Git as a library rather than as an executable. That's hard to do if half the logic is in a random scripting language. Library implementations exist - notably libgit2 - but they can never be fully up to date with the original. Search for 'git libification'.<p>Many IDEs started their Git integration with libgit2, but subsequently fell foul of things that libgit2 can't do or does inconsistently. Therefore they fall back on executing `git` with some fixed-format output.</p>
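The per-process overhead is easy to measure for yourself. A rough sketch (results vary enormously by OS, filesystem, and antivirus - the point is the relative cost, not the number):

```python
# Rough illustration of process-creation cost: time spawning a trivial child
# process repeatedly, the price a shell-script-heavy tool pays for every
# external command it runs.
import subprocess
import sys
import time

def spawn_cost(n=20):
    """Average wall-clock seconds to spawn and reap one do-nothing child."""
    start = time.perf_counter()
    for _ in range(n):
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    return (time.perf_counter() - start) / n

print(f"{spawn_cost() * 1000:.1f} ms per process")
```

Run the same snippet on Linux and on Windows and the gap is typically a large multiple, which is exactly what a Bash-based subcommand feels like in practice.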
]]></description><pubDate>Tue, 23 Sep 2025 11:19:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45345454</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=45345454</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45345454</guid></item><item><title><![CDATA[New comment by kruador in "Betty Crocker broke recipes by shrinking boxes"]]></title><description><![CDATA[
<p>The text that the footnote is attached to is:<p>"Large Language Models can gall on an aesthetic level because they are IMPish slurries of thought itself, every word ever written dried into weights and vectors and lubricated with the margarine of RLHF." I infer 'IMPish' as meaning 'like Instant Mashed Potato'.<p>I read that footnote as a somewhat oblique criticism of two LLMs, rather than on the statistic itself - which may indeed have just been fabricated by the LLM as opposed to an actual statistic somehow dredged from its training data, or pulled from a web search.</p>
]]></description><pubDate>Mon, 15 Sep 2025 11:57:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45248626</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=45248626</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45248626</guid></item><item><title><![CDATA[New comment by kruador in "Betty Crocker broke recipes by shrinking boxes"]]></title><description><![CDATA[
<p>Strictly, UK teaspoons are 5 ml and tablespoons 15 ml. The metric tablespoons already used in Europe were probably close enough to half an Imperial fluid ounce for it not to matter for most purposes.<p>My kids' baby bottles were labelled with measurements in metric (30 ml increments) and in both US and Imperial fluid ounces. The cans of formula were supplied with scoops for measuring the powder, which were also somewhere close to 2 tablespoons/one fluid ounce (use one scoop per 30 ml of water). There are dire warnings about not varying the concentration from the recommended amount, but I assume that it's not really that precise within 1-2% - more about not varying by 10-20%. My kids seem to have survived, anyway.</p>
]]></description><pubDate>Mon, 15 Sep 2025 11:41:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45248527</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=45248527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45248527</guid></item><item><title><![CDATA[New comment by kruador in "Betty Crocker broke recipes by shrinking boxes"]]></title><description><![CDATA[
<p>Blame the European regulators who decided that it was no longer necessary to have standard pack sizes.<p>Pack sizes were regulated in 1975 for volume measures (wine, beer, spirits, vinegar, oils, milk, water, and fruit juice) and in 1980 for weights (butter, cheese, salt, sugar, cereals [flour, pasta, rice, prepared cereals], dried fruits and vegetables, coffee, and a number of other things). In 2007, all of that was repealed - and member states were now <i>forbidden</i> from regulating pack sizes!<p>I think the rationale was that, since the unit price (price per unit of measurement) was now mandatory to display, consumers would still know which of two different packs on the same shelf was better value. But standard pack sizes don't <i>just</i> enable value-for-money comparisons, as this article shows.</p>
]]></description><pubDate>Mon, 15 Sep 2025 11:02:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45248286</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=45248286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45248286</guid></item><item><title><![CDATA[New comment by kruador in "Let's Learn x86-64 Assembly (2020)"]]></title><description><![CDATA[
<p>It wasn't possible on the 386. Ken Shirriff discusses how the Intel 80386's register file was built at <a href="https://www.righto.com/2025/05/intel-386-register-circuitry.html?m=0" rel="nofollow">https://www.righto.com/2025/05/intel-386-register-circuitry....</a>. Only four of the registers are built to allow 32-, 16- or 8-bit writes. Reads output the entire register onto the bus and the ALU does the appropriate masking. The twist is the legacy high-byte registers (AH, BH, CH, DH) - the upper halves of the 16-bit registers, themselves really a legacy of the 8080 and the requirement to be able to directly translate 8080 code opcode-for-opcode. Their output has to be shifted down 8 bits to be in the right place for the ALU, and those bits then have to be selected.<p>AMD seem to have decided to regularise the instruction set for 64-bit long mode, making all the registers consistently able to operate as 64-bit, 32-bit, 16-bit, and 8-bit, using the lowest bits of each register. This only occurs if using a REX prefix, usually to select one of the 8 additional architectural registers added for 64-bit mode. To achieve this, the bits that are used to select the 'high' part of the legacy 8086 registers in 32- or 16-bit code (and when not using the REX prefix) are used instead to select the lowest 8 bits of the index and pointer registers.<p>From the "Intel 64 and IA-32 Architectures Software Developer's Manual":<p>"In 64-bit mode, there are limitations on accessing byte registers. An instruction cannot reference legacy high-bytes (for example: AH, BH, CH, DH) and one of the new byte registers at the same time (for example: the low byte of the RAX register). However, instructions may reference legacy low-bytes (for example: AL, BL, CL, or DL) and new byte registers at the same time (for example: the low byte of the R8 register, or RBP). 
The architecture enforces this limitation by changing high-byte references (AH, BH, CH, DH) to low byte references (BPL, SPL, DIL, SIL: the low 8 bits for RBP, RSP, RDI, and RSI) for instructions using a REX prefix."<p>In 64-bit code there is very little reason at all to be using bits 15:8 of a longer register.<p>This possibly puts another spin on Intel's desire to remove legacy 16- and 32-bit support (termed 'X86S'). It would remove the need to support AH, BH, CH and DH - and therefore some of the complex wiring from the register file to support the shifting. If that's what it currently does.<p>Actually, looking at Agner Fog's optimisation tables (<a href="https://www.agner.org/optimize/instruction_tables.pdf" rel="nofollow">https://www.agner.org/optimize/instruction_tables.pdf</a>) it appears there is significant extra latency in using AH/BH/CH/DH, which suggests to me that the processor <i>actually</i> implements shifting into and out of the high byte using extra micro-ops.</p>
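The rule quoted from the manual is mechanical enough to sketch as a toy decoder. Only the 3-bit reg field is modelled here - REX.R/B extensions selecting R8B-R15B are deliberately ignored:

```python
# Sketch of the byte-register selection rule quoted above: the same 3-bit
# register field means different byte registers depending on whether ANY
# REX prefix is present. (REX.R/B extensions to R8B-R15B are not modelled.)
def byte_reg(field, rex_present):
    no_rex   = ["AL", "CL", "DL", "BL", "AH",  "CH",  "DH",  "BH"]
    with_rex = ["AL", "CL", "DL", "BL", "SPL", "BPL", "SIL", "DIL"]
    return (with_rex if rex_present else no_rex)[field]

print(byte_reg(4, False))  # AH in a legacy encoding...
print(byte_reg(4, True))   # ...but SPL once a REX prefix is present
```

This is why AH and R8B can't appear in the same instruction: the encoding that would mean AH has been reassigned the moment the REX prefix shows up.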
]]></description><pubDate>Mon, 14 Jul 2025 11:04:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44558677</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=44558677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44558677</guid></item><item><title><![CDATA[New comment by kruador in "Donkey Kong Country 2 and Open Bus"]]></title><description><![CDATA[
<p>No, SDRAM means Synchronous DRAM, where the data is clocked out of the DRAM chips instead of just appearing on the bus some time after the Column Address Strobe is asserted. Clocking it means that the data doesn't appear before the CPU (or other bus master) is ready to receive it, and that it doesn't disappear before the CPU has read it.<p>Static RAM (SRAM) is a circuit that retains its data as long as the power is supplied to it. Dynamic RAM (DRAM) must be refreshed frequently. It's basically a large array of tiny capacitors which leak their stored charge through imperfect transistor switches, so a charged capacitor must be regularly recharged. You would think that you would need to read the bit and rewrite its value in a second cycle, but it turns out that reading the value is itself a destructive operation and requires the chip to internally recharge the capacitors.<p>Further, the chip is organised in rows and columns - generally there are the same number of Sense Amplifiers as columns, with a whole row of cells discharging into their corresponding Sense Amplifiers on each read cycle, the Sense Amplifiers then being used to recharge that row of cells. The column signals select which Sense Amplifier is connected to the output. So you don't need to read every row and column of a chip, just some column on every row. The Sense Amplifier is a circuit that takes the very tiny charge from the cell transistor and brings it up to a stable signal voltage for the output.<p>So why use DRAM at all if it has this need to be constantly refreshed? Because the Static RAM circuit requires 4-6 transistors per cell, while DRAM only requires 1. You get close to 4-6 times as much storage from the same number of transistors.</p>
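The destructive-read-plus-rewrite cycle is easy to model. A toy sketch (illustrative only - real DRAM timing and charge behaviour are far more involved):

```python
# Toy DRAM model: cells are leaky capacitors, reading a row is destructive,
# and the sense amplifiers latch the whole row and rewrite it - which is why
# touching one column of every row refreshes the entire chip.
ROWS, COLS = 4, 8
charge = [[1.0] * COLS for _ in range(ROWS)]  # all cells storing '1'

def leak():
    """Charge slowly drains away through imperfect transistor switches."""
    for row in charge:
        for c in range(COLS):
            row[c] *= 0.9

def read(r, c):
    """Read one bit - the whole row is sensed and recharged as a side effect."""
    bits = [v > 0.5 for v in charge[r]]            # sense amplifiers latch the row
    charge[r] = [1.0 if b else 0.0 for b in bits]  # ...and rewrite/recharge it
    return bits[c]

for _ in range(5):
    leak()
    for r in range(ROWS):
        read(r, 0)  # one column per row is enough to refresh everything

print(min(min(row) for row in charge))  # cells are back at full charge: 1.0
```

Skip the refresh loop for long enough and cells drift below the sense threshold, at which point the data is simply gone.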
]]></description><pubDate>Tue, 01 Jul 2025 12:21:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44433160</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=44433160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44433160</guid></item><item><title><![CDATA[New comment by kruador in "Donkey Kong Country 2 and Open Bus"]]></title><description><![CDATA[
<p>The Sinclair ZX80 and ZX81 have static RAM internally, which you wouldn't expect for a computer that a) was designed to be as cheap as possible and b) uses a Zilog Z80, <i>which has built-in refresh circuitry</i>.<p>The reason is that the designers saved a few chips by repurposing the Z80's refresh circuit as a counter/address generator when generating the video signal. Specifically, it uses the instruction fetch cycle to read the character code from RAM, then it uses the refresh cycle to read the actual line of character data from the ROM. The ZX80 nominally clocks the Z80 at 3.25MHz, but a machine cycle is four clocks (two for fetch, two for refresh), so it's effectively the same speed as a 0.8125 MHz 6502.<p>I wrote a long section here about how the ZX80 uses the CPU to generate the screen and the extra logic that involves, but it was getting too long :) The ZX81 is basically just a cost-reduced ZX80 where all the discrete logic chips are moved into one semi-custom chip.<p>Doing this makes external RAM packs more expensive too. You couldn't use the real refresh address coming from the Z80 because the video generator would be hopping around a small range of addresses in the ROM, rather than covering the whole of RAM (or at least each row of the DRAM). The designer has two options:<p>1. Use static RAM in the external RAM pack, making it substantially more expensive for the RAM itself;
2. Use DRAM in the external RAM pack, and add extra refresh circuitry to refresh the DRAM when the main computer is using the refresh cycle doing its video madness.<p>I think most RAM packs did the second option.</p>
]]></description><pubDate>Tue, 01 Jul 2025 12:08:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44433055</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=44433055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44433055</guid></item><item><title><![CDATA[New comment by kruador in "A leap year check in three instructions"]]></title><description><![CDATA[
<p>Most 8-bit CPUs didn't even <i>have</i> a hardware multiply instruction. To multiply on a 6502, for example, or a Z80, you have to add repeatedly. You can multiply by a power of 2 by shifting left, so you can build up an arbitrary multiplication by interleaving shifts with adds or subtracts. Although, again, on these earlier CPUs you can only shift by one bit at a time, rather than by a variable number of bits.<p>There's also the difference between multiplying by a hard-coded value, which can be implemented with shifts and adds, and multiplying two variables, which has to be done with an algorithm.<p>The 8086 did have multiply instructions, but they were implemented as a loop in the microcode, adding the multiplicand, or not, once for each bit in the multiplier. More at <a href="https://www.righto.com/2023/03/8086-multiplication-microcode.html" rel="nofollow">https://www.righto.com/2023/03/8086-multiplication-microcode...</a>. Multiplying by a fixed value using shifts and adds could be faster.<p>The prototype ARM1 did not have a multiply instruction. The architecture <i>does</i> have a barrel shifter which can shift one of the operands by any number of bits. For a fixed multiplier, multiplying by a power of two, a power of two plus 1, or a power of two minus 1 takes a single instruction. The latter is why ARM has both a SUB (subtract) instruction, computing rd := rs1 - Operand2, and a RSB (Reverse SuBtract) instruction, computing rd := Operand2 - rs1. The second operand goes through the barrel shifter, allowing you to write an instruction like 'RSB R0, R1, R1, LSL #4' meaning 'R0 := (R1 << 4) - R1', or in other words '(R1 * 16) - R1', or R1 * 15.<p>ARMv2 added MUL and MLA (MuLtiply, and MuLtiply with Accumulate) instructions. The hardware ARM2 implementation uses a Booth's encoder to multiply 2 bits at a time, taking up to 16 cycles for 32 bits. 
It can exit early if the remaining bits are all 0s.<p>Later ARM cores implemented an optional wider multiplier (that's the 'M' in 'ARM7TDMI', for example) that could handle more bits at a time and therefore execute in fewer cycles. I believe the ARM7TDMI's multiplier processed 8 bits per cycle, completing in up to 4 cycles (again, with early exit). Modern ARM cores can do 64-bit multiplies in a single cycle.</p>
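As a sketch of the two techniques above (Python standing in for what the microcode or hand-written assembly does):

```python
# Shift-and-add multiplication, the way a CPU without a multiplier (or the
# 8086's microcode loop) does it: examine the multiplier one bit at a time,
# adding the shifted multiplicand whenever the bit is set, with early exit
# once no multiplier bits remain.
def shift_add_mul(multiplicand, multiplier):
    result = 0
    while multiplier:          # early exit when remaining bits are all 0s
        if multiplier & 1:
            result += multiplicand
        multiplicand <<= 1     # shift left = multiply by 2
        multiplier >>= 1
    return result

# The fixed-multiplier trick from the text: x * 15 as (x << 4) - x,
# which ARM spells RSB R0, R1, R1, LSL #4 in a single instruction.
def times15(x):
    return (x << 4) - x

print(shift_add_mul(13, 11))  # 143
print(times15(7))             # 105
```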
]]></description><pubDate>Fri, 16 May 2025 11:52:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44004271</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=44004271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44004271</guid></item><item><title><![CDATA[New comment by kruador in "20 years of Git"]]></title><description><![CDATA[
<p>This is, in some ways, reintroducing something that other source control systems forced on you (and you can see it in one of the videos that Scott linked, about using BitKeeper - Ep.4 Bits and Booze, <a href="https://www.youtube.com/watch?v=MPFgOnACULU" rel="nofollow">https://www.youtube.com/watch?v=MPFgOnACULU</a>). The previous tools I used (SourceGear Vault, Microsoft Team Foundation Server) required you to have a separate working tree for each branch - the two were directly tied together. That's sometimes useful if you need to have the two versions running concurrently, but for short-lived topic branches or, as you say, working on multiple topics at the same time, it can be very inconvenient.<p>Initially it was jarring to <i>not</i> get a different working directory for each branch, but I soon got used to it. Working in the same directory for multiple branches means that untracked files stay around - which can be helpful for things like IDE workspace configuration, which is specific to me and the project, but not the branch.<p>You can of course have multiple clones of the repository - even clones of clones - but pushing/pulling branches from one to another is a lot more work than just checking out a branch in a different worktree.<p>My general working practice now is to keep release versions in their own worktrees, and to use the default worktree (where the .git directory lives) for development on the main branch. That means I don't need to keep resyncing my external dependencies (node_modules, for example) when switching between working on different releases. But I can see a good overview of my branches, and everything on the remote, from any worktree.</p>
]]></description><pubDate>Tue, 08 Apr 2025 11:57:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43620642</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=43620642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43620642</guid></item><item><title><![CDATA[New comment by kruador in "US Administration announces 34% tariffs on China, 20% on EU"]]></title><description><![CDATA[
<p>I've seen a suggestion that they're using ccTLDs.<p>Which might explain why the British Indian Ocean Territory - population, one US military base - has such a high tariff. The BIOT, aka Diego Garcia, has the ccTLD .io.</p>
]]></description><pubDate>Thu, 03 Apr 2025 12:02:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43568402</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=43568402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43568402</guid></item><item><title><![CDATA[New comment by kruador in "Git clone –depth 2 is vastly better than –depth 1 if you want to Git push later"]]></title><description><![CDATA[
<p>I can't replicate the initial problem, at least pushing to Bitbucket. I'm using Windows, so I didn't use `touch` - instead I used 'echo' to create a new file in a shallow clone of my repo. That repo is 126 MB on Bitbucket, and the shallow clone downloaded 6395 objects taking 40.68 MB.<p>I've tried with a new file both having content ('Test shallow clone push'), and again with an empty file. In both cases it pushed 3 objects, and in the empty file case it reused one (it turns out my repo already has some empty files in it).<p>It's always possible that this is (or was) a GitHub bug - I haven't tried it there.</p>
]]></description><pubDate>Wed, 12 Feb 2025 13:25:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43025162</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=43025162</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43025162</guid></item><item><title><![CDATA[New comment by kruador in "Git clone –depth 2 is vastly better than –depth 1 if you want to Git push later"]]></title><description><![CDATA[
<p>See my top-level response, but basically nothing is mangled. Instead Git internally treats it as a 'graft' and knows not to look for parents of the prior commit.<p>I started that comment as a reply to you but I realised that a) it may just have been a bug that might already be fixed and b) it looks like the Stack Overflow answer was speculative and not tested!</p>
]]></description><pubDate>Wed, 12 Feb 2025 13:05:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43025013</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=43025013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43025013</guid></item><item><title><![CDATA[New comment by kruador in "Git clone –depth 2 is vastly better than –depth 1 if you want to Git push later"]]></title><description><![CDATA[
<p>It isn't mangled. The commit is there as-is. Instead the repository has a file, ".git/shallow", which tells it not to look for the parents of any commit listed there. If you do a '--depth 1' clone, the file will list the single commit that was retrieved.<p>This is similar to the 'grafts' feature. Indeed 'git log' says 'grafted'.<p>You can test this using "git cat-file -p" with the commit that got retrieved, to print the raw object.<p>> git clone --depth 1 <a href="https://github.com/git/git">https://github.com/git/git</a>
> git log<p>commit 388218fac77d0405a5083cd4b4ee20f6694609c3 (grafted, HEAD -> master, origin/master, origin/HEAD)
Author: Junio C Hamano <gitster@pobox.com>
Date:   Mon Feb 10 10:18:17 2025 -0800<p><pre><code>    The ninth batch

    Signed-off-by: Junio C Hamano <gitster@pobox.com>
</code></pre>
> git cat-file -p 388218fac77d0405a5083cd4b4ee20f6694609c3<p>tree fc620998515e75437810cb1ba80e9b5173458d1c
parent 50e1821529fd0a096fe03f137eab143b31e8ef55
author Junio C Hamano <gitster@pobox.com> 1739211497 -0800
committer Junio C Hamano <gitster@pobox.com> 1739211512 -0800<p>The ninth batch<p>Signed-off-by: Junio C Hamano <gitster@pobox.com><p>I can't reproduce the problem pushing to Bitbucket, using the most recent Git for Windows (2.47.1.windows.2). It only sent 3 objects (which would be the blob of the new file, the tree object containing the new file, and the commit object describing the tree), not the 6000+ in the repository I tested it on.<p>It may be that there was a bug that has now been fixed. Or it may be something that only happens/happened with GitHub (i.e. a bug at the receiving end, not the sending one!)<p>I note that the Stack Overflow user who wrote the answer left a comment underneath saying<p>"worth noting: I haven't tested this; it's just some simple applied math. One clone-and-push will tell you if I was right. :-)"</p>
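For the curious, .git/shallow is plain text: one commit ID per line. A minimal sketch of reading it (a hypothetical helper for illustration, not part of any Git API):

```python
# Sketch: .git/shallow is just one full commit ID per line, listing the
# commits whose parents Git should not try to look up. Absent file means
# the clone isn't shallow.
from pathlib import Path

def shallow_commits(repo_dir):
    shallow = Path(repo_dir) / ".git" / "shallow"
    if not shallow.exists():
        return []  # not a shallow clone
    return [line.strip() for line in shallow.read_text().splitlines() if line.strip()]
```

In a `--depth 1` clone of git/git made at the commit shown above, the file would contain just `388218fac77d0405a5083cd4b4ee20f6694609c3`.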
]]></description><pubDate>Wed, 12 Feb 2025 13:03:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43025001</link><dc:creator>kruador</dc:creator><comments>https://news.ycombinator.com/item?id=43025001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43025001</guid></item></channel></rss>