<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dhx</title><link>https://news.ycombinator.com/user?id=dhx</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 08:34:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dhx" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dhx in "Fast16: High-precision software sabotage 5 years before Stuxnet"]]></title><description><![CDATA[
<p>I dug into how software such as LS-DYNA could have been modified. Take for example the EOS_JWL equation of state at [1] (vendor website, public manual), which LS-DYNA implements. This equation could seemingly be used, alongside other equations implemented within LS-DYNA, to answer questions such as how long it'd take for a detonator in a missile warhead to detonate a primary explosive substance to cause a particular pressure wave at 20m distance. Working backwards from this result may provide a required fuze timing. Equations and parameters used with LS-DYNA are derived from scientific research, such as [2], US government research from the 1980s providing experimental results for high explosive substances. One such example from [2] is experimentation to determine the friction an explosive substance has against different materials which may enclose it. Given the software has equations purposely designed for explosives modelling, it'd be fairly easy to target those equations in ways which slightly frustrate a scientist/engineer into thinking they've got a problem with the manufacturing quality of steel, rather than suspecting the software is deliberately adding +/-20% noise to a friction coefficient.<p>The modern equivalent may be something like {insert adversarial country name here} downloading a pirated version of Ansys Autodyn 2026 R1 shortly after official release from a Chinese cracking group on a Chinese bulletin board forum, where just a handful of seeders sit behind Russian ISPs. {insert adversarial country name here} then later notices during experimentation that the software's calculations never quite match experimental results, and perhaps comes to suspect the pirated copy was deliberately tampered with before distribution. 
However, this situation may be fairly easily solved by {insert adversarial country name here} simply grabbing a copy of the software they want off the hacked network of a random university or engineering consulting firm in the aerospace and defence sector. Plus it may be naive to assume {insert adversarial country name here} in 2026 couldn't develop their own software from scratch (and/or perform calculations manually), or just rely on experiments, to achieve whatever outcome some other nation state group of hackers is trying to avoid. {insert adversarial country name here} would need experimentation equipment and skills regardless, to verify manufacturing quality. Simulation software mostly reduces costs and timeframes by reducing the number of mockups and physical experiments needed. For example, it's cheap to run 1000 simulations of an artillery shell hitting vehicle armor plates as shown in [3], and more expensive and time consuming to do the same repetitive thing in the real world.<p>[1] <a href="https://ftp.lstc.com/anonymous/outgoing/jday/manuals/LS-DYNA_manual_Vol_I_R6.1.0.pdf#page=1189" rel="nofollow">https://ftp.lstc.com/anonymous/outgoing/jday/manuals/LS-DYNA...</a><p>[2] <a href="https://www.osti.gov/servlets/purl/6530310" rel="nofollow">https://www.osti.gov/servlets/purl/6530310</a><p>[3] <a href="https://www.youtube.com/watch?v=_dv2PecKUBM" rel="nofollow">https://www.youtube.com/watch?v=_dv2PecKUBM</a></p>
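<p>To make the sabotage idea concrete, here is a rough sketch of the standard JWL equation-of-state pressure formula and the hypothesised +/-20% tampering. The parameter values are illustrative TNT-like numbers only (real work uses calibrated values from references such as [2]), and the "tamper" is my invention, not anything known to have been done:

```python
import math, random

def jwl_pressure(v, e, A, B, R1, R2, omega):
    """JWL equation of state: pressure of detonation products as a
    function of relative volume v (V/V0) and internal energy e per
    unit initial volume."""
    return (A * (1 - omega / (R1 * v)) * math.exp(-R1 * v)
            + B * (1 - omega / (R2 * v)) * math.exp(-R2 * v)
            + omega * e / v)

# Illustrative JWL parameters in SI units, roughly TNT-like;
# not authoritative calibrated values.
A, B = 3.712e11, 3.231e9        # Pa
R1, R2, OMEGA = 4.15, 0.95, 0.30
E0 = 7.0e9                      # J/m^3

honest = jwl_pressure(1.0, E0, A, B, R1, R2, OMEGA)

# A sabotaged build would only need to perturb one coefficient a
# little on each run -- enough to make results "never quite match":
noise = random.uniform(0.8, 1.2)
tampered = jwl_pressure(1.0, E0, A * noise, B, R1, R2, OMEGA)
```

The point is how small the change is: one multiplicative noise factor buried in one coefficient, invisible in any output except a persistent mismatch with experiment.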
]]></description><pubDate>Mon, 27 Apr 2026 15:27:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47922915</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=47922915</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47922915</guid></item><item><title><![CDATA[New comment by dhx in "Experiment with ICEYE Open Data"]]></title><description><![CDATA[
<p>For more, there is also Umbra SAR open data at [1]. Most of this imagery is also copied to Wikimedia Commons, where, if someone has got around to doing so, much better metadata may have been added and smaller images of key features extracted from within a larger scene.<p>[1] <a href="https://registry.opendata.aws/umbra-open-data/" rel="nofollow">https://registry.opendata.aws/umbra-open-data/</a></p>
]]></description><pubDate>Sat, 18 Apr 2026 11:53:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47815182</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=47815182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47815182</guid></item><item><title><![CDATA[New comment by dhx in "QGIS 4.0"]]></title><description><![CDATA[
<p><a href="https://web.archive.org/web/20260303144625/https://changelog.qgis.org/en/version/4.0/" rel="nofollow">https://web.archive.org/web/20260303144625/https://changelog...</a></p>
]]></description><pubDate>Sat, 07 Mar 2026 16:51:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47289266</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=47289266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47289266</guid></item><item><title><![CDATA[New comment by dhx in "OsmAnd’s Faster Offline Navigation (2025)"]]></title><description><![CDATA[
<p>Nothing ready-to-go that I'm aware of. ATP will just observe in the next weekly crawl that a shop is no longer returned by the storefinder API call or sitemap crawl, and that shop will simply not be present in the next weekly dataset generated.<p>To set up archives of shop-specific pages (e.g. a record of opening hours, address, etc at a point in time), one could monitor the latest builds of <a href="https://alltheplaces.xyz/builds.html" rel="nofollow">https://alltheplaces.xyz/builds.html</a> and, when a new build completes, compare the new build against the previous build for differences. Then for any feature whose attributes have changed (address, phone number, opening hours, etc), archive the `website` and/or `source_uri` attribute pages again to ensure the latest snapshot is captured. Any new feature would get the same treatment, so the page for the newly observed shop/feature is archived for the first time.<p>I'm also aware ArchiveTeam projects tend to commence once the impending collapse of a retail chain is known and someone realises there is an unarchived website which would be useful to preserve. Monitoring of ATP feature counts for brands across time may give some hint of how a brand is performing and whether it is growing or shrinking, without having to find press releases and financial statements of the brand. Even if a brand suddenly announces bankruptcy (it happens all the time), generally the website will remain online for at least a few months whilst a new buyer is sought or whilst each retail location has a fire sale to get rid of remaining merchandise. It's also worthwhile to be aware of acquisitions of retail chains, as this often results in the new parent company changing websites soon after the acquisition closes, possibly removing useful content that once existed. Websites also change "just because", and this could be observed after-the-fact by seeing when ATP spiders break and get replaced/fixed.</p>
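<p>The diff-and-archive loop described above could be sketched roughly like this. The helper names and the minimal GeoJSON shape are mine (ATP's real builds are zips of per-spider GeoJSON files), but `website` and `source_uri` are real ATP attributes:

```python
import json

# Attributes whose change should trigger a fresh archive snapshot.
WATCHED = ("website", "contact:phone", "opening_hours", "addr:full")

def index_features(geojson):
    """Index a build's features by ref so two builds can be matched."""
    return {f["properties"].get("ref"): f["properties"]
            for f in geojson["features"]}

def pages_to_archive(old_build, new_build):
    """Return URLs for features that are new or have changed attributes."""
    old, new = index_features(old_build), index_features(new_build)
    urls = set()
    for ref, props in new.items():
        before = old.get(ref)
        changed = before is None or any(
            props.get(k) != before.get(k) for k in WATCHED)
        if changed:
            for key in ("website", "source_uri"):
                if props.get(key):
                    urls.add(props[key])
    return urls
```

The returned URL set would then be handed to whatever archiver (Wayback Machine save API, local WARC crawler, etc) you prefer.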
]]></description><pubDate>Fri, 27 Feb 2026 06:21:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47177190</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=47177190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47177190</guid></item><item><title><![CDATA[New comment by dhx in "OsmAnd’s Faster Offline Navigation (2025)"]]></title><description><![CDATA[
<p>The situation with retail chains is improving thanks to projects such as <a href="https://alltheplaces.xyz/" rel="nofollow">https://alltheplaces.xyz/</a> (disclaimer: I'm a contributor) and efforts of some OSM contributors to focus their contributions towards comparing OSM and ATP features to add missing shops, remove closed shops, update opening hours, etc. For one such example, see <a href="https://matkoniecz.codeberg.page/improving_openstreetmap_using_alltheplaces_dataset/" rel="nofollow">https://matkoniecz.codeberg.page/improving_openstreetmap_usi...</a> for a tool (created by <a href="https://news.ycombinator.com/user?id=matkoniecz">https://news.ycombinator.com/user?id=matkoniecz</a>) which is used to match and compare OSM and ATP features.<p>This work has been slow to take off though as the OSM community has traditionally been stuck on time wasting debates about whether opening hours displayed on the wall of a shop are copyrighted (just the raw data, not a photo of their presentation), and debating the merits and pitfalls of armchair mapping vs. on-the-ground mapping. At least these historical roadblocks seem to now be mostly resolved.<p>For OsmAnd, you might be able to use the OBF import feature (see <a href="https://www.osmand.net/docs/user/personal/import-export/" rel="nofollow">https://www.osmand.net/docs/user/personal/import-export/</a>) to add the raw ATP dataset, or potentially other open data such as Overture Maps if that is more to your liking. Data is mostly sourced direct from brand websites, APIs, etc (as if you were using a storefinder map on their website).</p>
]]></description><pubDate>Fri, 27 Feb 2026 02:19:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47175533</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=47175533</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47175533</guid></item><item><title><![CDATA[New comment by dhx in "Is the author of pdf-lib okay?"]]></title><description><![CDATA[
<p>Last account activity was 2024-07-08, to push changes to the personal website andrewjdillon.com.[1][2]<p>Last account activity to contribute to any repository was 2021-11-28, to comment on the hopding/pdf-lib repository.[2]<p>It's clearly now an unmaintained repository with 4+ years of inactivity, and likely now also a mostly unused GitHub account in general.<p>[1] <a href="https://github.com/Hopding/andrewjdillon.com/commit/0657c69084b0045ba28fbc492fef3fc70dc933e4" rel="nofollow">https://github.com/Hopding/andrewjdillon.com/commit/0657c690...</a><p>[2] <a href="https://play.clickhouse.com/play?user=play#U0VMRUNUICogRlJPTSBnaXRodWJfZXZlbnRzIFdIRVJFIGFjdG9yX2xvZ2luID0gJ0hvcGRpbmcnIE9SREVSIEJZIGNyZWF0ZWRfYXQgREVTQw==" rel="nofollow">https://play.clickhouse.com/play?user=play#U0VMRUNUICogRlJPT...</a></p>
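<p>For anyone curious, the query behind [2] is carried base64-encoded in the URL fragment, so you can recover exactly what was run against ClickHouse's public github_events dataset:

```python
import base64

# The fragment from the play.clickhouse.com link in [2].
fragment = ("U0VMRUNUICogRlJPTSBnaXRodWJfZXZlbnRzIFdIRVJFIGFjdG9yX2xvZ2lu"
            "ID0gJ0hvcGRpbmcnIE9SREVSIEJZIGNyZWF0ZWRfYXQgREVTQw==")
sql = base64.b64decode(fragment).decode("utf-8")
print(sql)  # the SQL that the play instance executes for link [2]
```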
]]></description><pubDate>Mon, 09 Feb 2026 13:35:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46945043</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46945043</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46945043</guid></item><item><title><![CDATA[New comment by dhx in "Is the author of pdf-lib okay?"]]></title><description><![CDATA[
<p>The SANs associated with <a href="https://crt.sh/?q=andrewjdillon.com" rel="nofollow">https://crt.sh/?q=andrewjdillon.com</a> are extremely suspicious. They reminded me straight away of <a href="https://ourbigbook.com/cirosantilli/cia-2010-covert-communication-websites" rel="nofollow">https://ourbigbook.com/cirosantilli/cia-2010-covert-communic...</a><p>There appears to be no plausible link between the SANs other than the very obvious lack of plausibility of each website. They're mostly pretend (or knock-off) business websites in random countries (everywhere from Trinidad and Tobago, Germany, mainland USA, Hawaii...) in various languages, and all the ones I checked have no verifiable substance to them. For example, one domain is a supposed USA shipping/logistics company whose website states they have 1949 customers and have only delivered 7126 packages, and claims a head office at a house in Renton WA, an office at a different house in Stockbridge GA and a supposed warehouse at a third house in Portland OR. Most domains don't include any valid contact or business information, even a supposed restaurant where you'd want people to find your location easily!<p>There does appear to be heavy use of Google Firebase, and many of the sites share the same IP address(es) for hosting. A reverse IP lookup of domains hosted at those IP addresses reveals more random suspicious domains beyond just those listed at <a href="https://crt.sh/?q=andrewjdillon.com" rel="nofollow">https://crt.sh/?q=andrewjdillon.com</a></p>
]]></description><pubDate>Mon, 09 Feb 2026 12:46:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46944654</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46944654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46944654</guid></item><item><title><![CDATA[New comment by dhx in "72M Points of Interest"]]></title><description><![CDATA[
<p>I contribute to ATP and can confirm that the author of the wildberries spider was deliberately trying to collect <a href="https://wiki.openstreetmap.org/wiki/Tag:shop%3Doutpost" rel="nofollow">https://wiki.openstreetmap.org/wiki/Tag:shop%3Doutpost</a> (online order pickup locations). Capturing such features is not common within the current set of ATP spiders. A quick search indicates that OSM doesn't appear to have tags designed to capture pickup/dropoff partnerships between retail brands, for example, an agreement from a pet supply shop to allow collection of parcels from select fuel stations of a partner brand. Thus I think the author of the wildberries spider has used shop=outpost as the closest tag available in OSM, and Overture Maps' filters wouldn't be able to omit these features from their dataset unless Overture Maps adds wildberries to their exclusion list.<p>Ideally ATP's "located_in" and "located_in:wikidata" fields would be populated for these wildberries pickup locations, making it clear the pickup location is part of a parent feature (e.g. fuel station, supermarket). These fields are specific to ATP and are not OSM fields. OSM would expect features to be merged and a hypothetical field such as "pickup_brands:wikidata=Q1;Q2;Q3" be used instead on the parent feature.<p>ATP has a much more inclusive set of features it can extract than what Overture Maps, TomTom et al care about. As Overture Maps is more opinionated about what they aggregate, they will filter out ATP extracted features such as individual power poles, park bench seats, local government managed street and park trees, stormwater drain manholes, cemetery plots, weather stations, tsunami buoys, etc. I think there might be some exceptions if it helps TomTom et al with their products, such as speed camera locations, national postal provider drop-off/pick-up locations within other branded retail shops, etc.</p>
]]></description><pubDate>Sun, 08 Feb 2026 01:01:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46930206</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46930206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46930206</guid></item><item><title><![CDATA[New comment by dhx in "Microsoft gave FBI set of BitLocker encryption keys to unlock suspects' laptops"]]></title><description><![CDATA[
<p>BitLocker encrypts data on a disk using what it calls a Full Volume Encryption Key (FVEK).[1][2] This FVEK is encrypted with a separate key which it calls a Volume Management Key (VMK), and the VMK-encrypted FVEK is stored in one to three (for redundancy) metadata blocks on the disk.[1][2] The VMK is then encrypted one or more times, each time with a key which is derived/stored using one of several methods identified by VolumeKeyProtectorID.[2][3] For modern Windows installations, I think the defaults would now be 3 "Numerical password" (a 128-bit recovery key formatted with checksums) and 4 "TPM And PIN". Previously, instead of 4 "TPM And PIN", most Windows installations (without TPMs forced to be used) would probably have been using just 8 "Passphrase". Unless things have changed recently, in mode 4 "TPM And PIN" the TPM stores a partial key, the PIN supplied by the user is the other partial key, and both partial keys are combined together to produce the key used to decrypt the VMK.<p>Seemingly, once you've installed Windows and given Microsoft your BitLocker keys in escrow, you could then use Remove-BitLockerKeyProtector to delete the VMK which is protected with mode 3 "Numerical password" (recovery key).[4] It appears that the escrow process (possibly the same as used by BackupToAAD-BitLockerKeyProtector) might only send the numerical key, rather than the VMK itself.[5][6] I couldn't find from a quick Internet search someone who has reverse engineered fveskybackup.dll to confirm this is the case though. If Microsoft are sending the VMK _and_ the numerical key, then they have everything needed to decrypt a disk. If Microsoft are only sending the numerical key, and all numerical key protected VMKs are later securely erased from the disk, the numerical key they hold in escrow wouldn't be useful later on.<p>Someone did however ask the same question I first had. 
What if I had, for example, a billion BitLocker recovery keys I wanted to ensure were backed up for my protection, safety and peace of mind? This curious person did however already know the limit was 200 recovery keys per device, and found out re-encryption would fail if this limit had been reached, then realised Microsoft had fixed this bug by adding a mechanism to automatically delete stale recovery keys in escrow, then reverse engineered fveskybackup.dll and an undocumented Microsoft Graph API call used to delete (or "delete") escrowed BitLocker recovery keys in batches of 16.[7]<p>It also appears you might only be able to encrypt 10000 disks per day, or change your mind on your disk's BitLocker recovery keys 10000 times per day.[8] That might sound like a lot, particularly for an individual, but the API also perhaps applies a limit of 150 disks being encrypted every 15 minutes for an entire organisation/tenancy. It doesn't look like anyone has written up an investigation into the limits that might apply for personal Microsoft accounts, or whether limits differ if the MS-Organization-Access certificate is presented, or what happens to a Windows installation if a limit is encountered (does it skip BitLocker and continue the installation with it disabled?).<p>[1] <a href="https://learn.microsoft.com/en-us/purview/office-365-bitlocker-and-distributed-key-manager-for-encryption" rel="nofollow">https://learn.microsoft.com/en-us/purview/office-365-bitlock...</a><p>[2] <a href="https://itm4n.github.io/tpm-based-bitlocker/" rel="nofollow">https://itm4n.github.io/tpm-based-bitlocker/</a><p>[3] <a href="https://learn.microsoft.com/en-us/windows/win32/secprov/getkeyprotectortype-win32-encryptablevolume" rel="nofollow">https://learn.microsoft.com/en-us/windows/win32/secprov/getk...</a><p>[4] <a href="https://learn.microsoft.com/en-us/powershell/module/bitlocker/remove-bitlockerkeyprotector?view=windowsserver2025-ps" 
rel="nofollow">https://learn.microsoft.com/en-us/powershell/module/bitlocke...</a><p>[5] <a href="https://learn.microsoft.com/en-us/graph/api/bitlockerrecoverykey-get?view=graph-rest-1.0&tabs=http" rel="nofollow">https://learn.microsoft.com/en-us/graph/api/bitlockerrecover...</a><p>[6] <a href="https://learn.microsoft.com/en-us/powershell/module/bitlocker/backuptoaad-bitlockerkeyprotector?view=windowsserver2025-ps" rel="nofollow">https://learn.microsoft.com/en-us/powershell/module/bitlocke...</a><p>[7] <a href="https://patchmypc.com/blog/bitlocker-recovery-key-cleanup/" rel="nofollow">https://patchmypc.com/blog/bitlocker-recovery-key-cleanup/</a><p>[8] <a href="https://learn.microsoft.com/en-us/graph/throttling-limits#information-protection-service-limits" rel="nofollow">https://learn.microsoft.com/en-us/graph/throttling-limits#in...</a></p>
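<p>The layering described above (one FVEK wrapped by a VMK, the VMK wrapped independently once per key protector) can be sketched with a deliberately toy cipher. The SHA-256 keystream stands in for AES and none of BitLocker's real key-derivation details are modelled; the point is only the key hierarchy and what deleting a protector achieves:

```python
import hashlib, os

def toy_wrap(key, data):
    """XOR data with a SHA-256-derived keystream. A toy stand-in for
    AES key wrapping, NOT real cryptography."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_unwrap = toy_wrap  # XOR with the same keystream is its own inverse

fvek = os.urandom(32)                 # encrypts the actual volume data
vmk = os.urandom(32)                  # wraps the FVEK
encrypted_fvek = toy_wrap(vmk, fvek)  # what sits in the metadata blocks

# The same VMK is wrapped once per protector. Deleting a protector
# (e.g. Remove-BitLockerKeyProtector on the numerical password slot)
# just deletes one of these independent wrappings.
protectors = {
    "tpm_and_pin": os.urandom(32),
    "numerical_password": os.urandom(32),  # the 48-digit recovery key
}
wrapped_vmks = {name: toy_wrap(k, vmk) for name, k in protectors.items()}

del wrapped_vmks["numerical_password"]  # escrowed copy is now useless
recovered_vmk = toy_unwrap(protectors["tpm_and_pin"],
                           wrapped_vmks["tpm_and_pin"])
recovered_fvek = toy_unwrap(recovered_vmk, encrypted_fvek)
```

After the deletion, the escrowed recovery key no longer corresponds to any wrapping left on disk, while the TPM+PIN path still recovers the FVEK, which is exactly the property the escrow question above hinges on.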
]]></description><pubDate>Sat, 24 Jan 2026 13:20:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46743291</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46743291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46743291</guid></item><item><title><![CDATA[New comment by dhx in "Microsoft gave FBI set of BitLocker encryption keys to unlock suspects' laptops"]]></title><description><![CDATA[
<p>Not really, but it's quite complex for Linux because there are so many ways one can manage the configuration of a Linux environment. For something high security, I'd recommend something like Gentoo or NixOS because they have several huge advantages:<p>- They make it easy to set up and maintain immutable, reproducible builds.<p>- You only install the software you need, and even within each software item, you only build/install the specific features you need. For example, if you are building a server that will sit in a datacentre, you don't need to build software with Bluetooth support, and by extension, you won't need to install Bluetooth utilities and libraries.<p>- Both have a monolithic Git repository for packages, which is advantageous because you gain the benefit of a giant distributed Merkle tree for verifying you have the same packages everyone else has. As observed with xz-utils, you want a supply chain attacker to be forced to infect as many people as possible so more people are likely to detect it.<p>- Sandboxing is used to minimise the lines of code during build/install which need to have any sort of privileges. Most packages are built and configured as "nobody" in an isolated sandbox, then a privileged process outside of the sandbox peeks inside to copy out whatever the package ended up installing. Obviously the outside process also performs checks such as preventing cool-new-free-game from overwriting /usr/bin/sudo.<p>- The time between a patch hitting an upstream repository and that patch being part of a package installed in these distributions is short. This is important at the moment because there are many efforts underway to replace and rewrite old insecure software with modern secure equivalents, so you want to be using software with a modern design, not just 5 year old long-term-support software. E.g. glycin is a relatively new library used by GNOME applications for loading of untrusted images. 
You don't want to be waiting 3 years for a new long-term-support release of your distribution for this software.<p>No matter which distribution you use, you'll get some common benefits such as:<p>- Ability to deploy user applications using something like Flatpak, which ensures they run within a sandbox.<p>- Ability to deploy system applications using something like systemd, which ensures they run within a sandbox.<p>Microsoft have long underinvested in Windows (particularly the kernel), and have made numerous poor and failed attempts to introduce secure application packaging/sandboxing over the years. Windows is now akin to the horse and buggy when compared to the flying cars of open source Linux, iOS, Android and HarmonyOS (v5+ in particular, which uses the HongMeng kernel that is even EAL6+, ASIL D and SIL 3 rated).</p>
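<p>As an illustration of the systemd sandboxing point, here's what a locked-down unit might look like. The service name and binary path are made up; the directives themselves are standard systemd sandboxing options:

```ini
# /etc/systemd/system/cool-new-free-game.service (hypothetical unit)
[Unit]
Description=Example sandboxed system service

[Service]
ExecStart=/usr/bin/cool-new-free-game
# Run as a transient unprivileged user, with no privilege escalation.
DynamicUser=yes
NoNewPrivileges=yes
# Read-only /usr and /etc, no /home, private /tmp and /dev.
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
# Only ordinary IP networking and common service syscalls.
RestrictAddressFamilies=AF_INET AF_INET6
SystemCallFilter=@system-service
```

With a unit like this, even a compromised service can't touch /usr/bin/sudo or anyone's home directory, which is the same containment idea as the build sandboxes described above.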
]]></description><pubDate>Sat, 24 Jan 2026 09:23:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46742177</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46742177</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46742177</guid></item><item><title><![CDATA[New comment by dhx in "Open Infrastructure Map"]]></title><description><![CDATA[
<p>If you want more up-to-date/accurate data on NZ transmission infrastructure there is also the following "Point" geometry tagged per OSM conventions:<p>[1] Transpower pylons: <a href="https://alltheplaces-data.openaddresses.io/map.html?show=https%3A%2F%2Falltheplaces-data.openaddresses.io%2Fruns%2F2026-01-03-13-32-38%2Foutput%2Ftranspower_poles_nz.geojson" rel="nofollow">https://alltheplaces-data.openaddresses.io/map.html?show=htt...</a><p>[2] Transpower substations: <a href="https://alltheplaces-data.openaddresses.io/map.html?show=https%3A%2F%2Falltheplaces-data.openaddresses.io%2Fruns%2F2026-01-03-13-32-38%2Foutput%2Ftranspower_substations_nz.geojson" rel="nofollow">https://alltheplaces-data.openaddresses.io/map.html?show=htt...</a><p>The public source of this data (ArcGIS Feature Server account of Transpower) shows data last modified by Transpower in October 2025 for pylons and February 2025 for substations. At the rate of development of NZ, you wouldn't expect major changes to any of this data unless it's a major transmission upgrade project identified years in advance in hundreds of public announcements and documents.<p>For distribution, the largest distributor in NZ (Vector) provides "Line" geometry at <a href="https://www.arcgis.com/apps/mapviewer/index.html?url=https://services6.arcgis.com/8RWEO35G1ALMME0I/ArcGIS/rest/services/distribution_feeder_network_and_zone_substations/FeatureServer/2&source=sd" rel="nofollow">https://www.arcgis.com/apps/mapviewer/index.html?url=https:/...</a>  (note: not included in AllThePlaces due to ATP not currently collecting geometry other than points)</p>
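<p>If you want the raw data rather than the map viewer, the standard ArcGIS REST /query operation will hand the Vector layer back as GeoJSON. The helper below is my own sketch; the layer URL is the one linked above and the query parameters are standard ArcGIS REST ones:

```python
import urllib.parse

# Layer 2 of Vector's public ArcGIS Feature Server (the "Line"
# geometry linked above).
LAYER = ("https://services6.arcgis.com/8RWEO35G1ALMME0I/ArcGIS/rest/services/"
         "distribution_feeder_network_and_zone_substations/FeatureServer/2")

def geojson_query_url(layer_url, where="1=1"):
    """Build an ArcGIS REST /query URL returning matching features
    as GeoJSON (all features and all fields by default)."""
    params = {"where": where, "outFields": "*", "f": "geojson"}
    return layer_url + "/query?" + urllib.parse.urlencode(params)

url = geojson_query_url(LAYER)
```

Note ArcGIS servers cap the number of features per response, so a full download may need paging with the resultOffset parameter.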
]]></description><pubDate>Thu, 08 Jan 2026 12:20:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46540209</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46540209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46540209</guid></item><item><title><![CDATA[New comment by dhx in "Open Infrastructure Map"]]></title><description><![CDATA[
<p>Australia is minuscule by global standards and Alice Springs is minuscule by Australian standards. Alice Springs isn't connected to the grid servicing most of Australia's population crammed up along the East coast, and doesn't have much in the way of heavy industrial users nearby. The difficulty for OSM mappers is that the low-capacity above-ground power lines in Alice Springs occupy no more pixels than the trunk of any 20-year-old tree, so at satellite imagery resolutions of >30cm you may need to find an image taken at sunrise or sunset where the long shadow of a pole is visible on the ground. I also think it is preferred in remote locations such as Alice Springs to run lines underground (particularly along roads) due to the decreased total cost of ownership from not having to worry about bushfire and flood damage to infrastructure.<p>The ACT government provides ~10cm aerial imagery of Canberra and surrounds a few times a year, and from this imagery, unless a minor power pole is obscured by trees or a building, it is generally easy to identify most poles. Evoenergy (distribution operator for the ACT) also publicly provides detailed maps of poles and lines no matter how minor they are. The reason this detail won't be mapped in OSM is the lack of interest and availability of mappers to micro-map every minor power pole from aerial imagery, and OSM's very conservative approach to importing datasets, particularly from a licensing perspective (e.g. attempting to apply European database directive concerns in countries like Australia which don't have equivalent laws, and even have opposing case law precedents).<p>Australia is one of the most open countries when it comes to supplying electrical grid data. Even underground conduit locations are available publicly for most distributors, as well as designed summer/winter constraints for each transmission line (e.g. maximum kA per line). 
See [1] for some links to maps and other data that is made publicly available.<p>[1] <a href="https://query.wikidata.org/#SELECT%20%3Foperator%20%3FoperatorLabel%20%28GROUP_CONCAT%28DISTINCT%20%3Foperating_areaLabel%3B%20SEPARATOR%20%3D%20%22%3B%20%22%29%20AS%20%3Foperating_areas%29%20%28GROUP_CONCAT%28DISTINCT%20%3Fexternal_data_URL%3B%20SEPARATOR%20%3D%20%22%3B%20%22%29%20AS%20%3Fexternal_data_URLs%29%20%28GROUP_CONCAT%28DISTINCT%20%3Fofficial_map_URL%3B%20SEPARATOR%20%3D%20%22%3B%20%22%29%20AS%20%3Fofficial_map_URLs%29%20%28GROUP_CONCAT%28DISTINCT%20%3Fofficial_website%3B%20SEPARATOR%20%3D%20%22%3B%20%22%29%20AS%20%3Fofficial_websites%29%20WHERE%20%7B%0A%20%20VALUES%20%3Foperator_types%20%7Bwd%3AQ472093%20wd%3AQ112046%7D.%0A%20%20%3Foperator%20wdt%3AP31%20%3Foperator_types.%0A%20%20%3Foperator%20wdt%3AP17%20wd%3AQ408.%0A%20%20OPTIONAL%20%7B%20%3Foperator%20wdt%3AP2541%20%3Foperating_area.%20%7D%0A%20%20OPTIONAL%20%7B%20%3Foperator%20wdt%3AP1325%20%3Fexternal_data_URL.%20%7D%0A%20%20OPTIONAL%20%7B%20%3Foperator%20wdt%3AP9601%20%3Fofficial_map_URL.%20%7D%0A%20%20OPTIONAL%20%7B%20%3Foperator%20wdt%3AP856%20%3Fofficial_website.%20%7D%0A%20%20SERVICE%20wikibase%3Alabel%20%7B%0A%20%20%20%20bd%3AserviceParam%20wikibase%3Alanguage%20%22en%22.%0A%20%20%20%20%3Foperator%20rdfs%3Alabel%20%3FoperatorLabel%20.%0A%20%20%20%20%3Foperating_area%20rdfs%3Alabel%20%3Foperating_areaLabel%20.%0A%20%20%7D%0A%7D%0AGROUP%20BY%20%3Foperator%20%3FoperatorLabel%0AORDER%20BY%20%3FoperatorLabel" rel="nofollow">https://query.wikidata.org/#SELECT%20%3Foperator%20%3Foperat...</a></p>
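<p>The link in [1] is just a URL-encoded SPARQL query against the public Wikidata Query Service, so it can equally be run from a script. Below is a trimmed-down version of the same query (Australian operators via wd:Q408, the external-data and official-map properties P1325/P9601 from the original); the helper function is my own:

```python
import urllib.parse, urllib.request, json

# Trimmed-down version of the query behind [1].
QUERY = """
SELECT ?operator ?operatorLabel ?external_data_URL ?official_map_URL WHERE {
  VALUES ?operator_types { wd:Q472093 wd:Q112046 }
  ?operator wdt:P31 ?operator_types ;
            wdt:P17 wd:Q408 .
  OPTIONAL { ?operator wdt:P1325 ?external_data_URL. }
  OPTIONAL { ?operator wdt:P9601 ?official_map_URL. }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

def wdqs_url(query):
    """Build a Wikidata Query Service request URL returning JSON."""
    return ("https://query.wikidata.org/sparql?format=json&query="
            + urllib.parse.quote(query))

if __name__ == "__main__":
    # Fetch and print each operator with its official map URL, if any.
    with urllib.request.urlopen(wdqs_url(QUERY)) as resp:
        for row in json.load(resp)["results"]["bindings"]:
            print(row["operatorLabel"]["value"],
                  row.get("official_map_URL", {}).get("value", ""))
```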
]]></description><pubDate>Thu, 08 Jan 2026 11:51:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46539998</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46539998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46539998</guid></item><item><title><![CDATA[New comment by dhx in "The Most Popular Blogs of Hacker News in 2025"]]></title><description><![CDATA[
<p>I was looking for @marklit (Mark Litwintschik / <a href="https://tech.marksblogg.com" rel="nofollow">https://tech.marksblogg.com</a>) in the list, as he's a geospatial-focussed blogger I've seen regularly on HN with interesting blog posts where he finds and presents open source datasets I'd never thought about, walks through some basic processing/querying steps, and provides some examples of what can be produced with the data. Many of the blog posts have left me thinking about possibilities to set up bots for uploading maps to Wikimedia Commons (for embedding within Wikipedias etc) based on these interesting datasets, or at least automating via scripts the development/upload of maps on a country-by-country basis (or other criteria) for static once-off datasets.<p>Unfortunately he doesn't show in the top 100. Also unfortunately, there is no blogger described in the top 100 as having a geospatial interest/focus.</p>
]]></description><pubDate>Sun, 04 Jan 2026 12:01:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46487163</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46487163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46487163</guid></item><item><title><![CDATA[New comment by dhx in "BYD Sells 4.6M Vehicles in 2025, Meets Revised Sales Goal"]]></title><description><![CDATA[
<p>Compare at the same scale:<p>Vantor Legion-2 image of the BYD plant in Zhengzhou as captured on 18 January 2025: <a href="https://livingatlas.arcgis.com/wayback/#mapCenter=113.9361%2C34.3946%2C16" rel="nofollow">https://livingatlas.arcgis.com/wayback/#mapCenter=113.9361%2...</a><p>Vantor WorldView-3 image of the Tesla plant in Austin as captured on 31 January 2024: <a href="https://livingatlas.arcgis.com/wayback/#mapCenter=-97.6189%2C30.2212%2C16" rel="nofollow">https://livingatlas.arcgis.com/wayback/#mapCenter=-97.6189%2...</a></p>
]]></description><pubDate>Thu, 01 Jan 2026 17:06:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46455737</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46455737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46455737</guid></item><item><title><![CDATA[New comment by dhx in "Trump Isn't Building a Ballroom"]]></title><description><![CDATA[
<p>I don't understand the pretend secrecy. Why not just state openly that they're rebuilding or expanding the bunker? Wouldn't it be obvious from the high number of trucks coming and going from the area (even if disguised within a shed) that significant excavation is occurring, or that a significant amount of steel and concrete is being added?<p>It didn't take long for people to link a growing mound of spoil on a nearby golf course (East Potomac Golf Links) to excavations occurring at the White House.[1] A few simple volume calculations and/or truck movement counts are all one would seemingly need to estimate the size of the new bunker.<p>USD$300m also doesn't buy much underground, especially at top-secret military bunker pricing. Look at the cost of typical rail tunneling projects (much simpler and more efficient construction) for a rough comparison.<p>[1] <a href="https://golf.com/news/white-house-carting-dirt-golf-course-potomac/" rel="nofollow">https://golf.com/news/white-house-carting-dirt-golf-course-p...</a></p>
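The back-of-envelope estimate described above can be sketched in a few lines. All numbers here are illustrative assumptions (mound dimensions and soil bulking factor), not measurements of the actual site:

```python
# Hypothetical worked example: bound the excavation size from the spoil mound.
# The mound radius/height and the bulking factor below are assumed values.
import math

def cone_volume(radius_m: float, height_m: float) -> float:
    """Volume of a roughly conical spoil mound, in cubic metres."""
    return math.pi * radius_m ** 2 * height_m / 3

def bank_volume(loose_volume_m3: float, bulking_factor: float = 1.3) -> float:
    """In-ground (excavated) volume; dug soil typically bulks up ~25-40%."""
    return loose_volume_m3 / bulking_factor

loose = cone_volume(radius_m=40, height_m=6)   # assumed mound dimensions
excavated = bank_volume(loose)
print(f"loose spoil: {loose:,.0f} m^3, in-ground: {excavated:,.0f} m^3")
```

Truck counts give an independent cross-check: dividing the loose volume by a typical dump-truck load (say 10-15 m^3) and comparing against observed movements would confirm or refute the estimate.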
]]></description><pubDate>Tue, 23 Dec 2025 11:15:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46364368</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46364368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46364368</guid></item><item><title><![CDATA[New comment by dhx in "$96M AUD revamp of Bom website bombs out on launch"]]></title><description><![CDATA[
<p>Are you referring to the 'GovPort' website episode of Utopia, season 3, episode 3 'Nation Shapers'[1] or a different episode?<p>[1] <a href="https://www.youtube.com/watch?v=_otJbx-PVOw" rel="nofollow">https://www.youtube.com/watch?v=_otJbx-PVOw</a></p>
]]></description><pubDate>Thu, 27 Nov 2025 12:26:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46068634</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46068634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46068634</guid></item><item><title><![CDATA[New comment by dhx in "Ask HN: Should account creation/origin country be displayed on HN profiles?"]]></title><description><![CDATA[
<p>It might have a minor beneficial impact on tourism in Saint Barthelemy and Norfolk Island from geeks wanting a trendy new account registered in a territory with fewer than 1000 IPv4 addresses allocated.[1]<p>A more useful addition would be a contributions calendar similar to GitHub's [2], but focused on which time zones the user is active within and, importantly, the latency of the user's replies. It's trivial to fake a geographic location observed through source IP addresses (or even RTT multilateration), but much harder to fake the time zones a user is active within, particularly if the latency of replies is monitored.<p>edit: To further clarify, I don't think a contributions calendar would be beneficial to HN either. I've never cared to think about the country a commenter resides in, and don't care about username/real name either unless the commenter is appealing to their own authority (e.g. "I am the author of this software"). Even then, the usefulness of an appeal to one's own authority is often limited to the ability to reverse look up the user's personal website (which is itself shown to be notable from other sources) for a link back to their HN profile.<p>[1] <a href="https://impliedchaos.github.io/ip-alloc/" rel="nofollow">https://impliedchaos.github.io/ip-alloc/</a><p>[2] <a href="https://docs.github.com/en/account-and-profile/concepts/contributions-on-your-profile#contributions-calendar" rel="nofollow">https://docs.github.com/en/account-and-profile/concepts/cont...</a></p>
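The time-zone inference idea can be sketched as follows. This is a toy illustration, not anything HN provides; `activity_by_hour` and `quietest_window` are hypothetical helpers:

```python
# Sketch: bin a user's comment timestamps by UTC hour of day. A sustained
# quiet window suggests the user's sleeping hours, which narrows the
# plausible time zones far more reliably than a source IP address does.
from collections import Counter
from datetime import datetime, timezone

def activity_by_hour(timestamps: list[datetime]) -> Counter:
    """Count comments per UTC hour of day (0..23)."""
    return Counter(ts.astimezone(timezone.utc).hour for ts in timestamps)

def quietest_window(hours: Counter, width: int = 8) -> int:
    """Start hour (UTC) of the least-active `width`-hour window."""
    return min(range(24),
               key=lambda s: sum(hours[(s + i) % 24] for i in range(width)))
```

Reply latency is harder to fake for the same reason: consistently answering within minutes at 3am local time, night after night, is not sustainable for someone merely spoofing their IP geolocation.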
]]></description><pubDate>Tue, 25 Nov 2025 13:59:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46045836</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46045836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46045836</guid></item><item><title><![CDATA[New comment by dhx in "NSA and IETF, part 3: Dodging the issues at hand"]]></title><description><![CDATA[
<p>I agree that Curve25519 and other "safer" algorithms are far from immune to side channel attacks in their implementations. For example, [1] is a single-trace EM side channel key recovery attack against Curve25519 as implemented in MbedTLS on an ARM Cortex-M4. That implementation had the benefit of a constant-time Montgomery ladder, an approach NIST P curve implementations have traditionally lacked, but it nonetheless failed due to a conditional swap instruction that leaked secret state via EM.<p>The general question is: could a standard in 2025 build upon decades of research and implementation failures to specify side channel resistant algorithms that address conditional jumps, processor optimisations for math functions, etc. which might leak secret state via timing, power or EM signals? See for example section VI of [1], which proposed a new side channel countermeasure that ended up being implemented in MbedTLS to mitigate the conditional swap leak. Could such countermeasures be added to the standard in the first instance, rather than left to implementers to figure out from their review of IACR papers?<p>One could argue that standards simply follow the interests of standards proposers and organisations, who might not care about cryptography implementations on smart cards, TPMs, etc, or side channel attacks between different containers on the same host. Instead, perhaps standards proposers and organisations only care about side channel resistance across remote networks with high noise floors for timing signals, where attacks such as [2] (300ns timing signal) are not considered feasible. If this is the case, I would argue that the standards should still state their security model more clearly, for example:<p>* Is the standard assuming the implementation has a noise floor of 300ns for timing signals, 1ms, etc?
<p>* Are there any particular cryptographic primitives that implementers must use to avoid particular types of side channel attack (particularly timing)?<p>* Implementation fingerprinting resistance/avoidance: how many choices can an implementation make that may allow a cryptosystem party to be deanonymised by the specific version of a crypto library in use?[3] Does the standard provide any guarantee for fingerprinting resistance/avoidance?<p>[1] Template Attacks against ECC: practical implementation against Curve25519, <a href="https://cea.hal.science/cea-03157323/document" rel="nofollow">https://cea.hal.science/cea-03157323/document</a><p>[2] CVE-2024-13176 openssl Timing side-channel in ECDSA signature computation, <a href="https://openssl-library.org/news/vulnerabilities/index.html#CVE-2024-13176" rel="nofollow">https://openssl-library.org/news/vulnerabilities/index.html#...</a><p>[3] Table 2, pyecsca: Reverse engineering black-box elliptic-curve cryptography via side-channel analysis, <a href="https://tches.iacr.org/index.php/TCHES/article/view/11796/11301" rel="nofollow">https://tches.iacr.org/index.php/TCHES/article/view/11796/11...</a></p>
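The conditional-swap leak discussed above can be illustrated with a toy sketch. Python is used purely for exposition: a real constant-time cswap must be written where the arithmetic provably lowers to branch-free, fixed-width instructions, which CPython does not guarantee.

```python
# Toy contrast between a branchy swap (control flow depends on the secret
# bit, so it can leak via timing/EM) and an arithmetic, branchless swap
# (the same instruction sequence runs regardless of the secret bit).
MASK64 = (1 << 64) - 1  # emulate a 64-bit word; Python ints are unbounded

def cswap_branchy(bit: int, a: int, b: int) -> tuple[int, int]:
    # Leaky pattern: a secret-dependent branch.
    return (b, a) if bit else (a, b)

def cswap_constant_time(bit: int, a: int, b: int) -> tuple[int, int]:
    # Branchless: mask is all-ones when bit == 1, all-zeros when bit == 0.
    mask = (-bit) & MASK64
    t = (a ^ b) & mask
    return a ^ t, b ^ t
```

The countermeasure in section VI of [1] goes further than this classic XOR-mask trick (which itself leaked via EM in the attacked implementation), but the sketch shows the baseline idea of removing the secret-dependent branch.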
]]></description><pubDate>Tue, 25 Nov 2025 09:46:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46044161</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46044161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46044161</guid></item><item><title><![CDATA[New comment by dhx in "NSA and IETF, part 3: Dodging the issues at hand"]]></title><description><![CDATA[
<p>Amongst the numerous reasons why you _don't_ want to rush into implementing new algorithms is that even the _reference implementation_ (and most other early implementations) of Kyber/ML-KEM included multiple timing side channel vulnerabilities that allowed for key recovery.[1][2]<p>djb has been consistent in his view for decades that cryptography standards need to consider the foolproofness of implementation, so that a minor implementation mistake specific to the timing of particular instructions on particular CPU architectures, or particular compiler optimisations, etc. doesn't break the implementation. See for example the many problems of the NIST P-224/P-256/P-384 ECC curves, which djb has been instrumental in fixing through widespread deployment of X25519.[3][4][5]<p>[1] <a href="https://cryspen.com/post/ml-kem-implementation/" rel="nofollow">https://cryspen.com/post/ml-kem-implementation/</a><p>[2] <a href="https://kyberslash.cr.yp.to/faq.html" rel="nofollow">https://kyberslash.cr.yp.to/faq.html</a> / <a href="https://kyberslash.cr.yp.to/libraries.html" rel="nofollow">https://kyberslash.cr.yp.to/libraries.html</a><p>[3] <a href="https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication#Constant_time_Montgomery_ladder" rel="nofollow">https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplic...</a><p>[4] <a href="https://safecurves.cr.yp.to/ladder.html" rel="nofollow">https://safecurves.cr.yp.to/ladder.html</a><p>[5] <a href="https://cr.yp.to/newelliptic/nistecc-20160106.pdf" rel="nofollow">https://cr.yp.to/newelliptic/nistecc-20160106.pdf</a></p>
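The KyberSlash class of bugs [2] came from division instructions whose timing depends on a secret-derived operand. A hedged sketch of the general fix (the shift amount and reciprocal here are illustrative choices, not the actual ML-KEM code) replaces the division with a precomputed multiply-and-shift:

```python
# Sketch: dividing by q with a secret-dependent numerator can leak through
# variable-time division on some CPUs. A precomputed reciprocal turns the
# division into a multiply and shift, which typically run in constant time.
Q = 3329                        # the ML-KEM modulus
SHIFT = 26                      # illustrative precision choice
RECIP = (1 << SHIFT) // Q + 1   # ceil-style reciprocal of q

def div_q_variable_time(x: int) -> int:
    return x // Q                # timing may depend on x on some CPUs

def div_q_multiply_shift(x: int) -> int:
    return (x * RECIP) >> SHIFT  # same quotient for 0 <= x < 2**16
```

The equivalence holds because RECIP * Q exceeds 2**SHIFT by only 447, so the rounding error stays below one unit for all 16-bit inputs; proving such a bound is exactly the kind of detail a standard could specify instead of leaving each implementer to rediscover it.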
]]></description><pubDate>Mon, 24 Nov 2025 14:08:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46034275</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=46034275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46034275</guid></item><item><title><![CDATA[New comment by dhx in "WeatherNext 2: Our most advanced weather forecasting model"]]></title><description><![CDATA[
<p>Take for example Chongqing, China, which is one of the world's cloudiest and most overcast cities.[1] This is easily confirmed by the lack of cloudless satellite imagery.[2] You could get the forecast correct most of the time just by assuming it will be cloudy.<p>What is more interesting for meteorological forecasting are the time-sensitive details, such as:<p>1. We know severe storms will impact city X at approximately Ypm tomorrow. Will they include large hailstones? A severe and destructive downdraft or tornado? Along what path will the most damage occur, and how much notice can we provide those in the path, even if it's just 30min before the storm arrives?<p>2. A large wildfire breaks out near city X and is starting to form its own weather patterns.[3] What are the possible scenarios for fire tornadoes, lightning, etc. to form, and when/where? Is the wind direction change more likely to happen at Ypm or Y+2pm?<p>I'm skeptical that AI models would excel in these areas because of the time sensitivity of the input data, as well as the general lack of accurate input data (which impacts human analysis too).<p>Maybe AI models would be better than humans at making longer term climate predictions such as "If [particular type of ENSO/IOD/etc event] is occurring, the number of cloudy days in [city] is expected to be [quantity]/month in [month] versus [quantity2]/month if the event was not occurring."
It's not that humans would be unable to arrive at these types of results -- just that it would be tedious and resource-intensive to do so.<p>[1] <a href="https://en.wikipedia.org/wiki/List_of_cities_by_sunshine_duration" rel="nofollow">https://en.wikipedia.org/wiki/List_of_cities_by_sunshine_dur...</a><p>[2] <a href="https://imagehunter.apollomapping.com/search/90e4893eeeaa48a4ae1f2ec287215d3c" rel="nofollow">https://imagehunter.apollomapping.com/search/90e4893eeeaa48a...</a><p>[3] <a href="https://en.wikipedia.org/wiki/Cumulonimbus_flammagenitus" rel="nofollow">https://en.wikipedia.org/wiki/Cumulonimbus_flammagenitus</a></p>
]]></description><pubDate>Tue, 18 Nov 2025 04:35:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45961442</link><dc:creator>dhx</dc:creator><comments>https://news.ycombinator.com/item?id=45961442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45961442</guid></item></channel></rss>