<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: yaantc</title><link>https://news.ycombinator.com/user?id=yaantc</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 05:38:09 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=yaantc" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by yaantc in "Outsourcing thinking"]]></title><description><![CDATA[
<p>Profession, as a sibling said, is available here: <a href="https://www.inf.ufpr.br/renato/profession.html" rel="nofollow">https://www.inf.ufpr.br/renato/profession.html</a><p>The Wikipedia entry also has a link to the text, but the above is nicer IMHO: just the raw text. From a previous HN discussion some weeks ago!</p>
]]></description><pubDate>Sun, 01 Feb 2026 08:11:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46844451</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=46844451</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46844451</guid></item><item><title><![CDATA[New comment by yaantc in "Vsora Jotunn-8 5nm European inference chip"]]></title><description><![CDATA[
<p>I'm sorry, I won't share many details; I don't think much is public on the Vsora architecture, and I don't want to breach any NDA...<p>From their web page, Euclyd is a "many small cores" accelerator. Building good compilation toolchains for these, to get efficient results, is a hard problem; see the many comments on compilers for AI in this thread.<p>Vsora's approach is much more macroscopic, and differentiated. By this I mean I don't know anything quite like it. No sea of small cores, but several beefier units. They're programmable, but don't look like a CPU: the HW/SW interface is at a higher level. A very hand-wavy analogy with storage would be block devices vs object storage, maybe. I'm sure more details will surface when real HW arrives.</p>
]]></description><pubDate>Fri, 28 Nov 2025 15:17:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46079372</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=46079372</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46079372</guid></item><item><title><![CDATA[New comment by yaantc in "Vsora Jotunn-8 5nm European inference chip"]]></title><description><![CDATA[
<p>Very simplified: AI workloads need compute and communications; compute dominates inference, while communications dominate training.<p>Most start-ups innovate on the compute side, whereas the technology needed for state-of-the-art communications is not common, and very low-level: plenty of analog concerns. The domain is dominated by NVidia and Broadcom today.<p>This is why digital start-ups tend to focus on inference. They innovate on the pure digital part, which is compute, and tend to use off-the-shelf IPs for communications, so that part is not a differentiator and likely lags the leaders.<p>But in most cases, coupling a computation engine marketed for inference with state-of-the-art communications would (in theory) open the way to training too. It's just that doing both together is a very high barrier. It's more practical to start with compute, and if successful there, use that success to improve the comms part in a second stage. All the more because everyone expects inference to be the biggest market too. So AI start-ups focus on inference first.</p>
]]></description><pubDate>Fri, 28 Nov 2025 09:51:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46077207</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=46077207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46077207</guid></item><item><title><![CDATA[New comment by yaantc in "Widespread power outage in Spain and Portugal"]]></title><description><![CDATA[
<p>From Le Monde's live feed, RTE (the French electricity network operator) declared the issue unrelated to this fire.<p>"The French operator also stresses that this outage is not due to a fire in the south of France, between Narbonne and Perpignan, contrary to reports in circulation." (translated from French)</p>
]]></description><pubDate>Mon, 28 Apr 2025 14:35:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43821973</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=43821973</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43821973</guid></item><item><title><![CDATA[New comment by yaantc in "Compiling C++ with the Clang API"]]></title><description><![CDATA[
<p>castxml (<a href="https://github.com/CastXML/CastXML" rel="nofollow">https://github.com/CastXML/CastXML</a>) may be what you want. It uses the Clang front-end to output an XML representation of a C or C++ parse tree. It is then possible to turn this into what you want. I've used it, and seen it used, to generate endianness-conversion code for structures from headers, or for RPC code generation, for example.<p>It can be used from Python through pygccxml (<a href="https://github.com/CastXML/pygccxml" rel="nofollow">https://github.com/CastXML/pygccxml</a>). The name comes from a previous incarnation, gccxml, based on the GCC front-end.<p>Both castxml and pygccxml are packaged in Debian and Ubuntu.</p>
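<p>To make the second step concrete, here is a minimal, self-contained sketch of walking such an XML tree with Python's standard library. The XML snippet is a hand-written, simplified stand-in for castxml output (real output has many more elements and attributes, so the exact names here are approximations):

```python
import xml.etree.ElementTree as ET

# Hand-written, simplified stand-in for castxml output of:
#   struct Point { int x; int y; };
# Real castxml output carries many more attributes (file, line, sizes...).
CASTXML_LIKE = """
<CastXML version="0.6.11">
  <Struct id="_1" name="Point" members="_2 _3"/>
  <Field id="_2" name="x" type="_4" context="_1"/>
  <Field id="_3" name="y" type="_4" context="_1"/>
  <FundamentalType id="_4" name="int" size="32"/>
</CastXML>
"""

def struct_fields(xml_text):
    """Map each struct name to its list of (field name, type name) pairs."""
    root = ET.fromstring(xml_text)
    # Every top-level element has an id; index them so type refs resolve.
    names_by_id = {e.get("id"): e.get("name") for e in root}
    out = {}
    for struct in root.iter("Struct"):
        fields = [
            (f.get("name"), names_by_id[f.get("type")])
            for f in root.iter("Field")
            if f.get("context") == struct.get("id")
        ]
        out[struct.get("name")] = fields
    return out

print(struct_fields(CASTXML_LIKE))
# {'Point': [('x', 'int'), ('y', 'int')]}
```

From a table like that, emitting byte-swapping or RPC marshalling code per field is straightforward string generation.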
]]></description><pubDate>Mon, 10 Mar 2025 13:36:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=43320447</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=43320447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43320447</guid></item><item><title><![CDATA[New comment by yaantc in "Zen 5's AVX-512 Frequency Behavior"]]></title><description><![CDATA[
<p>On the L/S unit impact: data movement is expensive, computation is cheap (relatively).<p>In "Computer Architecture, A Quantitative Approach" there are numbers for the now old TSMC 45 nm process: a 32-bit FP multiplication takes 3.7 pJ, and a 32-bit SRAM read from an 8 kB SRAM takes 5 pJ. This is a basic SRAM, not a cache with its tag comparison and LRU logic (more expensive).<p>Then I have some 2015 numbers for the Intel 22 nm process, old too. A 64-bit FP multiplication takes 6.4 pJ, a 64-bit read/write from a small 8 kB SRAM 4.2 pJ, and from a larger 256 kB SRAM 16.7 pJ. Basic SRAM here too, not a more expensive cache.<p>The cost of a multiplication is quadratic in operand width, while access cost should be closer to linear, so the computation cost in the second example is much heavier (compare the mantissa sizes; that's what is multiplied).<p>The trend gets even worse with more advanced processes. Data movement is usually what matters most now, except for workloads with very high arithmetic intensity, where computation will dominate (in practice: large enough matrix multiplications).</p>
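<p>Putting the 22 nm numbers side by side (the three-accesses-per-multiply model below is my simplification for illustration, not from the book):

```python
# Energy per operation quoted above (Intel 22 nm, ~2015 numbers).
FP64_MUL_PJ = 6.4      # 64-bit FP multiply
SRAM_8KB_PJ = 4.2      # 64-bit access, small 8 kB SRAM
SRAM_256KB_PJ = 16.7   # 64-bit access, larger 256 kB SRAM

# Crude model: feeding one multiply takes two operand reads plus one
# result write. Against the larger SRAM, movement dwarfs the arithmetic:
movement = 3 * SRAM_256KB_PJ
print(f"movement/compute from 256 kB SRAM: {movement / FP64_MUL_PJ:.1f}x")

# Multiplier cost grows roughly with the square of the mantissa width,
# which is why fp64 multiplies cost so much more than fp32 ones:
ratio = (53 / 24) ** 2  # fp64 vs fp32 mantissa bits
print(f"expected fp64/fp32 multiplier cost: ~{ratio:.1f}x")
```
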
]]></description><pubDate>Sat, 01 Mar 2025 08:56:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43217439</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=43217439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43217439</guid></item><item><title><![CDATA[New comment by yaantc in "Emacs 30.1 Released"]]></title><description><![CDATA[
<p>What's new: <a href="https://www.masteringemacs.org/article/whats-new-in-emacs-301" rel="nofollow">https://www.masteringemacs.org/article/whats-new-in-emacs-30...</a><p>Or in antinews format: <a href="https://www.gnu.org/software/emacs/manual/html_node/emacs/Antinews.html" rel="nofollow">https://www.gnu.org/software/emacs/manual/html_node/emacs/An...</a></p>
]]></description><pubDate>Mon, 24 Feb 2025 09:28:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43157489</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=43157489</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43157489</guid></item><item><title><![CDATA[New comment by yaantc in "5G networks meet consumer needs as mobile data growth slows"]]></title><description><![CDATA[
<p>It's a good but hard question... because cellular is huge.<p>In a professional context, nobody knows it all in detail. There are specializations: core network vs RAN; within the RAN, protocol stack vs PHY; within the PHY, algorithms vs implementation; etc.<p>You can see all the cellular specs (they're public) from here:
<a href="https://www.3gpp.org/specifications-technologies/specifications-by-series" rel="nofollow">https://www.3gpp.org/specifications-technologies/specificati...</a><p>5G (or NR) is the series 38 at the bottom. Direct access:
<a href="https://www.3gpp.org/ftp/Specs/archive/38_series" rel="nofollow">https://www.3gpp.org/ftp/Specs/archive/38_series</a><p>It's a lot ;) But a readable introduction is the 38.300 spec, and the latest edition for the first 5G release (R15, or "f") is this one:
<a href="https://www.3gpp.org/ftp/Specs/archive/38_series/38.300/38300-fj0.zip" rel="nofollow">https://www.3gpp.org/ftp/Specs/archive/38_series/38.300/3830...</a><p>It's about as readable as it can get. The PHY part is pretty awful by comparison. If you have a PHY interest, you'll need to look for technical books as the specs are quite hermetic (but it's not my field either).</p>
]]></description><pubDate>Wed, 12 Feb 2025 22:24:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=43030415</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=43030415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43030415</guid></item><item><title><![CDATA[New comment by yaantc in "5G networks meet consumer needs as mobile data growth slows"]]></title><description><![CDATA[
<p>LTE total latency is 20-50 ms, and you compare this to the marketing "air link only" 5G latency of 1 ms. It's apples and oranges ;)<p>FYI, the air-link latency for LTE was given as 4-5 ms. FDD, as it's the best case here. The 5G improvement to 1 ms would require features (URLLC) that nobody implemented and nobody will: too expensive for markets that are too niche.<p>The latency in a cellular network now comes mostly from the core network, not the radio link anymore. Even in 4G.<p>(telecom engineer, having worked on both 4G and 5G and recently out of the field)</p>
]]></description><pubDate>Wed, 12 Feb 2025 19:10:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43028613</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=43028613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43028613</guid></item><item><title><![CDATA[New comment by yaantc in "My Resignation from Emacs Development"]]></title><description><![CDATA[
<p>Emacs is in the process of moving from legacy language modes, which use regexps and elisp for syntax analysis, to new modes using tree-sitter.<p>In this context, what should a name like "c-mode" mean? Options:
1) it should stick to the old mode, cc-mode here; to use the new mode, explicitly use c-ts-mode;
2) it should move to the new tree-sitter mode, c-ts-mode; to use the old mode, explicitly use cc-mode;
3) it should mean the new preferred Emacs mode, with a way for the user to take back control if they have a different preference. This preferred mode will change at some point from legacy to tree-sitter.<p>The change is (3), with a move to tree-sitter in Emacs 30 (to be released soon), IIUC. It makes sense to me. Saying that anyone owns a name as generic as "c-mode" in an open-source project just because they were first and have a long history as a contributor (thanks, by the way!) seems excessive. A change of default is normal in an evolving project, and as long as it's clearly documented with a way to override (which is the case, IIUC), it's fine to me. One can dislike the change, but it's impossible to please everyone anyway. Emacs users are used to adjusting their configuration to their preferences.<p>I understand it can be an emotional situation for the maintainer of the legacy mode. But I don't see the need to call foul play.</p>
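<p>For concreteness, the override in option (3) is exposed in recent Emacs through `major-mode-remap-alist`; a minimal config sketch (assuming Emacs 29+ with the C tree-sitter grammar installed):

```elisp
;; Prefer the tree-sitter C mode whenever a buffer would use c-mode:
(add-to-list 'major-mode-remap-alist '(c-mode . c-ts-mode))

;; Or, once the default flips to tree-sitter, pin the legacy mode instead:
;; (add-to-list 'major-mode-remap-alist '(c-ts-mode . c-mode))
```
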
]]></description><pubDate>Wed, 20 Nov 2024 17:25:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42196198</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=42196198</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42196198</guid></item><item><title><![CDATA[New comment by yaantc in "Tbsp – treesitter-based source processing language"]]></title><description><![CDATA[
<p>Hi, in case you're not already aware of the name clash: there's already an `rr` in the programming world, "record and replay": <a href="https://rr-project.org/" rel="nofollow">https://rr-project.org/</a>.<p>Very different, but a very fine tool too.</p>
]]></description><pubDate>Mon, 02 Sep 2024 08:14:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=41423551</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=41423551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41423551</guid></item><item><title><![CDATA[New comment by yaantc in "Cancer Incidence by Country"]]></title><description><![CDATA[
<p>See just above the map: "This has been age-standardized, assuming a constant age structure of the population for comparisons between countries and over time." This is what you suggest, IIUC?</p>
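<p>For readers unfamiliar with the term, a minimal sketch of what age-standardization computes; all weights and rates below are invented for illustration:

```python
# Shared "standard" age structure used for every country being compared
# (fractions of the population, summing to 1). Numbers are invented.
STANDARD_WEIGHTS = {"0-39": 0.55, "40-64": 0.30, "65+": 0.15}

def age_standardized_rate(rates_by_age):
    """Weight age-specific incidence rates (per 100k/yr) by the standard structure."""
    return sum(STANDARD_WEIGHTS[age] * r for age, r in rates_by_age.items())

# Two countries with identical age-specific risk get identical standardized
# rates, even if their real age pyramids (and thus crude rates) differ widely.
rates = {"0-39": 20.0, "40-64": 250.0, "65+": 1200.0}
print(age_standardized_rate(rates))  # ~266.0
```
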
]]></description><pubDate>Sat, 10 Aug 2024 11:59:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=41208870</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=41208870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41208870</guid></item><item><title><![CDATA[New comment by yaantc in "Ask HN: What's Prolog like in 2024?"]]></title><description><![CDATA[
<p>Take the infinite loop as just one example of an issue with depth-first search and backtracking. More generally, I'd say the issue is that the overall performance of a Prolog program can depend heavily on the ordering of its rules.<p>As an anecdote, a long time ago on a toy project, switching the order of two rules cut the runtime to find all solutions from ~15 minutes to around a second (it was a long time ago, memory fuzzy...). The difference was going down a "wrong" path and wasting a lot of time evaluating failing possibilities, vs. taking the right path and getting to the solutions very quickly.<p>So in practice, even though Prolog is declarative, to get good results you need to understand how the search is done, and organize the rules so that the search proceeds in the most efficient way. The runtime search is a leaky abstraction, in a way ;)<p>This issue isn't limited to Prolog; many solvers can be helped by steering the search the "right" way. A declarative language for constraint problems like MiniZinc provides ways to pass the solver hints on how best to search, for example.<p>Also, most modern Prologs support tabling, which departs from strict DFS+backtracking and can help in some cases. But here too, getting the best results may require understanding how the engine will search, including tabling.</p>
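<p>A classic toy illustration of the clause-ordering point (not from the project mentioned above; `edge/2` facts assumed):

```prolog
edge(a, b).
edge(b, c).

% Loops: the first clause recurses on an unbound goal before touching edge/2,
% so plain DFS+backtracking never reaches the base case.
path_bad(X, Y) :- path_bad(X, Z), edge(Z, Y).
path_bad(X, Y) :- edge(X, Y).

% Terminates (on acyclic graphs): base case first, and each recursive step
% consumes an edge before recursing.
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).
```

This also illustrates the tabling point: in a Prolog with tabling (SWI, XSB), adding `:- table path_bad/2.` makes even the first version terminate.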
]]></description><pubDate>Thu, 18 Jul 2024 15:21:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=40996412</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=40996412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40996412</guid></item><item><title><![CDATA[New comment by yaantc in "Xpra: Persistent Remote Applications for X11"]]></title><description><![CDATA[
<p>You may want to try x2go. It uses the older NX protocol version 3, while NoMachine is at version 4. It's good enough for my use case, and it supports remote applications just fine: this is how I use it.</p>
]]></description><pubDate>Mon, 08 Jul 2024 12:39:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=40904887</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=40904887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40904887</guid></item><item><title><![CDATA[New comment by yaantc in ""Out of Band" network management is not trivial"]]></title><description><![CDATA[
<p>> I bet it had diesel generators when it was in service with AT&T to boot.<p>20 to 25 years ago I visited a telecom switch center in Paris, the one under the Tuileries garden next to the Louvre. It had a huge, empty diesel-generator room. The generators had all been replaced by a small turbine (not sure that's the right English term), the same kind used to power a helicopter. It sat in a relatively small soundproof box, with a special vent for the exhaust, kind of lost on the side of a huge underground room.<p>As the guy in charge explained to us, it was much more compact and convenient. The big risk was in getting it started; that was the tricky part. Once started, it was extremely reliable.</p>
]]></description><pubDate>Sun, 07 Jul 2024 07:55:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=40895941</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=40895941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40895941</guid></item><item><title><![CDATA[New comment by yaantc in "Europe's solar power surge hits prices, exposing storage needs"]]></title><description><![CDATA[
<p>According to <a href="https://www.withouthotair.com/" rel="nofollow">https://www.withouthotair.com/</a>, no.</p>
]]></description><pubDate>Fri, 21 Jun 2024 12:21:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=40748838</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=40748838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40748838</guid></item><item><title><![CDATA[New comment by yaantc in "Asus' first Snapdragon X Elite laptop (mostly) blows away the MacBook Air M3"]]></title><description><![CDATA[
<p>>  I wonder how (if?) this thing runs Linux.<p>See there: <a href="https://www.qualcomm.com/developer/blog/2024/05/upstreaming-linux-kernel-support-for-the-snapdragon-x-elite" rel="nofollow">https://www.qualcomm.com/developer/blog/2024/05/upstreaming-...</a><p>Should be good after all this has landed in upstream Linux and all the distros (which may take a bit of time, as usual).</p>
]]></description><pubDate>Tue, 18 Jun 2024 12:36:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=40717083</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=40717083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40717083</guid></item><item><title><![CDATA[New comment by yaantc in "What We Learned from a Year of Building with LLMs"]]></title><description><![CDATA[
<p>> [...] the standard way to get structured output seems to be to retry the query until the stochastic language model produces expected output.<p>No, that would be very inefficient. At each token-generation step, the LLM provides a likelihood for every token in the vocabulary, based on the past context. The structured output is defined by a grammar, which determines the legal tokens for the next step. You can then take the intersection of the two (ignore any token not allowed by the grammar), and select among the authorized tokens based on the LLM's likelihoods in the usual way. So it's a direct constraint, and it's efficient.</p>
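<p>A minimal, self-contained sketch of that intersection step, with a made-up seven-token vocabulary, seeded random scores standing in for a real LLM, and a fixed table standing in for a real grammar:

```python
import random

VOCAB = ["{", "}", '"name"', ":", '"x"', '"y"', "<eos>"]

def fake_llm_logits(context):
    """Stand-in for a real LLM: arbitrary but reproducible per-token scores."""
    random.seed(len(context))
    return [random.uniform(-2.0, 2.0) for _ in VOCAB]

def grammar_allowed(context):
    """Toy 'grammar' for a one-key JSON-ish object: legal next tokens per step."""
    table = [["{"], ['"name"'], [":"], ['"x"', '"y"'], ["}"], ["<eos>"]]
    step = len(context)
    return set(table[step]) if step < len(table) else {"<eos>"}

def constrained_decode():
    out = []
    while True:
        logits = fake_llm_logits(out)
        allowed = grammar_allowed(out)
        # Intersection: drop every token the grammar forbids, keep the LLM's
        # scores for the rest, then pick among the survivors (greedy here;
        # softmax sampling over the surviving scores works the same way).
        tok, _ = max(
            (p for p in zip(VOCAB, logits) if p[0] in allowed),
            key=lambda p: p[1],
        )
        if tok == "<eos>":
            return "".join(out)
        out.append(tok)

print(constrained_decode())  # a grammar-legal object, e.g. {"name":"x"}
```

Note the model is still consulted where the grammar leaves a choice (the value position here); the grammar only removes illegal continuations.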
]]></description><pubDate>Wed, 29 May 2024 07:26:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=40509395</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=40509395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40509395</guid></item><item><title><![CDATA[New comment by yaantc in "Show HN: I built a non-linear UI for ChatGPT"]]></title><description><![CDATA[
<p>For a text-based version of the "tree of chats" idea, using Emacs, Org mode and gptel, see `gptel-org-branching-context` in:
<a href="https://github.com/karthink/gptel?tab=readme-ov-file#extra-org-mode-conveniences">https://github.com/karthink/gptel?tab=readme-ov-file#extra-o...</a></p>
]]></description><pubDate>Wed, 08 May 2024 17:27:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=40300689</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=40300689</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40300689</guid></item><item><title><![CDATA[The tiny ultrabright laser that can melt steel]]></title><description><![CDATA[
<p>Article URL: <a href="https://spectrum.ieee.org/pcsel">https://spectrum.ieee.org/pcsel</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40032963">https://news.ycombinator.com/item?id=40032963</a></p>
<p>Points: 7</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 14 Apr 2024 17:58:07 +0000</pubDate><link>https://spectrum.ieee.org/pcsel</link><dc:creator>yaantc</dc:creator><comments>https://news.ycombinator.com/item?id=40032963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40032963</guid></item></channel></rss>