<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mattalex</title><link>https://news.ycombinator.com/user?id=mattalex</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 21:45:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mattalex" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mattalex in "Is Germany's gold safe in New York ?"]]></title><description><![CDATA[
<p>Effectively none. The US runs a huge trade deficit with Germany and Europe, so there is practically never a case where the US receives gold from Germany: any such flow is always more than offset by the deficit.<p>The equivalent for the US would be the consumption goods already flowing into the US. I.e. the US buys goods but doesn't sell enough to Germany, so gold covers the difference and balances the total exchange.<p>That's also why it was trivial for France to repatriate its gold compared to Germany: Germany holds about 10x the amount of gold in the US that France did (France held ~120 tons, Germany roughly 1200 tons; France earned its gold through different trade).<p>That's also why repatriating the German reserves is such a complex undertaking: France took almost a year to repatriate its gold. For Germany, the effort would span a decade (though with recent changes there may be a little more urgency).</p>
]]></description><pubDate>Mon, 06 Apr 2026 19:45:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47665954</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=47665954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47665954</guid></item><item><title><![CDATA[New comment by mattalex in "Is Germany's gold safe in New York ?"]]></title><description><![CDATA[
<p>Germany already repatriated about half of its gold reserves between 2013 and 2017, moving them from Paris and New York to Frankfurt.<p>There has been a recent (as in "18th of March" recent) petition to the Bundestag to repatriate the gold.<p>The reason not to repatriate the remaining gold back then was that Germany has substantial trade with the US, which is why Germany held gold in New York to begin with: it's the easiest way to resolve USD-euro currency exchange at the central-bank level (this is also why Germany got rid of the Paris gold reserves: with the euro you don't need currency exchange anymore).<p>Also, as you mentioned, the idea of "officially" repatriating gold under the current administration is quite dicey. It is very possible that the correct way of resolving this is to simply stop buying gold in New York and let the currency-exchange flux handle the slow unwinding of the reserves without explicit repatriation.</p>
]]></description><pubDate>Mon, 06 Apr 2026 17:31:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47664101</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=47664101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47664101</guid></item><item><title><![CDATA[New comment by mattalex in "Is Germany's gold safe in New York ?"]]></title><description><![CDATA[
<p>Whether the US is capable of hiding its malfeasance should not be the indicator of whether it is safe to deal with them. If your argument for the US being a good partner in _anything_ is "well, we did corrupt things in the past, but people didn't use to care about it", then the US is still not a good partner.<p>It's not as if the US has never, e.g., openly threatened NATO allies with war: there is quite literally a standing law that allows the US president to invade the Netherlands if any US military personnel are ever detained by the International Criminal Court.
This law has been on the books for over 20 years, with the publicly announced intention of preventing US personnel from being prosecuted for the atrocities committed in e.g. Iraq. The bill was supported by both Democrats and Republicans.<p>The reality is that the US' stance towards the rest of the world has not changed with the recent administrations (nor would I expect it to: Trump does not happen in a vacuum). What did change was the willingness of the rest of the world to act on the US' actions.</p>
]]></description><pubDate>Mon, 06 Apr 2026 14:22:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47661302</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=47661302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47661302</guid></item><item><title><![CDATA[New comment by mattalex in "BitNet: Inference framework for 1-bit LLMs"]]></title><description><![CDATA[
<p>There were plenty of models the size of GPT-3 in industry.<p>The core insight behind ChatGPT was not scaling (that was already widely accepted): the insight was that instead of finetuning for each individual task, you can finetune once for the meta-task of instruction following, which brings the problem specification directly into the data stream.</p>
]]></description><pubDate>Wed, 11 Mar 2026 18:29:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47339323</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=47339323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47339323</guid></item><item><title><![CDATA[New comment by mattalex in "OpenAI agrees with Dept. of War to deploy models in their classified network"]]></title><description><![CDATA[
<p>Assuming this is real: why do you think Anthropic was put on what is essentially an "enemy of the state" list and OpenAI wasn't?<p>The two things Anthropic refused to do are mass surveillance and autonomous weapons, so why do _you_ think OpenAI supposedly refused the same things and still did not get placed on the exact same list?<p>It's fine to say "I'm not going to resign. I didn't even sign that letter", but thinking that OpenAI can get away with not developing autonomous weapons or mass surveillance is naive at the very best.</p>
]]></description><pubDate>Sat, 28 Feb 2026 08:02:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47192010</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=47192010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47192010</guid></item><item><title><![CDATA[New comment by mattalex in "Microsoft Favors Anthropic over OpenAI for Visual Studio Code"]]></title><description><![CDATA[
<p>It might be that they pay less for Anthropic, depending on how many tokens each model generates: total cost is token price times number of tokens. I haven't checked GPT-5, but it is not impossible that, once you account for reasoning tokens, the two are price-wise very comparable.</p>
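<p>The arithmetic can be made concrete with a minimal sketch. All prices and token counts below are invented for illustration, not real GPT-5 or Claude figures:</p>

```python
def completion_cost(price_per_mtok_usd, visible_tokens, reasoning_tokens=0):
    """Effective cost of one completion: price per million tokens times
    total tokens generated, including any hidden reasoning tokens."""
    return price_per_mtok_usd * (visible_tokens + reasoning_tokens) / 1_000_000

# Hypothetical numbers: a model that is cheaper per token but "thinks"
# at length can end up more expensive per answer than a pricier model
# that answers directly.
cheap_thinker = completion_cost(2.0, 500, reasoning_tokens=4000)
direct_model  = completion_cost(5.0, 500)
```

<p>With these made-up numbers the nominally cheaper model costs more per answer, which is the point about reasoning tokens above.</p>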
]]></description><pubDate>Tue, 16 Sep 2025 18:05:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45265657</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=45265657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45265657</guid></item><item><title><![CDATA[New comment by mattalex in "Into CPS, Never to Return"]]></title><description><![CDATA[
<p>This is essentially the principle behind algebraic effects (which, in practice, do get implemented as delimited continuations):<p>When you have an impure effect (e.g. checking a database, generating a random number, writing to a file, making a nondeterministic choice, ...), instead of directly implementing the impure action, you use a symbol, e.g. "read", "generate number", ...<p>When executing the function, you also provide a context of "interpreters" that map each symbol to whatever action you want.
This is very useful, since the actual business logic can be analyzed in an isolated way.
For instance, if you want to test your application you can use a dummy interpreter for "check database" that returns whatever values you need for testing, but without needing to go to an actual SQL database.
It also allows you to switch backends rather easily: If your database uses the symbols "read", "write", "delete" then you just need to implement those calls in your backend. If you want to formally prove properties of your code, you can also do that by noting the properties of your symbols, e.g. `∀ key. read (delete key) = None`.<p>Since you always capture the symbol using an interpreter, you can also do fancy things like dynamically overriding the interpreter:
To implement a seeded random number generator, you can have an interpreter that always overrides itself using the new seed. The interpreter would look something like this:<p><pre><code>Pseudorandom_interpreter(seed)(argument, continuation):
  rnd, new_seed <- generate_pseudorandom(seed, argument)
  with Pseudorandom_interpreter(new_seed):
       continuation(rnd)</code></pre><p>You can clearly see the continuation-passing style and the power of an interpreter overriding itself.
In fact, this is a nice way of handling state in a pure way: just put something other than new_seed into the new interpreter.<p>If you want to debug a state machine, you can use an interpreter like this to trace the state:<p><pre><code>replace_state_interpreter(state)(new_state, continuation):
  with replace_state_interpreter(new_state ++ state):
       continuation(head state)</code></pre>
This way the "state" always holds the entire history of state changes, which can be very nice for debugging.
During deployment, you can then use a different interpreter:<p><pre><code>replace_state_interpreter(state)(new_state, continuation):
  with replace_state_interpreter(new_state):
       continuation(state)</code></pre><p>which just holds the current state.</p>
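<p>The pseudocode above can be approximated in plain Python. This is only a sketch: it models interpreters as dynamically scoped handler functions rather than true delimited continuations, and `perform`, `handle`, and the toy LCG inside `pseudorandom` are names and formulas invented for illustration:</p>

```python
import contextvars

# Map from effect symbol to its current interpreter, dynamically scoped.
_handlers = contextvars.ContextVar("handlers", default={})

def perform(symbol):
    """Trigger an effect: look up and run the current interpreter."""
    return _handlers.get()[symbol]()

class handle:
    """Install an interpreter for `symbol` for the duration of a with-block."""
    def __init__(self, symbol, interpreter):
        self.symbol, self.interpreter = symbol, interpreter
    def __enter__(self):
        self.token = _handlers.set({**_handlers.get(), self.symbol: self.interpreter})
    def __exit__(self, *exc):
        _handlers.reset(self.token)

def pseudorandom(seed):
    """Self-overriding interpreter: each invocation re-installs itself
    with a new seed (the LCG is a stand-in for generate_pseudorandom)."""
    def interpreter():
        rnd = (seed * 1103515245 + 12345) % 2**31   # toy LCG step
        _handlers.set({**_handlers.get(), "rand": pseudorandom(seed + 1)})
        return rnd % 100
    return interpreter

with handle("rand", pseudorandom(42)):
    a, b = perform("rand"), perform("rand")  # deterministic, but effectful-looking
```

<p>Swapping in a different interpreter inside the `with` block, e.g. a fixed-value stub, is exactly the "dummy interpreter for testing" trick described above.</p>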
]]></description><pubDate>Thu, 26 Dec 2024 11:33:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=42514613</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=42514613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42514613</guid></item><item><title><![CDATA[New comment by mattalex in "A DSL for peephole transformation rules of integer operations in the PyPy JIT"]]></title><description><![CDATA[
<p>Once you have strong normalization, you can just check local confluence and use Newman's lemma to get full confluence. That should be pretty easy: build all n^2 critical pairs and run them to termination (which you have proven before). If those pairs are joinable, so is the full rewriting scheme.</p>
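<p>As a toy illustration of the recipe, not the full critical-pair construction: the sketch below only checks that all one-step divergences from given terms rejoin, and it assumes termination has been shown separately (the helper names are invented):</p>

```python
def rewrite_once(term, rules):
    """All terms reachable from `term` in exactly one rewrite step."""
    out = []
    for lhs, rhs in rules:
        start = 0
        while (i := term.find(lhs, start)) != -1:
            out.append(term[:i] + rhs + term[i + len(lhs):])
            start = i + 1
    return out

def normal_form(term, rules):
    """Reduce to a normal form; valid only because termination is proven."""
    while (succs := rewrite_once(term, rules)):
        term = succs[0]
    return term

def joins_locally(term, rules):
    """Do all one-step divergences from `term` reach the same normal form?"""
    return len({normal_form(t, rules) for t in rewrite_once(term, rules)}) <= 1

# An idempotence rule: both ways of rewriting "aaa" rejoin at "a"...
ok = joins_locally("aaa", [("aa", "a")])
# ...while two overlapping rules with different right-hand sides diverge.
bad = joins_locally("ab", [("ab", "a"), ("ab", "b")])
```

<p>Running the same check over every critical pair of the rule set is what lets Newman's lemma lift this to full confluence.</p>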
]]></description><pubDate>Thu, 24 Oct 2024 11:36:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=41934523</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=41934523</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41934523</guid></item><item><title><![CDATA[New comment by mattalex in "AI engineers claim new algorithm reduces AI power consumption by 95%"]]></title><description><![CDATA[
<p>That entirely depends on which AMD device you look at: gaming GPUs are not well supported, but their Instinct line of accelerators works just as well as CUDA. Keep in mind that, in contrast to Nvidia, AMD uses different architectures for compute and gaming (though they are changing that in the next generation).</p>
]]></description><pubDate>Sun, 20 Oct 2024 11:31:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=41894656</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=41894656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41894656</guid></item><item><title><![CDATA[New comment by mattalex in "Sony shutting down Concord, refunds after 2 week launch. 8 year dev, 25k sales"]]></title><description><![CDATA[
<p>To expand on that: there's also the issue that these games have to be (somewhat) competitive multiplayer games: multiplayer because otherwise there's no way to create enough content, and competitive because otherwise there's less reason to play the game for long periods of time.<p>If you've ever played a dead or dying competitive game as a newcomer, you will know the problem this creates: since the people who stay around are either new or very dedicated players, the skill gap becomes gigantic, which turns off most new players.<p>If your game wins the live-service race, you draw other players in. If your game dies, the very same structure that keeps players around will prevent new players from joining.</p>
]]></description><pubDate>Wed, 04 Sep 2024 06:18:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=41442448</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=41442448</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41442448</guid></item><item><title><![CDATA[New comment by mattalex in "Iron as an inexpensive storage medium for hydrogen"]]></title><description><![CDATA[
<p>There are alternatives to iron with higher efficiency and lower prices. For instance, <a href="https://hydrogenious.net/" rel="nofollow">https://hydrogenious.net/</a> does exactly that but with benzene-like carrier molecules. The advantage is that you can reuse existing transport infrastructure and get higher transport efficiency: the square-cube law does favor larger tanks, but the forces on the chamber walls grow as well, so the walls have to increase in thickness. Hydrogen tanks are also very expensive, since they have to be manufactured to tight tolerances (and they need to be replaced rather often, because hydrogen embrittlement weakens the chamber walls).</p>
]]></description><pubDate>Sun, 01 Sep 2024 06:21:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=41414747</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=41414747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41414747</guid></item><item><title><![CDATA[New comment by mattalex in "Encyclopedia of Optimization"]]></title><description><![CDATA[
<p>The paper I have mentioned can be found here: <a href="https://arxiv.org/pdf/2206.09787" rel="nofollow">https://arxiv.org/pdf/2206.09787</a><p>There are so many things that have only been invented in the last couple of years, like RINS, MCF cuts, conflict analysis, symmetry detection, dynamic search, ... (see e.g. Tobias Achterberg's line of work).<p>On the other hand, hardware improvements were not as relevant for LP and MILP solvers as one would expect: for instance, as of now there is still no solver that really uses GPU compute (though people are working on that). The reason is that parallelizing simplex solvers is quite tough, since the algorithm is inherently sequential (it's a descent over simplex vertices) and the actual linear algebra is very sparse (if not entirely matrix-free). You can do some things like lookahead for better pricing, or row/column generation approaches, but you have to be very careful (interior-point methods are arguably nicer to parallelize, but in many cases pay a performance penalty compared to simplex).<p>MILP/MINLP solvers look much nicer to parallelize at first glance, since you can parallelize across branches in the branch-and-bound, but in practice that is also pretty hard: modern solvers are so efficient that you can easily spend a lot of compute exploring a branch that a different branch quickly proves unnecessary to explore (e.g. SCIP, the fastest open-source MINLP solver, is completely single-threaded and still _somewhat_ competitive). This means that a lot of the algorithmic improvements are hidden inside the parallelization improvements, i.e. a lot of time has been spent on the question "What do we have to do to parallelize the solver without just wasting the additional threads?"</p>
]]></description><pubDate>Sun, 18 Aug 2024 11:15:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=41281640</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=41281640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41281640</guid></item><item><title><![CDATA[New comment by mattalex in "Encyclopedia of Optimization"]]></title><description><![CDATA[
<p>2008 is ancient for optimization!<p>People have tested year-2000 LP and MILP solvers against recent ones while correcting for hardware. Hardware improvements accounted for a ~20x speedup, while LP solvers in general sped up 180x, and MILP solvers a full 1000x ("Progress in mathematical programming solvers from 2001 to 2020").<p>Solvers from 2008 are on an entirely different level of performance: there are many problems unsolvable by them that modern solvers close to zero duality gap in under a second.<p>For MINLPs the difference is even more striking. This doesn't mean those books are useless (they are quite good), but do not expect a solver based on those techniques to even play in the same league as modern solvers.</p>
]]></description><pubDate>Sat, 17 Aug 2024 22:43:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=41278628</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=41278628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41278628</guid></item><item><title><![CDATA[New comment by mattalex in "Well-known paradox of R-squared is still buggin me"]]></title><description><![CDATA[
<p>You can solve L1 regression using linear programming at fantastically large scales. In fact, in many applications you do the opposite: go from squared to absolute error, because the latter fits into an LP.</p>
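<p>The standard reformulation: minimize the sum of residual bounds t_i subject to -t_i <= y_i - X_i beta <= t_i. A sketch using SciPy's `linprog` on synthetic data (the data and the default HiGHS solver are just for illustration):</p>

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic regression data: y = 1 + 2*x + small noise.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=50)

# Variables are [beta (free), t (nonnegative residual bounds)].
n, p = X.shape
c = np.concatenate([np.zeros(p), np.ones(n)])   # minimize sum of t
A_ub = np.block([[ X, -np.eye(n)],              #  X beta - t <= y
                 [-X, -np.eye(n)]])             # -X beta - t <= -y
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
beta = res.x[:p]   # least-absolute-deviations estimate of [1, 2]
```

<p>The same pattern scales to very large instances, since modern LP solvers exploit the sparsity of the constraint matrix.</p>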
]]></description><pubDate>Thu, 04 Jul 2024 16:01:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=40875905</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=40875905</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40875905</guid></item><item><title><![CDATA[New comment by mattalex in "German state moving 30k PCs to LibreOffice"]]></title><description><![CDATA[
<p>The problem was mostly that the only person really backing the project (Christian Ude, SPD) was replaced by his successor (Dieter Reiter, SPD), who just didn't have the drive necessary to keep the project going.<p>The entire design of "LiMux" was doomed from the start: it was a highly customized version of Ubuntu that was only used in Munich (not even throughout the entire state). That made everything ridiculously expensive, since the actual advantages of building on an open-source solution were never realized.
That was compounded by the fact that "open source" and "cost savings" were used interchangeably, when in reality the budget for Windows should have been redirected into development rather than cut.<p>The entire project was half-assed to begin with, which basically meant that Windows and Linux had to coexist, since many crucial tools were never ported to Linux.<p>The "Microsoft killed it" story sounds plausible, but the truth is the much more boring one of incompetent execution.</p>
]]></description><pubDate>Thu, 04 Apr 2024 12:11:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=39929380</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=39929380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39929380</guid></item><item><title><![CDATA[New comment by mattalex in "Intel Brags of $152B in Stock Buybacks. Why Does It Need an $8B Subsidy?"]]></title><description><![CDATA[
<p>The US could have required stock in Intel instead of taking nothing: Intel gets money, US gets influence over Intel. Just like it works everywhere else.</p>
]]></description><pubDate>Thu, 28 Mar 2024 13:57:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=39851530</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=39851530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39851530</guid></item><item><title><![CDATA[New comment by mattalex in "FuryGpu – Custom PCIe FPGA GPU"]]></title><description><![CDATA[
<p>Not in the grand scheme of things: you can get FPGA dev boards for $50 that are already usable for this type of thing (you can go even lower, but those aren't really usable for "CPU-like" operation and are closer to "a whole lot of logic gates in a single chip"). Of course, the "industry grade" solutions pack significantly more of a punch, but they can also be had for <$500.</p>
]]></description><pubDate>Wed, 27 Mar 2024 11:38:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=39837771</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=39837771</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39837771</guid></item><item><title><![CDATA[New comment by mattalex in "Delimiters won’t save you from prompt injection"]]></title><description><![CDATA[
<p>It's actually a lot worse than that: Just redesigning LLMs to have separate input channels for prompts and data doesn't solve the problem either, since this would be impossible to train.<p>Effectively you would need to filter all incoming data into "data" and "prompt" parts, because otherwise the model would learn to also follow instructions put into the "data" path.
However, this split between data and prompt does not exist in natural language. You can even think of sentences that might act as both depending on the context and interpretation you put on them. So getting this sort of split without tainting the data channel is intractable.</p>
]]></description><pubDate>Sun, 14 May 2023 09:45:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=35936723</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=35936723</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35936723</guid></item><item><title><![CDATA[New comment by mattalex in "A broad-spectrum synthetic antibiotic that does not evoke bacterial resistance"]]></title><description><![CDATA[
<p>Probably. But you can make it sufficiently hard: if you force the bacteria to make sufficiently large jumps in mutation to escape whatever antimicrobial you design, odds are you force it either into something incompatible with life, or into tradeoffs that make it easier to fight with new drugs.</p>
]]></description><pubDate>Tue, 21 Feb 2023 23:38:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=34889364</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=34889364</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34889364</guid></item><item><title><![CDATA[New comment by mattalex in "Open Assistant: Conversational AI for Everyone"]]></title><description><![CDATA[
<p>Instruction tuning mostly relies on the quality of the data you put into the model. This makes it different from traditional language-model training: essentially you take one of the existing hugely expensive models (there are lots of them already out there) and tune it specifically on high-quality data.<p>This can be done on a comparatively small scale, since you don't need to train on trillions of words, only on the smaller high-quality dataset (even OpenAI didn't have a lot of that).<p>In fact, if you look at Figure 1 of the original paper, <a href="https://arxiv.org/pdf/2203.02155.pdf" rel="nofollow">https://arxiv.org/pdf/2203.02155.pdf</a>, you can see that even small instruction-tuned models already significantly beat the then-current SOTA.<p>Open-source projects often have trouble securing the hardware resources, but the "social" resources for producing a large dataset are much easier to come by in OSS projects. The data an OSS project collects might even be better, since it doesn't have to rely on paying a handful of minimum-wage workers to produce thousands of examples.<p>One of the main objectives is in fact to reduce the bias introduced by OpenAI's screening and selection process, which is doable since many more people work on generating the data.</p>
]]></description><pubDate>Sat, 04 Feb 2023 21:43:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=34658679</link><dc:creator>mattalex</dc:creator><comments>https://news.ycombinator.com/item?id=34658679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34658679</guid></item></channel></rss>