<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: matu3ba</title><link>https://news.ycombinator.com/user?id=matu3ba</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 17:35:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=matu3ba" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by matu3ba in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>> Engineering isn't free, models tend to grow obsolete as the price/capability frontier advances, and AI specialists are less of a commodity than AI inference is. I'm inclined to bet against approaches like this on a principle.<p>This does not sound like it will simplify the training and data side, unless their models, or subsequent ones, can somehow be efficiently utilized for that.
However, this development may lead to (open source) hardware and distributed system compilation, EDA tooling, bus system design, etc. getting the attention and funding they deserve.
In turn, new hardware may lead to more competition in training and data instead of the current NVIDIA monopoly on the model-training market.
So I think you're correct for ~5 years.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:14:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094774</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=47094774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094774</guid></item><item><title><![CDATA[New comment by matu3ba in "RCC: A Boundary Theory Explaining Why LLMs Still Hallucinate"]]></title><description><![CDATA[
<p>The ideas sound intriguing at first sight, so I have some questions:
1 Do you have a mathematical formalization, and if not, why not (yet)?
2 What would be necessary as data/proof to indicate/show either result?
3 Did you try to apply probabilistic programming (theories) to your theory?
4 There is probabilistic concolic execution and probabilistic formal verification. How do these relate to your theory?</p>
]]></description><pubDate>Wed, 04 Feb 2026 20:29:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46891218</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46891218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46891218</guid></item><item><title><![CDATA[New comment by matu3ba in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>Sorry, it's 4am and I should sleep, but I got very interested.
Thank you very much for the excellent overview. It answers all my current questions.</p>
]]></description><pubDate>Sat, 10 Jan 2026 02:56:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46562294</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46562294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46562294</guid></item><item><title><![CDATA[New comment by matu3ba in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>1 Does this mean that Sledgehammer and CoqHammer offer concolic testing based on an input framework (say, some formalization of a computing/math system) for some sort of system execution/evaluation, or does this only work for hand-rolled systems/mathematical expressions?<p>Sorry for my probably naive questions; I'm trying to map the computing model of math solvers to common PL semantics. There is probably better overview literature. I'd like to get an overview of proof system runtime semantics for later usage.
2 Is there an equivalent of fuzz testing (of computing systems) in math, say to construct the general proof framework?
3 Or how are proof frameworks (based on ideas how the proof could work) constructed?
4 Do I understand correctly that math in proof systems works with term rewrite systems plus the chosen theory/logic as the computing model of valid representations and operations? How is the step semantics then formally defined?</p>
]]></description><pubDate>Sat, 10 Jan 2026 02:10:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46562061</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46562061</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46562061</guid></item><item><title><![CDATA[New comment by matu3ba in "Linear Address Spaces: Unsafe at any speed (2022)"]]></title><description><![CDATA[
<p>> we could design something faster, safer, overall simpler, and easier to program<p>I remain doubtful of this for general-purpose computing principles: hardware for low latency/high throughput is at odds with full security (absence of observable side channels). Optimal latency/throughput requires time-constrained hardware programming with FPGAs or building hardware (high cost), usually programmed on dedicated hardware/software or via things like system-bypass solutions.
Simplicity is at odds with generality, see weak/strong formal systems vs strong/weak semantics.<p>If you factor those compromises in, then you'll end up with the current state plus historical mistakes like the missing vertical system integration of software stacks above kernel space as TCB, bad APIs due to missing formalization, CHERI with its current shortcomings, etc.<p>I do expect things to change once security with a mandatory security processor becomes more of a requirement, leading to multi-CPU solutions and the potential for developers to use complex+simple CPUs on one system, meaning roughly time-accurate virtual and/or real ones.<p>> The second is that there isn’t strong demand.<p>This is not true for virtualization and security use cases, but it is not that obvious yet due to the lack of widespread attacks, see side-channel leaks in cloud solutions. Take a look at the growth of hardware security module vendors.</p>
]]></description><pubDate>Mon, 05 Jan 2026 10:11:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46497070</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46497070</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46497070</guid></item><item><title><![CDATA[New comment by matu3ba in "Linear Address Spaces: Unsafe at any speed (2022)"]]></title><description><![CDATA[
<p>CHERI on its own does not fix many of the side channels, which would need something like "BLACKOUT: Data-Oblivious Computation with Blinded Capabilities", but as I understand it, there is no consensus/infrastructure on how to do efficient capability revocation (potentially in hardware), see <a href="https://lwn.net/Articles/1039395/" rel="nofollow">https://lwn.net/Articles/1039395/</a>.<p>On top of that, as I understand it, CHERI has no widespread concept of how to allow disabling/separation of workloads for ultra-low-latency/high-throughput applications in mixed-criticality systems in practice. The only system I'm aware of with practical timing guarantees that also allows virtualization is seL4,
but again there are no practical guides with trade-offs in numbers yet.</p>
]]></description><pubDate>Mon, 05 Jan 2026 09:05:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46496699</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46496699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46496699</guid></item><item><title><![CDATA[New comment by matu3ba in "Memory Subsystem Optimizations"]]></title><description><![CDATA[
<p>The blog looks nice, especially in having simple-to-understand numbers.
To me the memory subsystem articles are missing the spicier pieces like platform semantics, barriers, and de-virtualization (the latter discussed in an article separate from the series).
In the other articles I'd also expect debugging format trade-offs (DWARF vs ORC vs alternatives), virtualization performance and relocation effects to be briefly discussed, but I could not find them.
There are a few C++ articles missing: 1. cache-friendly structures in C++, because the standard std::map etc. are unfortunately not written to be cache-friendly (only std::vector and std::deque<T> with a high enough block_size), ideally with performance numbers, 2. what to use for destructive moves, or how to roll your own (they did not make it into C++26).</p>
]]></description><pubDate>Thu, 01 Jan 2026 21:23:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46458141</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46458141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46458141</guid></item><item><title><![CDATA[New comment by matu3ba in "Constant-time support coming to LLVM: Protecting cryptographic code"]]></title><description><![CDATA[
<p>Sorry for necro-bumping, but there is a paper doing exactly that, besides various other things to eliminate timing channels, and it also claims to prevent attacks based on speculative execution etc.: "BLACKOUT: Data-Oblivious Computation with Blinded Capabilities" <a href="https://arxiv.org/abs/2504.14654" rel="nofollow">https://arxiv.org/abs/2504.14654</a>. They basically utilize another bit of CHERI for "blinded capabilities" and methods to mitigate the potential problems you identified.</p>
]]></description><pubDate>Tue, 09 Dec 2025 20:54:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46210524</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46210524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46210524</guid></item><item><title><![CDATA[New comment by matu3ba in "Zig's new plan for asynchronous programs"]]></title><description><![CDATA[
<p>> I still think sans-io at the language level might be the future, but this isn't a complete solution. Maybe we should be simply compiling all fns to state machines (with the Rust polling implementation detail, a sans-io interface could be used to make such functions trivially sync - just do the syscall and return a completed future).<p>Can you be more specific about what is missing in sans-io with explicit state machines, i.e., why it would not be a complete solution for static and dynamic analysis?
Serializing the state machine sounds excellent for static and dynamic analysis.
I'd guess the debugging infrastructure for optimization passes and run-time debugging is missing, or is there more?</p>
]]></description><pubDate>Tue, 02 Dec 2025 22:38:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46127926</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46127926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46127926</guid></item><item><title><![CDATA[New comment by matu3ba in "Constant-time support coming to LLVM: Protecting cryptographic code"]]></title><description><![CDATA[
<p>What would be more sane alternatives, once it becomes obvious that any timing side effect is a potential attack vector?
See <a href="https://www.hertzbleed.com/" rel="nofollow">https://www.hertzbleed.com/</a> for frequency side channels.
I only see dedicated security cores as an option, with fast data lanes to the CPU, similar to what Apple is doing with the Secure Enclave. Or do you have better suggestions that still allow performance and power savings?</p>
]]></description><pubDate>Wed, 26 Nov 2025 13:12:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46057066</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46057066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46057066</guid></item><item><title><![CDATA[New comment by matu3ba in "Ilya Sutskever: We're moving from the age of scaling to the age of research"]]></title><description><![CDATA[
<p>I like the idea of voluntary thinking very much, but I have no idea how to properly formalize or define it.</p>
]]></description><pubDate>Wed, 26 Nov 2025 13:00:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46056969</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46056969</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46056969</guid></item><item><title><![CDATA[New comment by matu3ba in "Ilya Sutskever: We're moving from the age of scaling to the age of research"]]></title><description><![CDATA[
<p>> Forming deterministic actions is a sign of computation, not intelligence.<p>What computations can process and formalize other computations as a transferable entity/medium, meaning teach other computations via various mediums?<p>> Intelligence is probably (I guess) dependent on the nondeterministic actions.<p>I do agree, but I think intelligent actions should be deterministic, even if expressing non-deterministic behavior.<p>> Computation is when you query a standby, doing nothing, machine and it computes a deterministic answer.<p>There are whole languages for stochastic programming <a href="https://en.wikipedia.org/wiki/Stochastic_programming" rel="nofollow">https://en.wikipedia.org/wiki/Stochastic_programming</a> to express non-deterministic behavior deterministically, so I think that is not true.<p>> Intelligence (or at least some sign of it) is when machine queries you, the operator, of its own volition.<p>So you think the entity that holds more control/force to do arbitrary things as it sees fit is more intelligent? That sounds to me more like the definition of power, not intelligence.</p>
]]></description><pubDate>Wed, 26 Nov 2025 12:03:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46056597</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46056597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46056597</guid></item><item><title><![CDATA[New comment by matu3ba in "Ilya Sutskever: We're moving from the age of scaling to the age of research"]]></title><description><![CDATA[
<p>> I mean, we've proven that predicting certain things (even those that require nothing but deduction) require more computational resources regardless of the algorithm used for the prediction.<p>I understand proofs as formalized deterministic actions for given inputs, and processing as the solving of various proofs.<p>> Formalising a process, i.e. inferring the rules from observation through induction, may also be dependent on available computational resources.<p>Induction is only one way to construct a process, and there are various informal processes (social norms etc.). It is true that the overall process depends on various things like available data points and resources.<p>> I don't have one except for "an overall quality of the mental processes humans present more than other animals".<p>How would you formalize the process of self-reflection, or of believing in completely made-up stories, which is often used as an example of what distinguishes humans from animals? It is hard to make a clear distinction in language and math, since we mostly do not understand animal language and math, or other well-observable behavior based on them.</p>
]]></description><pubDate>Wed, 26 Nov 2025 09:53:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46055869</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46055869</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46055869</guid></item><item><title><![CDATA[New comment by matu3ba in "Ilya Sutskever: We're moving from the age of scaling to the age of research"]]></title><description><![CDATA[
<p>My definition of intelligence is the capability to process and formalize a deterministic action from given inputs as a transferable entity/medium.
In other words, knowing how to manipulate the world directly and indirectly via deterministic actions and known inputs, and how to teach others via various mediums.
As an example, you can be very intelligent at software programming, but socially very dumb (for example, unable to socially influence others).<p>As another example, if you do not understand another person (in language) and understand neither the person's work nor its influence, then you would have no assumption about the person's intelligence outside of your context of how smart you assume humans to be.<p>ML/AI for text inputs is at best stochastic for context windows with language, or plain wrong, so it does not satisfy the definition. Well (formally) specified problems with smaller scope tend to work well from what I've seen so far.
The working ML/AI problems known to me are calibration/optimization problems.<p>What is your definition?</p>
]]></description><pubDate>Tue, 25 Nov 2025 23:26:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46052083</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46052083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46052083</guid></item><item><title><![CDATA[New comment by matu3ba in "A Reverse Engineer's Anatomy of the macOS Boot Chain and Security Architecture"]]></title><description><![CDATA[
<p>Can you recommend a more factual and complete overview of the Apple security architecture and boot chain than this bug-ridden article? I'm interested in hardware security (models).</p>
]]></description><pubDate>Sun, 23 Nov 2025 11:15:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46022600</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46022600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46022600</guid></item><item><title><![CDATA[New comment by matu3ba in "Over-regulation is doubling the cost"]]></title><description><![CDATA[
<p>Money is created and distributed via 1 the banking system and 2 the government.
Are 1 rules, 2 checks and 3 punishment enforced against the banking system and government, or only to stabilize and extend those systems?
I'd argue the introduction of (arbitrary) rules is often just an excuse to amass power, but the enforcement of checks and punishments decides who holds (political) power.</p>
]]></description><pubDate>Fri, 21 Nov 2025 08:34:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46002490</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46002490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46002490</guid></item><item><title><![CDATA[New comment by matu3ba in "Over-regulation is doubling the cost"]]></title><description><![CDATA[
<p>> Regulations are practically the only thing standing between the rich and the powerful and their incessant attempt to drive even more wealth into their own pockets at the expense of ordinary people's health, wealth, future, welfare, housing, etc.<p>Try to rethink how money is created, how money gets its value, and how and by whom that wealth is distributed. Regulation as in "make rules" does not enforce rules, and enforcement is the definition of (political) power.<p>> The other important requirement is to increase the staffing of the regulatory agencies so that their individual workload doesn't become a bottleneck in the entire process. There is a scientific method to assess the staffing requirements of public service institutions. According to that, a significant number of government departments all over the world are understaffed.<p>Why are you claiming "There is a scientific method" without providing it? Governments do (risk) management via 1 rules, 2 checks and 3 punishment, and we already know from software that complexity in a system is only bounded by the system still working, with eventual necessary (ideally partial) resets.
Ideally governments would be structured like that, but that goes against a government's interest in extending power/control.
Also, "system working" is decided by the current ruling class/group, besides markets and physical constraints.</p>
]]></description><pubDate>Fri, 21 Nov 2025 07:51:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46002192</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=46002192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46002192</guid></item><item><title><![CDATA[New comment by matu3ba in "The Future of Programming (2013) [video]"]]></title><description><![CDATA[
<p>> Call me grumpy and sleep deprived, but every year I look at this talk again, and every year I wonder... "now, what" ? What am I supposed to do, as a programmer, to change this sad state of things ?<p>That depends on your goals. If you are into building systems for selling them (or for production), then you are bound by the business model (platform vs library) and the use cases (to make money).
Otherwise, you are more limited in time.<p>To think more realistically about the reality you have to work with, take a look at <a href="https://www.youtube.com/watch?v=Cum5uN2634o" rel="nofollow">https://www.youtube.com/watch?v=Cum5uN2634o</a> about the types of (software) systems (decay), then decide what you would like to simplify and what you are willing to invest.
If you want to properly fix stuff, unfortunately you often first have to properly (formally) specify the current system(s) (design space) to use as a (test, etc.) reference for (partial) replacement/improvement/extension systems.<p>What these types of lectures usually skip over (as the essentials) are the involved complexity, solution trade-offs, and interoperability for meaningful use cases with current hw/sw/tools.</p>
]]></description><pubDate>Wed, 19 Nov 2025 18:14:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45982838</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=45982838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45982838</guid></item><item><title><![CDATA[New comment by matu3ba in "Open-source Zig book"]]></title><description><![CDATA[
<p>What is the best Erlang/Elixir design you can think of regarding standardized effect systems for recording non-determinism, replaying, and reversible computing? How do the performance numbers of Erlang/Elixir compare with Java and wasm?</p>
]]></description><pubDate>Mon, 17 Nov 2025 08:38:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45951854</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=45951854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45951854</guid></item><item><title><![CDATA[New comment by matu3ba in "Problems with C++ exceptions"]]></title><description><![CDATA[
<p>> Author completely misunderstands how to use exceptions and is just bashing them. A lot of what he says is inaccurate if not outwardly incorrect.<p>Would you mind elaborating on what you believe the misunderstandings are? Examples of incorrect/inaccurate statements and/or an article with better explanations of the mentioned use cases would be helpful.<p>> it's called std::expected<p>How does std::expected play together with all other possible error handling schemes? Can I get unique ids for errors to record (error) traces along functions?
What is the ABI of std::expected? Stable(ish), or is something planned, ideally to get something C-compatible?</p>
]]></description><pubDate>Wed, 12 Nov 2025 07:26:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45897277</link><dc:creator>matu3ba</dc:creator><comments>https://news.ycombinator.com/item?id=45897277</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45897277</guid></item></channel></rss>