<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: new_user55</title><link>https://news.ycombinator.com/user?id=new_user55</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 16:34:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=new_user55" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by new_user55 in "Framework Laptop 13 Pro"]]></title><description><![CDATA[
<p>My guess is Linux. Most out-of-the-box Linux laptops I've seen were Intel-based, and I suspect Intel's open source support is the best in the industry. Even on my current ThinkPad, the first thing I did was replace the WiFi module, swapping Realtek for Intel (the Realtek one was constantly hanging, dropping connections, etc.).</p>
]]></description><pubDate>Tue, 21 Apr 2026 18:50:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47852840</link><dc:creator>new_user55</dc:creator><comments>https://news.ycombinator.com/item?id=47852840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47852840</guid></item><item><title><![CDATA[New comment by new_user55 in "Show HN: XLA-based array computing framework for R"]]></title><description><![CDATA[
<p>It's really cool! If you don't mind me asking, does it support variable-size inputs? I'm a bit confused about JAX in that regard. I have been trying for a long time to run JAX StableHLO models in C++ for inference, but dynamic shapes were still an issue. If I understand correctly, it recompiles the kernels for every distinct shape at runtime, so if the inputs vary too much in shape, it spends considerable time recompiling kernels; for C++ inference that becomes unworkable. However, I could be wrong (I did not fully understand the issue, though the developer of gopjrt tried to explain it to me!). Do you have any thoughts on this?<p>e.g.:<p><a href="https://github.com/openxla/xla/issues/33092" rel="nofollow">https://github.com/openxla/xla/issues/33092</a>
<a href="https://github.com/openxla/xla/issues/35556" rel="nofollow">https://github.com/openxla/xla/issues/35556</a><p>Explanation from the gopjrt dev:<p><a href="https://github.com/gomlx/gopjrt/issues/59" rel="nofollow">https://github.com/gomlx/gopjrt/issues/59</a></p>
]]></description><pubDate>Thu, 12 Mar 2026 09:05:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348182</link><dc:creator>new_user55</dc:creator><comments>https://news.ycombinator.com/item?id=47348182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348182</guid></item><item><title><![CDATA[New comment by new_user55 in "ONNX runtime: Cross-platform accelerated machine learning"]]></title><description><![CDATA[
<p>We wanted to use ONNX Runtime for a "model driver" for MD simulations, so that any ML model could be used for molecular dynamics. The problem was that it was way too immature; for example, the ceiling function only works with single precision in ONNX. But the biggest issue was that we could not take derivatives in ONNX Runtime, so any complicated model that uses derivatives internally was a no-go. Does that limitation still exist? Do you know if it can take derivatives in training mode now?<p>Eventually we went with PyTorch-only support for the time being, while still exploring OpenXLA in place of ONNX as a universal adapter: <a href="https://github.com/ipcamit/colabfit-model-driver">https://github.com/ipcamit/colabfit-model-driver</a></p>
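<p>For context, "derivatives inside the model" in MD means forces are the negative gradient of the energy with respect to atomic positions, so the gradient has to be available at inference time. A hedged sketch in JAX with a made-up toy pair potential (not our actual model):</p><pre><code>import jax
import jax.numpy as jnp

def energy(positions):
    # Toy pair potential: sum of inverse pairwise distances.
    # The identity matrix keeps the diagonal away from 1/0.
    n = positions.shape[0]
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = jnp.sqrt(jnp.sum(diffs ** 2, axis=-1) + jnp.eye(n))
    return jnp.sum(1.0 / dists) - n  # drop the diagonal's self-terms

# Forces are F = -dE/dR, so the derivative lives inside the model.
forces = lambda r: -jax.grad(energy)(r)

r = jnp.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(forces(r))
</code></pre>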
]]></description><pubDate>Wed, 26 Jul 2023 01:04:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=36871876</link><dc:creator>new_user55</dc:creator><comments>https://news.ycombinator.com/item?id=36871876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36871876</guid></item><item><title><![CDATA[New comment by new_user55 in "ONNX runtime: Cross-platform accelerated machine learning"]]></title><description><![CDATA[
<p>Tinygrad is Python-only, right? Can it provide gradients at C++ runtime as well? ONNX Runtime has multiple language backends for inference.</p>
]]></description><pubDate>Wed, 26 Jul 2023 00:59:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=36871829</link><dc:creator>new_user55</dc:creator><comments>https://news.ycombinator.com/item?id=36871829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36871829</guid></item><item><title><![CDATA[New comment by new_user55 in "PyTorch 2.0"]]></title><description><![CDATA[
<p>There is torch-md: <a href="https://github.com/torchmd/torchmd" rel="nofollow">https://github.com/torchmd/torchmd</a></p>
]]></description><pubDate>Sat, 03 Dec 2022 01:56:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=33839545</link><dc:creator>new_user55</dc:creator><comments>https://news.ycombinator.com/item?id=33839545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33839545</guid></item></channel></rss>