<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: elwypea</title><link>https://news.ycombinator.com/user?id=elwypea</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 15:51:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=elwypea" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by elwypea in "What went wrong with the Alan Turing Institute?"]]></title><description><![CDATA[
<p>Stable Diffusion was a productised version of work done at LMU. I'm not sure Germany is the best example of how AI funding goes wrong.</p>
]]></description><pubDate>Fri, 28 Mar 2025 01:57:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43500592</link><dc:creator>elwypea</dc:creator><comments>https://news.ycombinator.com/item?id=43500592</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43500592</guid></item><item><title><![CDATA[New comment by elwypea in "Confirmed: Reflection 70B's official API is a wrapper for Sonnet 3.5"]]></title><description><![CDATA[
<p><a href="https://xcancel.com/RealJosephus/status/1832904398831280448" rel="nofollow">https://xcancel.com/RealJosephus/status/1832904398831280448</a></p>
]]></description><pubDate>Mon, 09 Sep 2024 08:30:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=41486533</link><dc:creator>elwypea</dc:creator><comments>https://news.ycombinator.com/item?id=41486533</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41486533</guid></item><item><title><![CDATA[New comment by elwypea in "SiFive Rolls Out RISC-V Cores Aimed at Generative AI and ML"]]></title><description><![CDATA[
<p>What is the pass rate on torchbench? That gives a more realistic measure of how good a vendor's pytorch support is.<p>All the big chip startups have their own pytorch compiler that works on the examples they write themselves. From what I've seen of Groq, it doesn't appear to be any different.<p>The problem is that pytorch is incredibly permissive in what it lets users do. torch.compile is itself very new and far from optimal.</p>
]]></description><pubDate>Tue, 17 Oct 2023 19:10:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=37920109</link><dc:creator>elwypea</dc:creator><comments>https://news.ycombinator.com/item?id=37920109</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37920109</guid></item><item><title><![CDATA[New comment by elwypea in "SiFive Rolls Out RISC-V Cores Aimed at Generative AI and ML"]]></title><description><![CDATA[
<p>llama.cpp wasn't necessary to create demand for the chip it was originally written for (the Apple M1), whereas new hardware vendors need to demonstrate they can plug in to existing tools to generate enough demand to ship in volume.</p>
]]></description><pubDate>Tue, 17 Oct 2023 16:36:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=37917709</link><dc:creator>elwypea</dc:creator><comments>https://news.ycombinator.com/item?id=37917709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37917709</guid></item><item><title><![CDATA[New comment by elwypea in "SiFive Rolls Out RISC-V Cores Aimed at Generative AI and ML"]]></title><description><![CDATA[
<p>Even after prioritising tensorflow, keras, jax, etc., they can still afford to have a very large team working on torch_xla, and to hedge their bets with a separate team on torch_mlir.</p>
]]></description><pubDate>Tue, 17 Oct 2023 13:48:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=37915011</link><dc:creator>elwypea</dc:creator><comments>https://news.ycombinator.com/item?id=37915011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37915011</guid></item><item><title><![CDATA[New comment by elwypea in "SiFive Rolls Out RISC-V Cores Aimed at Generative AI and ML"]]></title><description><![CDATA[
<p>That might be good enough to get a hardware startup acquired, but not good enough to get major sales. Users want pytorch and negligible switching cost between chips.<p>The bigger problem for startups trying to muscle in on LLMs is that there isn't much room to improve on existing solutions by doing something radically different.</p>
]]></description><pubDate>Tue, 17 Oct 2023 13:15:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=37914495</link><dc:creator>elwypea</dc:creator><comments>https://news.ycombinator.com/item?id=37914495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37914495</guid></item><item><title><![CDATA[New comment by elwypea in "SiFive Rolls Out RISC-V Cores Aimed at Generative AI and ML"]]></title><description><![CDATA[
<p>Easier said than done. Even with Google-level resources, TPU support for pytorch is patchy (<a href="https://arxiv.org/abs/2309.07181" rel="nofollow noreferrer">https://arxiv.org/abs/2309.07181</a>). The device abstraction is not great; pytorch assumes CUDA in unexpected places.</p>
]]></description><pubDate>Tue, 17 Oct 2023 08:45:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=37912191</link><dc:creator>elwypea</dc:creator><comments>https://news.ycombinator.com/item?id=37912191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37912191</guid></item><item><title><![CDATA[New comment by elwypea in "Inmos and the Transputer – Parallel Ventures"]]></title><description><![CDATA[
<p>I guess some of the people who worked on the transputer later went on to design Graphcore's IPU? The architecture looks similar (and it's Bristol-based).</p>
]]></description><pubDate>Mon, 28 Aug 2023 08:12:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=37291212</link><dc:creator>elwypea</dc:creator><comments>https://news.ycombinator.com/item?id=37291212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37291212</guid></item></channel></rss>