<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: totalperspectiv</title><link>https://news.ycombinator.com/user?id=totalperspectiv</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 09 May 2026 14:40:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=totalperspectiv" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by totalperspectiv in "Mojo 1.0 Beta"]]></title><description><![CDATA[
<p>"requires" is a strong word, but I implemented an alignment kernel that runs on the GPU.<p>Overall I think there is going to be a lot of "old" GPU compute hanging around, and now that writing kernels is a lot easier than it has been, we might as well try and see what algorithms we can get working there.<p>I originally picked up Mojo for the SIMD, not for the GPU kernels. The SIMD usability in Mojo is outstanding.<p>Paper on the tool I wrote: <a href="https://doi.org/10.1093/bioadv/vbaf292" rel="nofollow">https://doi.org/10.1093/bioadv/vbaf292</a></p>
]]></description><pubDate>Sat, 09 May 2026 00:04:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=48070289</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=48070289</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48070289</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Mojo 1.0 Beta"]]></title><description><![CDATA[
<p>Me too! I've been using it for bioinformatics-related work, and it is absolutely fantastic. I can't wait for it to hit fully open-source status so I can recommend it more easily.</p>
]]></description><pubDate>Fri, 08 May 2026 20:15:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=48068198</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=48068198</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48068198</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Mojo 1.0 Beta"]]></title><description><![CDATA[
<p>Having written a lot of Mojo over the last two years, just for fun, I can say it's a really cool language: an ownership model adjacent to Rust's, comptime that is more powerful than Zig's, a rich type system, first-class SIMD support, etc.<p>Performance-wise, it's the first language in a long time that isn't just an LLVM wrapper. LLVM is still involved, but they use it differently than, say, Rust or Zig.<p>Very excited for Mojo once it's open-sourced later this year.</p>
]]></description><pubDate>Fri, 08 May 2026 20:08:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=48068083</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=48068083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48068083</guid></item><item><title><![CDATA[New comment by totalperspectiv in "The Impossible Optimization, and the Metaprogramming to Achieve It"]]></title><description><![CDATA[
<p>The author works for Modular. He shared the write-up on the Mojo Discord. I think Mojo users were the intended audience.</p>
]]></description><pubDate>Sat, 01 Nov 2025 12:07:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45781023</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45781023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45781023</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Removing newlines in FASTA file increases ZSTD compression ratio by 10x"]]></title><description><![CDATA[
<p>I've only tested this when writing my own parser, where I could skip the record-end checks, so I don't know if it improves perf on an existing parser. Excited to see what you find!</p>
]]></description><pubDate>Mon, 15 Sep 2025 15:19:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45250706</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45250706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45250706</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Removing newlines in FASTA file increases ZSTD compression ratio by 10x"]]></title><description><![CDATA[
<p>Removing the wrapping newlines from the FASTA/FASTQ convention also dramatically improves parsing perf, since you don't have to do as much lookahead to find record ends.</p>
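<p>To make the lookahead point concrete, here is a minimal Python sketch (purely illustrative, not any real tool's parser). With one sequence line per record, each record is exactly two lines and the parser never scans ahead; with wrapped sequences, it must keep reading until the next "&gt;" header to know where the record ends.
<pre><code>
```python
def parse_unwrapped_fasta(text):
    """Unwrapped FASTA: each record is exactly a '>' header line plus
    one sequence line, so no lookahead is needed to find record ends."""
    records = []
    lines = text.splitlines()
    for i in range(0, len(lines), 2):
        header = lines[i]
        assert header.startswith(">"), "expected a header line"
        records.append((header[1:], lines[i + 1]))
    return records

def parse_wrapped_fasta(text):
    """Wrapped FASTA: the sequence may span many lines, so the parser
    must scan ahead to the next '>' (or EOF) to find the record end."""
    records = []
    name, chunks = None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if name is not None:
                records.append((name, "".join(chunks)))
            name, chunks = line[1:], []
        else:
            chunks.append(line)
    if name is not None:
        records.append((name, "".join(chunks)))
    return records

# Same records, two layouts: both parsers agree on the content.
wrapped = ">seq1\nACGT\nACGT\n>seq2\nTTTT\n"
unwrapped = ">seq1\nACGTACGT\n>seq2\nTTTT\n"
assert parse_wrapped_fasta(wrapped) == parse_unwrapped_fasta(unwrapped)
```
</code></pre>
The unwrapped loop is also branch-light and chunkable, which is where the perf win tends to come from.</p>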
]]></description><pubDate>Mon, 15 Sep 2025 14:17:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45250047</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45250047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45250047</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Removing newlines in FASTA file increases ZSTD compression ratio by 10x"]]></title><description><![CDATA[
<p>> a testament to the massive gap in perceived vs actual programming ability of the average bioinformatician.<p>That's not really a fair statement. Practically all software bears the weight of some early poor choice that keeps getting carried forward by momentum. The FASTA and FASTQ formats are exceptionally dumb, though.</p>
]]></description><pubDate>Mon, 15 Sep 2025 14:13:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45250005</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45250005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45250005</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Matmul on Blackwell: Part 2 – Using Hardware Features to Optimize Matmul"]]></title><description><![CDATA[
<p>Because I was originally writing some very CPU-intensive SIMD stuff, which Mojo is also fantastic for. Once I got that working and running nicely, I decided to try getting the same algo running on the GPU since, at the time, they had just open-sourced the GPU parts of the stdlib. It was really easy to get going with.<p>I have not used Triton/CuTe/CUTLASS though, so I can't compare against anything other than CUDA, really.</p>
]]></description><pubDate>Mon, 08 Sep 2025 13:22:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45167931</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45167931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45167931</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Matmul on Blackwell: Part 2 – Using Hardware Features to Optimize Matmul"]]></title><description><![CDATA[
<p>I can confirm, it’s quite nice.</p>
]]></description><pubDate>Sun, 07 Sep 2025 14:59:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45158770</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45158770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45158770</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Matmul on Blackwell: Part 2 – Using Hardware Features to Optimize Matmul"]]></title><description><![CDATA[
<p>They let you write a kernel for Nvidia, or AMD, that takes full advantage of either vendor’s hardware, then throw a compile-time if-statement in there to switch which kernel to use based on the hardware available.<p>So you can support either vendor with as-good-as-vendor-library performance. That’s not lock-in, to me at least.<p>It’s not as good as the compiler being able to just magically produce optimized kernels for arbitrary hardware, fully agree there. But it’s a big step forward from CUDA/HIP.</p>
]]></description><pubDate>Sun, 07 Sep 2025 14:58:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45158760</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45158760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45158760</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Matmul on Blackwell: Part 2 – Using Hardware Features to Optimize Matmul"]]></title><description><![CDATA[
<p>I have used Mojo quite a bit. It’s fantastic and lives up to every claim it makes. When the compiler becomes open source I fully expect it to really start taking off for data science.<p>Modular also has its paid platform for serving models called Max. I’ve not used that but heard good things.</p>
]]></description><pubDate>Sun, 07 Sep 2025 01:18:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45154439</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45154439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45154439</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Matmul on Blackwell: Part 2 – Using Hardware Features to Optimize Matmul"]]></title><description><![CDATA[
<p>I don’t follow your logic. Mojo can target multiple GPU vendors. What is the Modular-specific lock-in?</p>
]]></description><pubDate>Sun, 07 Sep 2025 01:16:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45154429</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45154429</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45154429</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Optimising for maintainability – Gleam in production at Strand"]]></title><description><![CDATA[
<p>I can't speak to Gleam, but for Elixir I just used Burrito to create a single executable: <a href="https://github.com/burrito-elixir/burrito" rel="nofollow">https://github.com/burrito-elixir/burrito</a> I think it works for just Erlang too.</p>
]]></description><pubDate>Thu, 28 Aug 2025 19:46:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45056265</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=45056265</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45056265</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Nextflow: System for creating scalable, portable, reproducible workflows"]]></title><description><![CDATA[
<p>I really wish Crystal had taken off a bit. I thought it had a chance in bfx with some good benchmarking and PR by lh3 in biofast.</p>
]]></description><pubDate>Wed, 16 Jul 2025 14:55:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44583013</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=44583013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44583013</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Nextflow: System for creating scalable, portable, reproducible workflows"]]></title><description><![CDATA[
<p>I would rather write Groovy than YAML any day of the week.<p>Why did you rule out Nextflow or Snakemake? I believe they both work with k8s clusters.<p>Argo doesn’t look great from my standpoint as a workflow author.</p>
]]></description><pubDate>Wed, 16 Jul 2025 11:58:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44581230</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=44581230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44581230</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Nextflow: System for creating scalable, portable, reproducible workflows"]]></title><description><![CDATA[
<p>NF Tower / Seqera would be the selling points. They offer a nice UX for managing pipelines and abstract over AWS.<p>Technically, Snakemake can do it all, but in practice NF seems to scale up a bit better.<p>That said, if you don’t need the UI for scientists, I’d stick with Snakemake.</p>
]]></description><pubDate>Wed, 16 Jul 2025 11:51:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44581183</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=44581183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44581183</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Nextflow: System for creating scalable, portable, reproducible workflows"]]></title><description><![CDATA[
<p>Cool seeing a workflow language pop up on HN!<p>Nextflow and Snakemake are the two most-used options in bioinformatics these days, with WDL trailing them.<p>I really wish Nextflow were based on Scala and not Groovy, but so it goes.<p>There is a draft up for DSL3 that adds static types to the channels, which I’m very excited about. <a href="https://github.com/nf-core/fetchngs/pull/309">https://github.com/nf-core/fetchngs/pull/309</a></p>
]]></description><pubDate>Wed, 16 Jul 2025 09:03:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44580183</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=44580183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44580183</guid></item><item><title><![CDATA[New comment by totalperspectiv in "I'm dialing back my LLM usage"]]></title><description><![CDATA[
<p>I think you hit the nail on the head with the mental-model part. I really like this way of thinking about programming, "Programming as Theory Building": <a href="https://gist.github.com/onlurking/fc5c81d18cfce9ff81bc968a7f342fb1#programming-and-the-programmers-knowledge" rel="nofollow">https://gist.github.com/onlurking/fc5c81d18cfce9ff81bc968a7f...</a><p>I don't mind when other programmers use AI, and I use it myself. What I mind is the abdication of responsibility for the code or the result. I don't think we should issue a disclaimer when we use AI any more than I did when I used grep to do the log search. If we use it, we own the result of it as a tool and need to treat it as such. Extra important for generated code.</p>
]]></description><pubDate>Wed, 02 Jul 2025 15:25:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44444904</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=44444904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44444904</guid></item><item><title><![CDATA[Show HN: Ish – Grep-like text search with optimal alignment, built with Mojo]]></title><description><![CDATA[
<p>ish is a CLI tool for searching records using alignment methods. It’s record-type aware and supports lines, FASTA, and FASTQ. I was really pleased with the dev experience using Mojo. It’s still pre-1.0 and missing a few things, but overall it came together smoothly. Performance-wise, Mojo held up well. There's no direct apples-to-apples comparison for ish as a whole, but the core alignment algorithms are on par with the C++ reference (faster in one case, see the preprint linked below). Writing and shipping a GPU kernel as part of a CLI was especially cool. This was my first time with GPU programming, and Mojo made it feel first-class, though I don't have much CUDA experience to compare. Excited to see where Mojo goes. Once the compiler is open-sourced, the possibilities look wide open.<p>Preprint can be found here: <a href="https://www.biorxiv.org/content/10.1101/2025.06.04.657890v1" rel="nofollow">https://www.biorxiv.org/content/10.1101/2025.06.04.657890v1</a><p>Code can be found here: <a href="https://github.com/BioRadOpenSource/ish">https://github.com/BioRadOpenSource/ish</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44247480">https://news.ycombinator.com/item?id=44247480</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 11 Jun 2025 13:35:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44247480</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=44247480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44247480</guid></item><item><title><![CDATA[New comment by totalperspectiv in "Show HN: Ish – a grep like CLI tool using SIMD/GPU alignment, built with Mojo"]]></title><description><![CDATA[
<p>ish is a CLI tool for searching records using alignment methods. It’s record-type aware and supports lines, FASTA, and FASTQ.<p>I was really pleased with the dev experience using Mojo. It’s still pre-1.0 and missing a few things, but overall it came together smoothly.<p>Performance-wise, Mojo held up well. There's no direct apples-to-apples comparison for ish as a whole, but the core alignment algorithms are on par with the C++ reference (faster in one case, see preprint linked above).<p>Writing and shipping a GPU kernel as part of a CLI was especially cool. This was my first time with GPU programming, and Mojo made it feel first-class, though I don't have much CUDA experience to compare.<p>Excited to see where Mojo goes. Once the compiler is open-sourced, the possibilities look wide open.</p>
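<p>For anyone unfamiliar with alignment-based search: the core idea is local alignment scoring, classically Smith-Waterman. The Python sketch below is purely illustrative (it is not ish's implementation, which uses SIMD/GPU kernels); it just shows the scoring recurrence that alignment search is built on.
<pre><code>
```python
def smith_waterman(query, target, match=2, mismatch=-1, gap=-2):
    """Minimal Smith-Waterman local-alignment score (no traceback).
    The best cell of the DP matrix is the score of the best alignment
    of query against any substring of target; clamping at 0 is what
    makes the alignment 'local'."""
    cols = len(target) + 1
    prev = [0] * cols  # previous DP row
    best = 0
    for i in range(1, len(query) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (
                match if query[i - 1] == target[j - 1] else mismatch
            )
            # max over: restart (0), match/mismatch, gap in either sequence
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

# An exact hit embedded anywhere in the record scores len(query) * match.
assert smith_waterman("ACGT", "TTACGTTT") == 8
```
</code></pre>
Searching then amounts to computing this score per record and reporting records above a threshold, which is what makes it fuzzier (and costlier) than grep-style exact matching.</p>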
]]></description><pubDate>Tue, 10 Jun 2025 10:11:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44234847</link><dc:creator>totalperspectiv</dc:creator><comments>https://news.ycombinator.com/item?id=44234847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44234847</guid></item></channel></rss>