<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: filipstrand</title><link>https://news.ycombinator.com/user?id=filipstrand</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 18:43:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=filipstrand" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by filipstrand in "Flux 2 Klein pure C inference"]]></title><description><![CDATA[
<p>Really cool project! Impressive to see this being done in pure C.<p>I'm the maintainer of MFLUX (<a href="https://github.com/filipstrand/mflux" rel="nofollow">https://github.com/filipstrand/mflux</a>), which does a similar thing, but at a higher level using the MLX framework optimised for Apple Silicon. I just merged Flux 2 Klein support as well and was happy to see this discussion :)<p>I started out doing this type of work roughly 1.5 years ago when FLUX.1 was released and have been doing it off and on ever since with newer models, trying to use more and more AI over time.<p>At one point, I vibe-coded a debugger to help the coding agent along. It worked OK, but as models have gotten stronger, this doesn't really seem to be needed anymore. My latest version simply has a SKILL.md that outlines my overall porting strategy (<a href="https://github.com/filipstrand/mflux/blob/main/.cursor/skills/mflux-model-porting/SKILL.md" rel="nofollow">https://github.com/filipstrand/mflux/blob/main/.cursor/skill...</a>). Somewhat surprisingly, this actually works now with Cursor + Codex-5.2, with little human intervention.<p>> Even if the code was generated using AI, my help in steering towards the right design, implementation choices, and correctness has been vital during the development.<p>This definitely resonates! Curious to hear more about what worked/didn't for you. A couple of things I've found useful:<p>- Porting the pipeline backwards: This is the way I did it personally before using any coding models. The typical image generation flow is the following:<p>1. Text_encodings (+ random_noise_latent)
2. Transformer loop
3. VAE decoding<p>I found that starting with the VAE (by feeding it pre-loaded tensors from the reference implementation, extracted at specific locations) was the quickest way to get to an actual generated image. Once the VAE is done and verified, only then do I proceed backwards through the chain and handle the Transformer, etc. I still prefer to do it this way, and I like to manually intervene between steps 3, 2, and 1, but maybe this won't actually be needed soon?<p>- Also, with the VAE, if you care about implementing the encoding functionality (e.g. to be used with img2img features), the round-trip test is a very quick way to verify correctness:<p>image_in -> encode -> decode -> image_out : compare(image_in, image_out)<p>- Investing in a good foundation for weight handling, especially when doing repeat work across similar models. Earlier coding models would easily get confused about weight assignment, naming conventions, etc. A lot of time could be wasted because weight assignment failed (sometimes silently) early on.</p>
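<p>For the curious, the backwards-porting idea above can be sketched as a stage-by-stage check against reference tensors dumped from the original pipeline. This is just an illustration, not mflux's actual code: the `vae_decode` stand-in, the tolerance, and the fabricated tensors are all hypothetical (in practice the reference arrays would be `np.load()`-ed from dumps of the original implementation).

```python
import numpy as np

# Hypothetical stand-in for the ported VAE decoder; in a real port this
# would be the MLX implementation under test.
def vae_decode(latent: np.ndarray) -> np.ndarray:
    # Toy "decoder": a trivial affine map, just so the sketch runs.
    return latent * 0.5 + 0.1

def check_stage(name, fn, ref_in, ref_out, atol=1e-3):
    """Feed a pre-extracted reference input into one ported stage and
    compare its output against the reference output captured at the
    same point in the original pipeline."""
    out = fn(ref_in)
    ok = bool(np.allclose(out, ref_out, atol=atol))
    diff = float(np.max(np.abs(out - ref_out)))
    print(f"{name}: {'OK' if ok else 'MISMATCH'} (max abs diff {diff:.2e})")
    return ok

# Fabricated reference tensors so the example is self-contained.
latent = np.random.default_rng(0).normal(size=(1, 4, 8, 8)).astype(np.float32)
reference_image = latent * 0.5 + 0.1

assert check_stage("vae_decode", vae_decode, latent, reference_image)
```

Once the last stage passes, the same `check_stage` pattern moves one step earlier in the chain (Transformer, then text encoders), so each stage is verified in isolation before the full pipeline runs end to end.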
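<p>The round-trip test is equally easy to sketch. Again a toy encoder/decoder pair stands in for a real VAE (which is lossy, hence a tolerance rather than exact equality); the function names and threshold here are illustrative assumptions.

```python
import numpy as np

# Toy codec standing in for a real VAE's encode/decode pair.
def encode(image: np.ndarray) -> np.ndarray:
    return (image - 0.5) * 2.0  # map [0, 1] -> [-1, 1]

def decode(latent: np.ndarray) -> np.ndarray:
    return latent / 2.0 + 0.5   # inverse map back to [0, 1]

def round_trip_error(image: np.ndarray) -> float:
    """image_in -> encode -> decode -> image_out; return mean abs error."""
    return float(np.mean(np.abs(decode(encode(image)) - image)))

image_in = np.random.default_rng(1).uniform(0.0, 1.0, size=(64, 64, 3))
err = round_trip_error(image_in)
print(f"round-trip mean abs error: {err:.3e}")
assert err < 1e-2, "VAE round trip drifted too far"
```

With a real VAE the acceptable error bound has to be chosen empirically, but a sudden jump in round-trip error is a cheap signal that the encoder (or its weight loading) is broken.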
]]></description><pubDate>Mon, 19 Jan 2026 02:32:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46674450</link><dc:creator>filipstrand</dc:creator><comments>https://news.ycombinator.com/item?id=46674450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46674450</guid></item><item><title><![CDATA[Run FLUX.1 locally on your Mac]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/filipstrand/mflux">https://github.com/filipstrand/mflux</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41420888">https://news.ycombinator.com/item?id=41420888</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 01 Sep 2024 22:18:03 +0000</pubDate><link>https://github.com/filipstrand/mflux</link><dc:creator>filipstrand</dc:creator><comments>https://news.ycombinator.com/item?id=41420888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41420888</guid></item><item><title><![CDATA[Run FLUX.1-Schnell locally on your Mac]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/filipstrand/mflux">https://github.com/filipstrand/mflux</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41229970">https://news.ycombinator.com/item?id=41229970</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 12 Aug 2024 22:15:46 +0000</pubDate><link>https://github.com/filipstrand/mflux</link><dc:creator>filipstrand</dc:creator><comments>https://news.ycombinator.com/item?id=41229970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41229970</guid></item></channel></rss>