<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: flux1krea</title><link>https://news.ycombinator.com/user?id=flux1krea</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 00:08:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=flux1krea" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by flux1krea in "Gemini 2.5 Flash Image"]]></title><description><![CDATA[
<p>The consistency of the characters is really astonishing. It can be used for free at <a href="https://www.nano-banana.com" rel="nofollow">https://www.nano-banana.com</a></p>
]]></description><pubDate>Tue, 02 Sep 2025 05:44:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45099504</link><dc:creator>flux1krea</dc:creator><comments>https://news.ycombinator.com/item?id=45099504</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45099504</guid></item><item><title><![CDATA[New comment by flux1krea in "Ask HN: What's your most valuable query to an LLM?"]]></title><description><![CDATA[
<p>My useful local stack: Ollama + Llama 3.1 8B (chat/RAG), VSCode + Continue.dev for coding, Qdrant for lightweight retrieval, all on a 64 GB RAM desktop. It works offline with low latency; the main gotcha is context blow-ups, which I mitigate with chunking plus token monitoring.
Bonus: I A/B prompts/styles in the cloud first, then reproduce in local ComfyUI. Disclosure: I built flux1krea.app to baseline prompts/styles before moving local: <a href="https://www.flux1krea.app" rel="nofollow">https://www.flux1krea.app</a></p>
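<p>The "chunking + token monitoring" mitigation above can be sketched roughly as follows. This is a minimal illustration, not the commenter's actual code: <code>chunk_text</code>, <code>MAX_TOKENS</code>, and the word-count token estimate are all assumptions (a real setup would use the model's own tokenizer).</p>

```python
# Sketch of token-budgeted chunking to avoid context blow-ups.
# Assumptions (not from the comment): token counts approximated by
# whitespace words; MAX_TOKENS and overlap values are illustrative.
MAX_TOKENS = 512  # assumed per-chunk budget for a small local model

def count_tokens(text: str) -> int:
    """Rough token estimate: whitespace-separated words."""
    return len(text.split())

def chunk_text(text: str, max_tokens: int = MAX_TOKENS, overlap: int = 32):
    """Split text into overlapping word windows, each under max_tokens."""
    words = text.split()
    step = max_tokens - overlap  # slide forward, keeping some overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break  # last window already covers the tail
    return chunks
```

<p>Each chunk stays under the budget while overlapping its neighbor, so retrieval (e.g. into Qdrant) never feeds the model a span longer than its context allows.</p>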
]]></description><pubDate>Sat, 16 Aug 2025 10:47:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44922089</link><dc:creator>flux1krea</dc:creator><comments>https://news.ycombinator.com/item?id=44922089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44922089</guid></item></channel></rss>