<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Genego</title><link>https://news.ycombinator.com/user?id=Genego</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 16:19:50 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Genego" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Genego in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>I keep having this conversation with clients. If you want to allow an LLM to delete, create, or update data, you need a human in the loop and explicit HITL gating against execution, so that the agent can't even call the tool without triggering a confirmation in the UI (the confirmation then issues the actual tool call).</p>
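A minimal sketch of that gating pattern, assuming a tool registry keyed by name; all class and tool names here are illustrative, not from any specific framework:

```python
# HITL gating sketch: the agent can only *request* a destructive tool.
# The request is queued, and only a human confirmation from the UI
# issues the actual tool call. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable
import uuid

DESTRUCTIVE = {"delete_rows", "update_rows", "create_rows"}

@dataclass
class PendingAction:
    tool: str
    args: dict
    token: str = field(default_factory=lambda: uuid.uuid4().hex)

class GatedExecutor:
    def __init__(self, tools: dict[str, Callable]):
        self.tools = tools
        self.pending: dict[str, PendingAction] = {}

    def request(self, tool: str, args: dict) -> dict:
        """Called by the agent. Destructive tools are queued, never run."""
        if tool in DESTRUCTIVE:
            action = PendingAction(tool, args)
            self.pending[action.token] = action
            return {"status": "awaiting_confirmation", "token": action.token}
        return {"status": "done", "result": self.tools[tool](**args)}

    def confirm(self, token: str) -> dict:
        """Called only from the UI after a human clicks 'confirm'."""
        action = self.pending.pop(token)  # KeyError if unknown or already used
        return {"status": "done", "result": self.tools[action.tool](**action.args)}
```

The key property is that the agent-facing `request` path contains no code path that executes a destructive tool; execution lives only behind `confirm`, which the UI calls.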
]]></description><pubDate>Mon, 27 Apr 2026 10:07:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47919692</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=47919692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47919692</guid></item><item><title><![CDATA[New comment by Genego in "Ask HN: Who wants to be hired? (April 2026)"]]></title><description><![CDATA[
<p>SEEKING WORK | Full-stack Python/Django Developer (Gen-AI image and video generation)<p><pre><code>   Location: Thailand (UTC+7)
   Remote: Only
   Technologies: Django, Python, HTMX, Tailwind, Postgres, Replicate API, image generation pipelines, LoRA training workflows
   Résumé/CV: https://edwin.genego.io/about
   Email: edwin@genego.io

</code></pre>
I am a seasoned software engineer who grew up with a hacker's mindset. I am currently exploring creative AI tooling around image & video generation pipelines, multi-model orchestration, prompt engineering systems, and cost-optimized workflows. I am looking for a startup or agency interested in working with me, as I have availability coming up in the next few months. I have 10 years of full-stack experience, mostly with Django, Python & Tailwind. Most of my work is outlined on my website.<p><a href="https://edwin.genego.io/" rel="nofollow">https://edwin.genego.io/</a></p>
]]></description><pubDate>Fri, 03 Apr 2026 11:49:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47625603</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=47625603</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47625603</guid></item><item><title><![CDATA[New comment by Genego in "Ask HN: Who wants to be hired? (January 2026)"]]></title><description><![CDATA[
<p>SEEKING WORK | Full-stack Python/Django Developer (Gen-AI image and video generation)<p><pre><code>   Location: Thailand (UTC+7)
   Remote: Only
   Technologies: Django, Python, HTMX, Tailwind, Postgres, Replicate API, image generation pipelines, LoRA training workflows
   Résumé/CV: https://edwin.genego.io/about
   Email: edwin@genego.io

</code></pre>
Sr. Software Engineer building production Django apps with practical AI integration. I specialize in creative AI tooling: image generation pipelines, multi-model orchestration (Flux, SDXL), prompt engineering systems, and cost-optimized workflows. Current work: 20+ custom management commands for AI image generation, character IP systems, and scene replication with layered prompt architecture. I help teams ship AI-powered creative tools without risky rewrites, handling multi-model workflows, resume-capable operations, and obsessive cost tracking. Looking for fractional or project work (2-6 week cycles) involving generative AI, creative tooling, or content pipelines.<p><a href="https://edwin.genego.io/" rel="nofollow">https://edwin.genego.io/</a></p>
]]></description><pubDate>Sat, 03 Jan 2026 01:56:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46471987</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=46471987</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46471987</guid></item><item><title><![CDATA[New comment by Genego in "Django: what’s new in 6.0"]]></title><description><![CDATA[
<p>Whenever I saw people complain about LLMs writing code, I never really understood why they were so adamant that it just didn’t work for them at all. The moment I tried to use LLMs outside of Django, it became clear that some frameworks are much easier for LLMs to work with than others, and I immediately understood their frustrations.</p>
]]></description><pubDate>Wed, 10 Dec 2025 10:22:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46216131</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=46216131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46216131</guid></item><item><title><![CDATA[New comment by Genego in "We gave 5 LLMs $100K to trade stocks for 8 months"]]></title><description><![CDATA[
<p>When I see stuff like this, I feel like rereading the Incerto by Taleb just to refresh and sharpen my bullshit senses.</p>
]]></description><pubDate>Fri, 05 Dec 2025 01:07:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46155636</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=46155636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46155636</guid></item><item><title><![CDATA[New comment by Genego in "Django 6"]]></title><description><![CDATA[
<p>I feel very comfortable with Django on the frontend; what are you missing there? I usually use Tailwind or Bulma, with HTMX and AlpineJS. The experience can feel very React-like, even if you leave out HTMX. The frontend game of Django really changed about 2 years ago (at least for me).</p>
]]></description><pubDate>Fri, 05 Dec 2025 01:04:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46155600</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=46155600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46155600</guid></item><item><title><![CDATA[New comment by Genego in "Django 6"]]></title><description><![CDATA[
<p>Django has been one of the biggest reasons why web development has been so enjoyable to me. Whenever I switched to something else, I just felt too spoiled by everything that Django gives you. So I always ended up back with Django, and have no regrets at all specializing deep down that path.</p>
]]></description><pubDate>Thu, 04 Dec 2025 23:48:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46154942</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=46154942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46154942</guid></item><item><title><![CDATA[New comment by Genego in "Ask HN: Who wants to be hired? (December 2025)"]]></title><description><![CDATA[
<p>SEEKING WORK | Full-stack Python/Django Developer (Creative AI Focus: GenAI Image & Video Generation)<p><pre><code>   Location: Thailand (UTC+7)
   Remote: Only
   Technologies: Django, Python, HTMX, Tailwind, Postgres, image generation pipelines, LoRA training workflows
   Résumé/CV: https://edwin.genego.io
   Email: edwin@genego.io

</code></pre>
I am a Sr. SWE building production systems and workflows with GenAI. I am currently specializing in creating digital characters and universes through diffusion models (both video and images). On my blog (<a href="https://edwin.genego.io" rel="nofollow">https://edwin.genego.io</a>) you will find extensive case study material on the topic, as well as a showcase of my own creative skills. Keep in mind that I come to this through the lens of applied GenAI rather than a pure AI/ML background, although I have worked with AI/ML teams well before 2022.<p>I am currently looking for a startup, company, agency, or anyone really that is doing world, universe, or character building with AI, whether through GenAI models to build IP or something else. My current work spans 50+ custom management commands for AI image generation, character IP systems, and scene replication with layered prompt architecture, all more or less openly documented on my website. I am also looking for fractional or project work (2-6 week cycles) involving generative AI, creative tooling, or content pipelines. <a href="https://edwin.genego.io/blog" rel="nofollow">https://edwin.genego.io/blog</a></p>
]]></description><pubDate>Wed, 03 Dec 2025 00:46:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46128938</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=46128938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46128938</guid></item><item><title><![CDATA[New comment by Genego in "We're losing our voice to LLMs"]]></title><description><![CDATA[
<p>This will get better over time; there will be ways to make LLM output more uniquely yours. I personally enjoy writing all my blog posts with LLMs, because it's the only way I can turn the countless notes and drafts I accumulate while running experiments into public-facing documentation and blog posts that I would want people to read. There are at least 5-6 years of lost ideas, thoughts, and notes that I was not able to communicate the way I can now with LLMs. So I definitely found my voice here.</p>
]]></description><pubDate>Fri, 28 Nov 2025 00:01:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46074310</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=46074310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46074310</guid></item><item><title><![CDATA[New comment by Genego in "Cloudflare Global Network experiencing issues"]]></title><description><![CDATA[
<p>Yes (Asia)</p>
]]></description><pubDate>Tue, 18 Nov 2025 11:39:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45963832</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45963832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45963832</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>I am using Django and HTML (JS: AlpineJS & HTMX). Each page is created from scratch rather than from some CMS or template. I use Claude Code for that (with mem0.ai as an MCP) and build my entire development workspace and workflow around, and into, my website.</p>
]]></description><pubDate>Sun, 16 Nov 2025 00:43:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45941788</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45941788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45941788</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>You're just one person, and those people have their own audiences; your critique is just your own critique. Just because you don't like it doesn't mean it isn't resonating with others. I can tell you from the research I am doing for several hours per day on AI filmmaking that there are already a handful of creators making a living from this, with communities behind them that keep growing and audiences that keep expanding (some already have 100k to 1m subscribers across different social media channels). Some of them are even striking brand deals.<p>Entire narrative-driven AI stories, with AI characters in AI-generated universes: they are here already, but I can count those who do it well on two hands (last year, there were 1-2). This is going to accelerate, and if you think it's "slop" now, it just takes a few iterations of artists who you personally resonate with jumping onto this before you stop seeing it as slop. I am jumping on this because I can see very clearly where it will all lead. You don't have to like it, but it will arrive regardless.</p>
]]></description><pubDate>Sat, 15 Nov 2025 01:22:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45934207</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45934207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45934207</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>Thanks! It's really refreshing to work on this sort of stuff without even knowing what the end result is going to be. Just a hobby? Something that some new model or third-party app will completely replace next week? A new career path? Me getting back to my filmmaking and arts roots? I have no idea; I just know that it's some of the best fun I have had with software in my career. I am hoping that more people jump on this experimental path with GenAI, just for themselves or to see how far they can push boundaries.</p>
]]></description><pubDate>Sat, 15 Nov 2025 00:23:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45933822</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45933822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45933822</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>Automated generator-critique loops for evaluation may be really useful for building your own style libraries, because it's easy for an LLM agent to evaluate how close an image is to a reference style or scene. You end up with a series of base prompts and can then replicate that style across a whole franchise of stories. Most people still do it with reference images, and that doesn't produce very stable results. If you need some help with bounding boxes for nano-banana, feel free to send me a message!</p>
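One way such a loop could be structured, with the generator, critic, and prompt-reviser stubbed as callables since the real calls are model APIs; function names and the 0-1 score scale are my own assumptions:

```python
# Generator-critique loop sketch: generate, score against a reference
# style, revise the prompt, repeat until good enough. The three
# callables stand in for real model APIs (image model + vision LLM).
def refine_prompt(base_prompt, reference_style,
                  generate_image, critique_score, revise_prompt,
                  threshold=0.85, max_rounds=5):
    """Iterate until the critic scores the image close enough to the style.

    Returns (best_score, best_prompt), a candidate entry for a style library.
    """
    prompt = base_prompt
    best = (0.0, prompt)
    for _ in range(max_rounds):
        image = generate_image(prompt)
        score = critique_score(image, reference_style)  # assumed 0.0 .. 1.0
        if score > best[0]:
            best = (score, prompt)
        if score >= threshold:
            break
        prompt = revise_prompt(prompt, image, reference_style)
    return best
```

The surviving prompt (not the image) is the artifact worth storing, since it can be reapplied to new scenes in the same franchise.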
]]></description><pubDate>Sat, 15 Nov 2025 00:06:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45933681</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45933681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45933681</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>The issue for them is that once the tools exist, adoption only moves in one direction, and it will enable a whole wave of new artists. I sympathize with them, but if I enjoy GenAI art creation and see it as my genuine creative outlet, why would I stop? What about the thousands of others exploring this?<p>If at some point I also get very good at it, and the tech, models, and tools mature, this will turn into a real avenue; who are they to tell us not to pursue it?</p>
]]></description><pubDate>Fri, 14 Nov 2025 22:11:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45932767</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45932767</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45932767</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>No, I am using my own workflows and software for this. I made nano-banana accept my bounding boxes. Everything is possible with some good prompting: <a href="https://edwin.genego.io/blog/lpa-studio" rel="nofollow">https://edwin.genego.io/blog/lpa-studio</a> (there are some videos there of an earlier version, showing me editing a story). Either send the coords and describe the location well, or draw a box around the region and tell it to return the image without the drawn box and with only the requested changes.<p>It also works well if you draw a bounding box on the original image, ask Claude for a meta-prompt that deconstructs the changes into a much more detailed prompt, and then send the original image without the boxes for changes. It really depends on the changes you need and how long you're willing to wait.<p>- normal image editing response: 12-14s<p>- image editing response with Claude meta-prompting: 20-25s<p>- image editing response with Claude meta-prompting plus image deconstruction and prompt re-construction: 40-60s<p>(I use Replicate though, so the native API may be much faster.)<p>This way you can also move into new views of a scene by zooming the image in and out on the same aspect-ratio canvas and asking it to generatively fill the white borders. So you can go from a tight inside shot to viewing the same scene from outside a house window, or from inside the car to outside the car.</p>
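The "send the coords" variant could look something like this; the normalization, field names, and prompt wording are my own guesses at what a model responds to, not the author's exact format:

```python
# Sketch of the coordinate variant of bounding-box editing: embed a
# normalized box plus a plain-language location hint into the edit
# prompt. Wording and normalization scheme are illustrative only.
def bbox_edit_prompt(box_px, image_size, change, location_hint):
    """box_px = (left, top, right, bottom) in pixels; image_size = (w, h)."""
    w, h = image_size
    l, t, r, b = box_px
    # Normalize to 0..1 so the prompt is resolution-independent.
    norm = (round(l / w, 3), round(t / h, 3), round(r / w, 3), round(b / h, 3))
    return (
        f"Edit only the region bounded by (left, top, right, bottom) = {norm} "
        f"in normalized image coordinates, i.e. {location_hint}. "
        f"Change: {change}. Leave everything outside this region untouched."
    )
```

Describing the location in words alongside the numbers matters, since image models follow the textual hint more reliably than raw coordinates.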
]]></description><pubDate>Fri, 14 Nov 2025 06:45:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45924516</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45924516</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45924516</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>Yes, the prompt is composed of 7 different layers, where I group together coherent visual and temporal responsibilities. Depending on the scene, I usually only change 3-5 layers, while the base layers stay the same, so the scenes all appear within the same story universe and the same style. If something feels off or needs improvement, I adjust one layer after another and watch the effect on the entire story as well as on the individual scene level. Over time I have built up quite a few 7-layer style profiles that work well and that I can cast onto different story universes. Keep in mind this is heavy experimentation; there may be a much easier way to do this, but I am seeing success with it. <a href="https://edwin.genego.io/blog/lpa-studio" rel="nofollow">https://edwin.genego.io/blog/lpa-studio</a> - at any point I may throw this all out and start over, depending on how my understanding develops.<p>Bounding boxes: I actually send an image with a red box drawn around where the requested change is needed, and 8 out of 10 times it works well. When it doesn't, I use Claude to refine the prompt. The Claude API call I make can see the image and the prompt, and it understands the layering system. This is one of the 3 ways I edit; in another I just send the prompt to Claude without it looking at the image. Right now this all feels like dial-up, with a minimum of $0.035 per image generation ($0.0001 if I just use a LoRA) and a minimum of 12-14 seconds wait on each edit/generation.</p>
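The layer mechanics could be sketched roughly like this, using the seven layer names mentioned in this thread; the merge logic and example contents are invented for illustration:

```python
# Layered-prompt sketch: each layer owns one visual responsibility.
# A reusable style profile pins the base layers; per-scene overrides
# touch only the few layers that change. Layer names follow the
# comment above; merge behavior is my own assumption.
LAYERS = ["environment", "camera", "subject", "composition",
          "light", "colors", "quality"]

def compose_prompt(style_profile: dict, scene_overrides: dict) -> str:
    """Merge a style profile with scene-specific layers, in layer order."""
    unknown = set(scene_overrides) - set(LAYERS)
    if unknown:
        raise ValueError(f"unknown layers: {unknown}")
    merged = {**style_profile, **scene_overrides}
    return ". ".join(merged[layer] for layer in LAYERS if layer in merged)
```

Because the profile and the overrides are separate dicts, swapping in a different style profile re-skins every scene of a story without touching the per-scene layers.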
]]></description><pubDate>Fri, 14 Nov 2025 02:47:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45923310</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45923310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45923310</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>Yes, we are definitely doing the same! For now I’m just familiarizing myself with this space, technically and conceptually. <a href="https://edwin.genego.io/blog" rel="nofollow">https://edwin.genego.io/blog</a></p>
]]></description><pubDate>Thu, 13 Nov 2025 22:00:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45921193</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45921193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45921193</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>How much have you experimented with it? For some stories I may generate 5 image variations of 10-20 different scenes, then spend time writing down what worked and what did not, and run the generation again (this part is mostly for research). It’s certainly advancing my understanding over time and helping me control the output better. But I’m learning that it takes a huge amount of trial and error, so versioning prompts is definitely recommended, especially once you find nuances that work for you.</p>
]]></description><pubDate>Thu, 13 Nov 2025 21:43:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45921006</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45921006</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45921006</guid></item><item><title><![CDATA[New comment by Genego in "Nano Banana can be prompt engineered for nuanced AI image generation"]]></title><description><![CDATA[
<p>I have been generating a few dozen images per day for storyboarding purposes. The more I try to perfect it, the easier it becomes to control the outputs and keep the entire visual story and its characters consistent over a few dozen different scenes, while even controlling the time of day throughout the story. I am currently working with 7-layer prompts to control for environment, camera, subject, composition, light, colors, and overall quality (it might be overkill, but it’s also experimenting).<p>I also created a small editing suite for myself where I can draw bounding boxes on images that aren’t perfect and have them fixed, either with a plain prompt or by feeding the image to Claude and having it write the prompt that fixes the issue for me (as a workflow on the API). It’s been quite a lot of fun figuring out what works, and I am incredibly impressed by where this is all going.<p>Once you do have good storyboards, you can easily do start-to-end GenAI video generation (hopping from scene to scene), bring them to life, and build your own small visual animated universes.</p>
]]></description><pubDate>Thu, 13 Nov 2025 21:11:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45920596</link><dc:creator>Genego</dc:creator><comments>https://news.ycombinator.com/item?id=45920596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45920596</guid></item></channel></rss>