<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: CCHappy</title><link>https://news.ycombinator.com/user?id=CCHappy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 06 May 2026 08:25:51 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=CCHappy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by CCHappy in "Wan Animate: A Unified Model for Video Character Replacement"]]></title><description><![CDATA[
<p>What Is Wan Animate?<p>Wan Animate is an AI video generation system developed by the Wan team. It focuses on three core capabilities:<p>Character Replacement – Keep the motion, rhythm, and background of the original video while swapping in a custom character image.<p>Expression Transfer – Capture facial details and expressions from the source video and accurately map them onto the new character.<p>Style Flexibility – Works across photography, illustration, anime characters, game avatars, and even pets.<p>Built on the Wan2.2-Animate framework, the model combines diffusion techniques with a transformer-based architecture to generate natural, detailed, and stylistically consistent results.<p>Our Web Demo<p>We’ve deployed a unified character replacement demo on wananimate.video. Here’s what you can expect:<p>Browser-first workflow – Just upload a character image and a reference video.<p>No hardware required – No GPUs, no environment setup.<p>Fast results – Within minutes, you’ll receive an animated output.<p>Right now, our version still trails the official implementation in some areas — like edge refinement and lighting consistency in complex scenes — but we’re iterating quickly. Expect smoother, sharper results soon.</p>
]]></description><pubDate>Tue, 23 Sep 2025 09:34:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45344769</link><dc:creator>CCHappy</dc:creator><comments>https://news.ycombinator.com/item?id=45344769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45344769</guid></item><item><title><![CDATA[Wan Animate: A Unified Model for Video Character Replacement]]></title><description><![CDATA[
<p>Article URL: <a href="https://wananimate.video/">https://wananimate.video/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45344768">https://news.ycombinator.com/item?id=45344768</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 23 Sep 2025 09:34:44 +0000</pubDate><link>https://wananimate.video/</link><dc:creator>CCHappy</dc:creator><comments>https://news.ycombinator.com/item?id=45344768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45344768</guid></item><item><title><![CDATA[New comment by CCHappy in "Wan Animate: AI tool that transfers motion from videos to animate any character"]]></title><description><![CDATA[
<p>We’ve deployed a unified character replacement demo on <a href="https://wananimate.video" rel="nofollow">https://wananimate.video</a>. Here’s what you can expect:<p>Browser-first workflow – Just upload a character image and a reference video.<p>No hardware required – No GPUs, no environment setup.<p>Fast results – Within minutes, you’ll receive an animated output.<p>Right now, our version still trails the official implementation in some areas — like edge refinement and lighting consistency in complex scenes — but we’re iterating quickly. Expect smoother, sharper results soon.</p>
]]></description><pubDate>Tue, 23 Sep 2025 09:33:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45344754</link><dc:creator>CCHappy</dc:creator><comments>https://news.ycombinator.com/item?id=45344754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45344754</guid></item></channel></rss>