<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mfabbri77</title><link>https://news.ycombinator.com/user?id=mfabbri77</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 09 May 2026 03:03:48 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mfabbri77" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mfabbri77 in "Multi-stroke text effect in CSS"]]></title><description><![CDATA[
<p>Miter join (Safari) vs. round join (Chrome).</p>
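<p>For reference (my sketch, not from the thread): the visible difference comes from how far a miter tip extends at a sharp corner. A minimal sketch of the standard miter-length formula, which is why renderers cap miters at all:</p>

```python
import math

def miter_ratio(angle_deg: float) -> float:
    """Ratio of the miter-join tip length to half the stroke width.

    For two segments meeting at interior angle theta, the miter tip
    extends 1 / sin(theta / 2) times the half stroke width.
    """
    theta = math.radians(angle_deg)
    return 1.0 / math.sin(theta / 2.0)

# A 90-degree corner gives sqrt(2) ~ 1.414; sharper corners blow up,
# which is why SVG's default stroke-miterlimit of 4 falls back to a
# bevel for joins sharper than roughly 29 degrees.
```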
]]></description><pubDate>Wed, 06 May 2026 16:02:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=48037771</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=48037771</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48037771</guid></item><item><title><![CDATA[New comment by mfabbri77 in "A sufficiently detailed spec is code"]]></title><description><![CDATA[
<p>The point was to answer the question: "Can every piece of software be viewed as a permutation of software that has already been developed?"
In my opinion, an email client is a more favorable example than a 3D engine. In fields where it is necessary to differentiate, improve, or innovate at the algorithmic level, and where research and development play a fundamental role, it is not simply a matter of permuting software or assembling existing components more effectively.</p>
]]></description><pubDate>Thu, 19 Mar 2026 07:15:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47435944</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=47435944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47435944</guid></item><item><title><![CDATA[New comment by mfabbri77 in "A sufficiently detailed spec is code"]]></title><description><![CDATA[
<p>In my experience, the further you move away from the user and toward the hardware and the fundamental theoretical algorithms, the less true this becomes.<p>It is very true for an email client, but very untrue for an innovative 3D rendering engine technology (just an example).</p>
]]></description><pubDate>Thu, 19 Mar 2026 06:36:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47435711</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=47435711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47435711</guid></item><item><title><![CDATA[New comment by mfabbri77 in "A sufficiently detailed spec is code"]]></title><description><![CDATA[
<p>Yes. This happens because the training data contains countless SotA "to-do" apps. This argument does not scale well to other types of software.</p>
]]></description><pubDate>Thu, 19 Mar 2026 05:53:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47435455</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=47435455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47435455</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Learning Creative Coding"]]></title><description><![CDATA[
<p>Back in the '80s and '90s, we used to call it the "demoscene".</p>
]]></description><pubDate>Sun, 15 Mar 2026 04:57:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384493</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=47384493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384493</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Is legal the same as legitimate: AI reimplementation and the erosion of copyleft"]]></title><description><![CDATA[
<p>What if someone doesn't declare that the software has been reimplemented using an LLM? Isn't it enough to simply declare that you reimplemented it without using an LLM? Good luck proving otherwise in court...<p>One thing is certain, however: copyleft licenses will disappear. If I can't control the redistribution of my code (through a GPL or similar license), I will choose to develop it closed source.</p>
]]></description><pubDate>Mon, 09 Mar 2026 17:00:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47311773</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=47311773</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47311773</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Building a new Flash"]]></title><description><![CDATA[
<p>OpenVG is just an API by the Khronos Group that was never implemented by hardware vendors on desktop graphics cards (it was specifically created for mobile devices, like OpenGL ES).<p>Btw, there exist several implementations, some with pure CPU rendering (like AmanithVG SRE) and others with GPU backends.</p>
]]></description><pubDate>Thu, 05 Mar 2026 18:14:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47265120</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=47265120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47265120</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Relicensing with AI-Assisted Rewrite"]]></title><description><![CDATA[
<p>If the AI product is recognised as a "derivative work" of a GPL project, then it must itself be licensed under the GPL. Otherwise, it can be licensed under any other license (including closed-source/proprietary binary licenses). This last option is what threatens to kill open source: an author no longer has control over their project. This might work for permissive licenses, but for GPL/AGPL and similar licenses it defeats the main reason they exist: to prevent the code from being taken, modified, and treated as closed source (including possible use in commercial products or SaaS).</p>
]]></description><pubDate>Thu, 05 Mar 2026 10:26:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47259983</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=47259983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47259983</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Relicensing with AI-Assisted Rewrite"]]></title><description><![CDATA[
<p>This has the potential to kill open source, or at least the most restrictive licenses (GPL, AGPL, ...): if a license no longer protects software from unwanted use, the only possible strategy is to make the development closed source.</p>
]]></description><pubDate>Thu, 05 Mar 2026 06:24:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47258218</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=47258218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47258218</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Opening the AWS European Sovereign Cloud"]]></title><description><![CDATA[
<p>This issue can be resolved on the European side by making EU->US data transfers illegal and, if a violation is detected, nationalizing the entire EU subsidiary of the US company. Would this trigger a US-EU war? Certainly, but only the blind cannot see that relations are no longer those between two allies.</p>
]]></description><pubDate>Tue, 20 Jan 2026 06:15:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46688497</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=46688497</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46688497</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Drawing Text Isn't Simple: Benchmarking Console vs. Graphical Rendering"]]></title><description><![CDATA[
<p>I don't know how common it is in fonts, but in generic 2D vector graphics, problems arise from handling self-intersections, i.e., the pixels where they fall. With an SDF rasterizer, how do you handle the pixel where two Bézier curves intersect in a fish-shaped path?
For this reason, more conventional rasterizers with multisampling are often used, or rasterizers that compute pixel coverage analytically, which also requires finding the intersections (sweep line, Bentley-Ottmann).</p>
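<p>To make the self-intersection issue concrete, here is a minimal sketch (mine, not from the thread) of the signed-crossing winding number on a self-intersecting "bowtie" path: the two lobes wind in opposite directions (+1 and -1), information that a single distance value per pixel cannot encode at the crossing:</p>

```python
def winding_number(pt, poly):
    """Signed count of polygon-edge crossings of the horizontal
    ray from pt toward +x (Sunday's crossing rules)."""
    x, y = pt
    w = 0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        is_left = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
        if y0 <= y < y1 and is_left > 0:    # upward crossing right of pt
            w += 1
        elif y1 <= y < y0 and is_left < 0:  # downward crossing right of pt
            w -= 1
    return w

# Self-intersecting "bowtie": two triangular lobes meeting at (1, 1).
bowtie = [(0, 0), (2, 2), (2, 0), (0, 2)]
left_lobe = winding_number((0.5, 1.0), bowtie)    # winds +1
right_lobe = winding_number((1.5, 1.0), bowtie)   # winds -1
```

With a nonzero fill rule both lobes are painted, yet the two regions carry opposite winding; any rasterizer has to decide what happens in the pixel that straddles the crossing.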
]]></description><pubDate>Wed, 12 Nov 2025 04:41:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45896424</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=45896424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45896424</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Rasterizer: A GPU-accelerated 2D vector graphics engine in ~4k LOC"]]></title><description><![CDATA[
<p>I'm always interested in new 2D vector rendering algorithms, so if you make a blog post explaining your approach, with enough detail, I'd be happy to read it!</p>
]]></description><pubDate>Mon, 01 Sep 2025 08:26:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45090725</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=45090725</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45090725</guid></item><item><title><![CDATA[New comment by mfabbri77 in "GPU Prefix Sums: A nearly complete collection"]]></title><description><![CDATA[
<p>At what order of magnitude in the number of elements to be sorted (I'm thinking of the overhead of the GPU setup cost) is the break-even point reached, compared to a pure CPU sort?</p>
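<p>For context (my sketch, not from the linked collection): the work-efficient Blelloch scan that GPU implementations parallelize looks like this when written out serially. Each level of the up-sweep and down-sweep runs in parallel on a GPU, so the break-even question is essentially when those parallel levels amortize the transfer and launch overhead:</p>

```python
def blelloch_exclusive_scan(a):
    """Work-efficient exclusive prefix sum (Blelloch), written serially.

    The inner loops are the per-level parallel steps on a GPU; here
    they just show the data movement. Length must be a power of two.
    """
    x = list(a)
    n = len(x)
    d = 1
    while d < n:                      # up-sweep: build a partial-sum tree
        for i in range(0, n, 2 * d):
            x[i + 2 * d - 1] += x[i + d - 1]
        d *= 2
    x[n - 1] = 0                      # clear the root
    d = n // 2
    while d >= 1:                     # down-sweep: distribute prefixes
        for i in range(0, n, 2 * d):
            t = x[i + d - 1]
            x[i + d - 1] = x[i + 2 * d - 1]
            x[i + 2 * d - 1] += t
        d //= 2
    return x
```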
]]></description><pubDate>Fri, 29 Aug 2025 02:32:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45059421</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=45059421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45059421</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Ask HN: Need C code for drawing graphic primitives to a framebuffer"]]></title><description><![CDATA[
<p>You have to look up "stroking": there are several ways to do it. On the CPU you usually first compute a piecewise-linear approximation of the curve, then offset each segment along its normals, add the caps, obtain a polygon, and draw it.</p>
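<p>The steps above can be sketched roughly like this (my illustration with hypothetical helper names; butt caps only, and no join trimming, which a real stroker must add):</p>

```python
import math

def flatten_quad(p0, p1, p2, steps=16):
    """Piecewise-linear approximation of a quadratic Bezier curve."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        u = 1.0 - t
        pts.append((u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
                    u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1]))
    return pts

def stroke_polyline(pts, width):
    """Offset each segment along its unit normal on both sides, then
    join the two offset chains into one closed stroke outline."""
    half = width / 2.0
    left, right = [], []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        nx, ny = -dy / length, dx / length          # unit normal
        left += [(x0 + nx*half, y0 + ny*half), (x1 + nx*half, y1 + ny*half)]
        right += [(x0 - nx*half, y0 - ny*half), (x1 - nx*half, y1 - ny*half)]
    return left + right[::-1]                       # closed polygon
```

Filling the returned polygon with a standard scanline rasterizer then draws the stroked curve.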
]]></description><pubDate>Fri, 11 Jul 2025 13:32:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44531952</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=44531952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44531952</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Ask HN: Need C code for drawing graphic primitives to a framebuffer"]]></title><description><![CDATA[
<p>You need to look at a 2D vector graphics library, e.g. Skia, Cairo, AGG, NanoVG, Blend2D, OpenVG (and many others, in no particular order).</p>
]]></description><pubDate>Fri, 11 Jul 2025 12:52:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44531564</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=44531564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44531564</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Faster, easier 2D vector rendering [video]"]]></title><description><![CDATA[
<p>You didn't mention one of the biggest sources of 2D vector graphics artifacts: mapping polygon coverage to the alpha channel, which is what virtually all engines do. It is the main reason why we at Mazatech are writing a new version of our engine, AmanithVG, based on a simple idea: draw all the paths (polygons) at once. Well, the idea is simple, the implementation... not so much ;)</p>
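<p>A tiny sketch (mine) of the conflation artifact this causes: two black shapes that exactly abut inside one pixel, each with 50% coverage, composited one after the other via coverage-as-alpha, leave a light seam instead of a solid black pixel:</p>

```python
def over(src_color, src_alpha, dst):
    """Porter-Duff 'over' on a single gray channel (0=black, 1=white)."""
    return src_color * src_alpha + dst * (1.0 - src_alpha)

bg = 1.0                                     # white background
after_first = over(0.0, 0.5, bg)             # first half-covering shape
after_second = over(0.0, 0.5, after_first)   # abutting second shape
# The pixel ends up 25% white, not fully black: the seam artifact.
```

Resolving all path coverages in a single pass per pixel avoids this, because the two 50% coverages sum to full coverage before any blending happens.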
]]></description><pubDate>Tue, 10 Jun 2025 16:33:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44238769</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=44238769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44238769</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Exact Polygonal Filtering: Using Green's Theorem and Clipping for Anti-Aliasing"]]></title><description><![CDATA[
<p>As pointed out by Raphlinus, the moire pattern in the Siemens star isn't such a significant quality indicator for the type of content usually encountered in 2D vector graphics. With the analytical coverage calculation you can have perfect font/text rendering, perfect thin lines/shapes and, by solving all the areas at once, no conflating artifacts.</p>
]]></description><pubDate>Thu, 15 Aug 2024 18:33:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=41259035</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=41259035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41259035</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Exact Polygonal Filtering: Using Green's Theorem and Clipping for Anti-Aliasing"]]></title><description><![CDATA[
<p>In my opinion, if you break down all the polygons in your scene into non-overlapping polygons, then clip them to pixels, calculate the color of each polygon piece (applying all paints, blend modes, etc.) and sum the contributions, in the end that's the best visual quality you can get. That's the idea I'm working on, but it involves doing the decomposition/clip step on the CPU, while the paint/blend summation is done by the GPU.</p>
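<p>The per-pixel clip step can be sketched like this (my illustration, not AmanithVG's actual implementation): Sutherland-Hodgman clipping of a polygon against the pixel square, then the shoelace formula for the exact covered area:</p>

```python
def pixel_coverage(poly, px, py):
    """Exact area of `poly` inside the unit pixel [px,px+1]x[py,py+1]."""
    def clip_halfplane(pts, keep, cross):
        out = []
        for i in range(len(pts)):
            p, q = pts[i - 1], pts[i]
            if keep(q):
                if not keep(p):
                    out.append(cross(p, q))
                out.append(q)
            elif keep(p):
                out.append(cross(p, q))
        return out

    def cross_x(c):  # intersection with the vertical line x = c
        return lambda p, q: (c, p[1] + (c - p[0]) / (q[0] - p[0]) * (q[1] - p[1]))

    def cross_y(c):  # intersection with the horizontal line y = c
        return lambda p, q: (p[0] + (c - p[1]) / (q[1] - p[1]) * (q[0] - p[0]), c)

    pts = poly
    pts = clip_halfplane(pts, lambda p: p[0] >= px,     cross_x(px))
    pts = clip_halfplane(pts, lambda p: p[0] <= px + 1, cross_x(px + 1))
    pts = clip_halfplane(pts, lambda p: p[1] >= py,     cross_y(py))
    pts = clip_halfplane(pts, lambda p: p[1] <= py + 1, cross_y(py + 1))

    area = 0.0                                   # shoelace formula
    for i in range(len(pts)):
        x0, y0 = pts[i - 1]
        x1, y1 = pts[i]
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0
```

This works per polygon piece; the hard part the comment alludes to is producing the non-overlapping pieces in the first place, which needs an intersection sweep.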
]]></description><pubDate>Thu, 15 Aug 2024 18:00:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=41258717</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=41258717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41258717</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Exact Polygonal Filtering: Using Green's Theorem and Clipping for Anti-Aliasing"]]></title><description><![CDATA[
<p>I've been researching this field for 20 years (I'm one of the developers of AmanithVG). Unfortunately, no matter how fast they are made, all the algorithms that analytically decompose areas involve a step to find intersections, and therefore sweep-line approaches that are difficult to parallelize and must be done on the CPU. However, we are working on it for the next AmanithVG rasterizer, so I'm keeping my eyes open for all possible alternatives.</p>
]]></description><pubDate>Thu, 15 Aug 2024 07:22:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=41253776</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=41253776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41253776</guid></item><item><title><![CDATA[New comment by mfabbri77 in "Exact Polygonal Filtering: Using Green's Theorem and Clipping for Anti-Aliasing"]]></title><description><![CDATA[
<p>I am quite convinced that if the goal is the best possible output quality, then the best approach is to analytically compute the non-overlapping areas of each polygon within each pixel, resolving all contributions (areas) together in a single pass per pixel.</p>
]]></description><pubDate>Thu, 15 Aug 2024 07:06:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=41253717</link><dc:creator>mfabbri77</dc:creator><comments>https://news.ycombinator.com/item?id=41253717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41253717</guid></item></channel></rss>