<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: woolion</title><link>https://news.ycombinator.com/user?id=woolion</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 12:19:06 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=woolion" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by woolion in "Types and Neural Networks"]]></title><description><![CDATA[
<p>Ok, thank you for the info. Do you have any idea when "at some point" might be? I'd love to check it out.</p>
]]></description><pubDate>Wed, 22 Apr 2026 07:07:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47860101</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47860101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47860101</guid></item><item><title><![CDATA[New comment by woolion in "Types and Neural Networks"]]></title><description><![CDATA[
<p>>As Philippe Schnoebelen discovered in 2002 [1], languages cannot reduce the difficulty of program construction or comprehension.<p>That holds from a model-checking point of view; this is about taking a proof-theoretic approach...<p>Your last paragraph is also quite wrong: a machine learning model could very well learn to solve an NP-complete problem, because NP-completeness says nothing about average-case complexity (and we should consider probabilistic complexity classes, so the picture is even more "complex").</p>
]]></description><pubDate>Tue, 21 Apr 2026 18:21:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47852458</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47852458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47852458</guid></item><item><title><![CDATA[New comment by woolion in "Types and Neural Networks"]]></title><description><![CDATA[
<p>Thank you for your reply. FTR, I find the subject very interesting and I hope there will be more work along this line of approach.<p>>The problem with those methods is that they're inference time<p>I agree; I just thought the post was missing some prior art (I'm not affiliated with these papers :-P)<p>What is not clear to me at all is whether this is the draft of a research idea, or whether an implementation is already coming in a later post.<p>It seems to me that such an idea would be workable for a given language with a given type system, but that there would be a black-magic step in training a model that works in a language-agnostic manner. Could you clarify?</p>
]]></description><pubDate>Tue, 21 Apr 2026 12:39:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47847953</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47847953</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47847953</guid></item><item><title><![CDATA[New comment by woolion in "Types and Neural Networks"]]></title><description><![CDATA[
<p>I'm not sure what to make of TFA (I don't have time right now to investigate in detail, but the subject is interesting). It starts by saying you can stop generation as soon as you have an output that can't be completed -- and there are already more advanced techniques that do that. If your language is typed, you can use a "proof tree with a hole" and check whether there is a possible completion of that tree.
References are "Type-Constrained Code Generation with Language Models" and "Statically Contextualizing Large Language Models with Typed Holes".<p>Then it switches to using an encoding that would be more semantic, but I think the argument is a bit flimsy: it compares chess to the plethora of languages that LLMs can spout somewhat-correct code for (which is behind the success of this generally incorrect approach).
What I found more dubious is that it brushes off syntactic differences with "yeah, but they're all semantically equivalent" -- which, it seems to me, is kind of the main problem here; basically any proof is an equivalence of two things, but it can be arbitrarily complicated to see it. If we consider this problem solved, then we can get better things, sure...<p>I think that without, e.g., a Haskell PoC showing great results, these methods will have a hard time getting traction.<p>Please correct any inaccuracies or incomprehension in this comment!</p>
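<p>The "stop generation as soon as no completion exists" idea can be sketched as a prefix filter over candidate tokens. This is a toy grammar of my own, not the algorithm of the cited papers, and every name in it is hypothetical:

```python
# Toy illustration of "reject any prefix that has no valid completion",
# in the spirit of type-constrained decoding. The grammar, token set,
# and function names are all made up for illustration; the cited papers
# work with real type systems, not this mini-grammar.

def completable(prefix):
    """Return True if `prefix` can still be extended into a valid
    expression of the toy grammar E -> NUM | E '+' E, i.e. tokens
    alternate number, '+', number, ... (a trailing '+' is fine,
    since a number can still fill the hole)."""
    expect_number = True
    for tok in prefix:
        if expect_number and tok == "+":
            return False  # operator where a number is required
        if not expect_number and tok != "+":
            return False  # number where an operator is required
        expect_number = not expect_number
    return True

def filter_candidates(prefix, candidates):
    """Keep only candidate next tokens that leave the prefix completable:
    the token-level analogue of pruning proof trees with no fillable hole."""
    return [t for t in candidates if completable(prefix + [t])]
```

For the prefix ["1"], only "+" survives the filter; a real implementation would apply this kind of mask to the model's logits at every decoding step.</p>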
]]></description><pubDate>Tue, 21 Apr 2026 08:51:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47846301</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47846301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47846301</guid></item><item><title><![CDATA[New comment by woolion in "Deezer says 44% of songs uploaded to its platform daily are AI-generated"]]></title><description><![CDATA[
<p>I've had my digital art flagged a few times for various reasons (automatic copyright-infringement and NSFW filters) -- so this is nothing new (in particular, flagged artwork blocked the upload of some artists' songs). What matters is having a reasonable appeal process. In all cases we got an automated approval after appeal, but it can introduce an untimely delay.<p>Honestly, I hope the AI filter will be much better in terms of false positives than the aforementioned ones, if only because it should be easier via statistical methods.</p>
]]></description><pubDate>Mon, 20 Apr 2026 17:22:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47837577</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47837577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47837577</guid></item><item><title><![CDATA[New comment by woolion in "Book review: There Is No Antimemetics Division"]]></title><description><![CDATA[
<p>I had seen this, but all the examples correspond to having an actual, external threat as a result of this knowledge. 
I was thinking more of the Buddhist parable that men don't know when they will die, because only buddhas are able to live with that information.
I guess it's very close to 'malinformation', but that is still about an external actor manipulating what you know with an external goal, rather than something intrinsic to the information.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:01:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680623</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47680623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680623</guid></item><item><title><![CDATA[New comment by woolion in "Book review: There Is No Antimemetics Division"]]></title><description><![CDATA[
<p>That's a good point. I think this one can easily be resolved on a factual level, since hard work is one of the few variables you can actually control. But it is more interesting from an emotional point of view, since in many cases it would be an article of faith with the implicit fear that it might not be true.<p>There are variations of this, such as composition theory in art getting good results based on completely false assumptions, but these tend to fall under epistemic underdetermination.</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:41:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47676188</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47676188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47676188</guid></item><item><title><![CDATA[New comment by woolion in "Book review: There Is No Antimemetics Division"]]></title><description><![CDATA[
<p>I have been thinking a lot about a notion of self-paradoxical knowledge, meaning knowledge that actively makes your reasoning worse.
For example, knowledge of extremely rare diseases causes the mind to over-evaluate their importance by many orders of magnitude (there are many variants of this effect).
Or how attempts to explain some concepts of the object/subject construction tend to use language grounded in the notion of a shared objective reality, which pushes true understanding further away -- in other words, "the tao that can be named is not the tao".<p>I didn't think "There Is No Antimemetics Division" did very well with its premise, but the premise is quite fascinating, and it's the closest I've seen to this concept. Are there other explorations of similar ideas?</p>
]]></description><pubDate>Tue, 07 Apr 2026 09:12:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47672535</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47672535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47672535</guid></item><item><title><![CDATA[New comment by woolion in "Just 'English with Hanzi'"]]></title><description><![CDATA[
<p>There is a similar path with Chinese painting. The language of painting was refined over millennia, but the last two centuries brought an extremely rapid integration of Western influences.
This article is interesting because language is like water to a fish, the invisible medium humans live through. Since art is more 'foreign' and 'superfluous', the changes were more obvious, and there was much more debate over this evolution than in linguistics.<p>I spoke with a painter in the artistic lineage of Shi Guoliang, and he told me he remembered how much of that work could be seen as "Western art painted with a Chinese brush".
I think the criticism was directed more at such painters than at, say, the Lingnan school, which explicitly sought to revitalize Chinese painting through foreign influences, because it goes to the very foundations of the painting -- how perspective and light are handled through the 'scientific' system rather than the elaborate symbolic system of classical painting.</p>
]]></description><pubDate>Sun, 05 Apr 2026 17:59:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47652084</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47652084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47652084</guid></item><item><title><![CDATA[New comment by woolion in "Mathematical methods and human thought in the age of AI"]]></title><description><![CDATA[
<p>I just re-read my comment (too late for edits) and there are a number of typos (including missing "not") that significantly degraded the syntax, but the point kind of came across anyway.</p>
]]></description><pubDate>Tue, 31 Mar 2026 06:48:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47583628</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47583628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47583628</guid></item><item><title><![CDATA[New comment by woolion in "Mathematical methods and human thought in the age of AI"]]></title><description><![CDATA[
<p>> We assert that artificial intelligence is a natural evolution of human tools developed throughout history to facilitate the creation, organization, and dissemination of ideas, and argue that it is paramount that the development and application of AI remain fundamentally human-centered.<p>While this is a noble goal, it seems obvious that this isn't how it usually goes. 
For instance, "free market" is often invoked as dogma to defend companies that are actively harmful to society, as "globalization" might be.
It is framed as an unstoppable force, so any form of opposition is "luddite behavior".
Another example is easier transport and remote communication, which generally broke down the social fabric.
Or social media wreaking havoc on teenagers' minds.
From there, it's easy to see why the technological system might be seen as an inherent evil.
In 1872's Erewhon, Butler already described the technological system as a force that human society could not contain once it tolerated it.
There are already many companies persecuting their employees for not using AI enough, even when the employee's objection is that the quality of the AI's output is not good enough for the work at hand, rather than anything ideological.<p>I'm neither optimistic nor pessimistic about the changes that AI might bring, but hoping for it to become "human-centered" seems almost as optimistic as hoping for "humane wars".</p>
]]></description><pubDate>Mon, 30 Mar 2026 13:42:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47574231</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=47574231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47574231</guid></item><item><title><![CDATA[New comment by woolion in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>The ELIZA program, released in 1966 and one of the first chatbots, gave rise to the "ELIZA effect", where ordinary people would project human qualities onto simple programs. It prompted Joseph Weizenbaum, its author, to write "Computer Power and Human Reason" to try to dispel such errors. I bought a copy for my personal library as a kind of reassuring sanity check.</p>
]]></description><pubDate>Tue, 10 Feb 2026 08:19:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46956771</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46956771</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46956771</guid></item><item><title><![CDATA[New comment by woolion in "Heathrow scraps liquid container limit"]]></title><description><![CDATA[
<p>Flying back from Beijing, I had bought a lot of books. I filled my bags with them, so they were very heavy. When the agent came to check my backpack, he casually grabbed it and fell onto the conveyor belt trying to lift it. He looked at me in shock. "I'm done", I thought. He opened the bag and saw, on top of the books, a box of zongzi the university had given me. He instantly became radiant, gave me a pat on the back, and just indicated the way.</p>
]]></description><pubDate>Tue, 27 Jan 2026 14:43:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46780615</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46780615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46780615</guid></item><item><title><![CDATA[New comment by woolion in "Video Games as Art"]]></title><description><![CDATA[
<p>Let's take the definition of "video-game art" as the art of designing interactive experiences that open themselves up upon mastery.
This is the original definition of what video-games were at the start (Pac-Man, Space Invaders).
Mastery takes effort to learn; the game, to incentivize this effort, rewards the player when they do well, and punishes them when they don't.
The almost immediate nature of the feedback loop makes learning faster than in almost any other human activity.<p>Given the nature of the medium, you can tackle a theme (space invaders), and even a story on top of it.
This is good for critics; they know stories, they know that books are the highest form of art for intellectuals. 
The currency of critics within the system (the media/advertisement/entertainment industry loop) is credentialism -- except for purely independent critics, who have their own platform and exist through a complex bidirectional relationship with their audience.<p>However, the story is almost always at odds with the gameplay.
A story limits the freedom with which the gameplay system can respond to the player, by railroading certain outcomes.
Often, adapting a story implies different scenes that cannot fit into a single game genre, so the result is better suited to a collection of mini-games than to what people generally consider to be a game.
Video-game stories therefore tend towards tropes that avoid such problems, such as the 'big tournament' arc.
Of course, certain genres have much more freedom (RPGs), but even then a definite story means certain characters can't die or have to die, etc., which removes the meaning of player choices.<p>The mastery approach hasn't gone away. But critics hate it; the general philosophy of the industry is inclusivity, which is directly at odds with a competitive ranking of players according to skill. Mastery requires effort, and rewards innate ability -- reflexes, memory, the capacity for mental computation... are all advantages that generally translate directly into in-game advantages. So the critics industry has been relentless in disparaging the games that directly emphasized mastery (arcade designs, the infamous 'God Hand' review) and in elevating what are generally called 'movie-games', which have worked at eliminating these aspects ('The Last of Us', later 'God of War') to let all players experience the story fully without interacting with the gameplay in any meaningful manner.
They had to compromise because of the success of Dark Souls, which brought mastery back to the forefront, but this is where the total incompetence of mainstream critics is truly glaring (see the infamous 'Cuphead' journalist moment). As a result, their critiques are rarely anything more than press releases with a final score based on production value rather than any insight into the depth of game mechanics and systems.<p>I'm surprised not to see Chris Crawford mentioned, as The Art of Computer Game Design (1984) makes the central point of this article at the very beginning, and is a primary source of video-game critique.</p>
]]></description><pubDate>Mon, 26 Jan 2026 08:47:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46763280</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46763280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46763280</guid></item><item><title><![CDATA[New comment by woolion in "Remove Black Color with Shaders"]]></title><description><![CDATA[
<p>It seems the shader-transparent version is badly aliased? The effect is less noticeable on Chrome than on Firefox, but it is still quite visible. That defeats the purpose of vector graphics...<p>It's a nice trick to play around with, but that limits its usefulness.</p>
]]></description><pubDate>Tue, 23 Dec 2025 08:50:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46363613</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46363613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46363613</guid></item><item><title><![CDATA[New comment by woolion in "Richard Stallman on ChatGPT"]]></title><description><![CDATA[
<p>I have the utmost sympathy for the ideals of free software, but I don't think prominently displaying a "What's bad about:" list, including ChatGPT in it, and not making a modicum of effort to sketch out a basic argument does anyone any service. It's barely worth a tweet, which would excuse it as a random blurb of barely coherent thought spurred by the moment. There are a number of serious problems with LLMs; the very poor analogies with neurobiology, and the anthropomorphization, poison the public discourse to a point where most arguments don't even mean anything. The article itself presents LLMs as bullshitters, which is clearly another anthropomorphization, so I don't see how this really addresses those problems.<p>What's bad about: RMS. Not making a decent argument makes your position look unserious.<p>The objection generally made to RMS is that he is 'radically' pro-freedom rather than willing to compromise to get 'better results'. This is something that makes sense, and that he is a beacon for. Arguments like these weaken even that perspective.</p>
]]></description><pubDate>Tue, 09 Dec 2025 11:36:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46203758</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46203758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46203758</guid></item><item><title><![CDATA[New comment by woolion in "I wasted years of my life in crypto"]]></title><description><![CDATA[
<p>"In theory it's a great idea, in practice not so much."<p>I feel that's the lesson anyone who has toyed with libertarian ideals ultimately comes to. It just takes a bit longer for some than for others.
It's also harder to realize if you're making mad bank on it, rather than being one of the idiots who blew their hard-earned money on some technical misunderstanding, scam, or retroactive regulation.</p>
]]></description><pubDate>Mon, 08 Dec 2025 14:25:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46192560</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46192560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46192560</guid></item><item><title><![CDATA[New comment by woolion in "FLUX.2: Frontier Visual Intelligence"]]></title><description><![CDATA[
<p>I'm pretty sure that some model at least advertised that it would work. I also think your example was in the training data at some point at least, but I suspect these styles are kind of pruned when the models are steered towards "aesthetically pleasing" outputs, which are often used as benchmarks. Thanks for the replies, it's quite informative.</p>
]]></description><pubDate>Wed, 26 Nov 2025 21:50:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46062765</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46062765</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46062765</guid></item><item><title><![CDATA[New comment by woolion in "FLUX.2: Frontier Visual Intelligence"]]></title><description><![CDATA[
<p>I didn't go very far with my own benchmarks because my results were just so bad.
But for example, here's a piece of line art with the instruction to color it (I can't remember the prompt, I didn't take notes).<p><a href="https://woolion.art/assets/img/ai/ai_editing.webp" rel="nofollow">https://woolion.art/assets/img/ai/ai_editing.webp</a><p>In order: original, ChatGPT, Flux.<p>Still, you can see that ChatGPT just throws everything out and does not make even a minimal attempt at respecting the style.
Flux is quite bad, but it follows the design so much more closely (although it gets completely confused by it) that it seems that, with a whole lot of work, you could get something out of it.</p>
]]></description><pubDate>Wed, 26 Nov 2025 19:39:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46061506</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46061506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46061506</guid></item><item><title><![CDATA[New comment by woolion in "FLUX.2: Frontier Visual Intelligence"]]></title><description><![CDATA[
<p>Oh, thank you for your reply. We may have different definitions of style and of what editing would mean.<p>If you look for example at "Mermaid Disciplinary Committee", every single image is in a very different style, each of which you can consider the model's default for that specific prompt. It's quite obvious that these styles were 'baked into' the models, and it's not clear how much you can steer towards a specific style.
If you look at "The Yarrctic Circle", a lot more models default to a kind of "generic concept art" style (the "by greg rutkowski" meme), but even then I would classify the results as at least 5 distinct styles. So for me this benchmark is not checking style at all, unless you consider style to be just around 4 categories (cartoon, anime, realistic, painterly).<p>Regarding image editing, I did my own tests at the first release of the Flux tools, and found it almost impossible to get decent results in some specific styles, specifically cartoon and concept-art styles. I think the tools focus on what imaginary marketing people would want (like "put this can of sugary beverage into an idyllic scene") rather than such use cases. So an edit like "color this" would just be terrible, and certainly unusable.</p>
]]></description><pubDate>Wed, 26 Nov 2025 19:24:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46061366</link><dc:creator>woolion</dc:creator><comments>https://news.ycombinator.com/item?id=46061366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46061366</guid></item></channel></rss>