<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: CompleteSkeptic</title><link>https://news.ycombinator.com/user?id=CompleteSkeptic</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 08:19:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=CompleteSkeptic" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by CompleteSkeptic in "GPT-5.5"]]></title><description><![CDATA[
<p>Is this the first time OpenAI has published comparisons to other labs?<p>Seems so to me - see the GPT-5.4[1] and 5.2[2] announcements.<p>Might be a tacit admission of being behind.<p>[1] <a href="https://openai.com/index/introducing-gpt-5-4/" rel="nofollow">https://openai.com/index/introducing-gpt-5-4/</a>
[2] <a href="https://openai.com/index/introducing-gpt-5-2/" rel="nofollow">https://openai.com/index/introducing-gpt-5-2/</a></p>
]]></description><pubDate>Thu, 23 Apr 2026 19:23:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47880398</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=47880398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47880398</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "PP-YOLO Surpasses YOLOv4 – State-of-the-art object detection techniques"]]></title><description><![CDATA[
<p>There is actually some work (<a href="https://arxiv.org/abs/2003.13630" rel="nofollow">https://arxiv.org/abs/2003.13630</a>) claiming that FLOPs are a poor measure of real-world performance, with some of the more recent FLOP-efficient models actually running slower than older ones.</p>
]]></description><pubDate>Tue, 04 Aug 2020 01:29:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=24045563</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=24045563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24045563</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "PP-YOLO Surpasses YOLOv4 – State-of-the-art object detection techniques"]]></title><description><![CDATA[
<p>This isn't directly relevant to PP-YOLO, but I'm surprised Roboflow is still promoting "YOLOv5" - despite that model having no associated paper and not being made by the authors of the previous YOLOs.[1]<p>The ML community has been asking the authors of that model to rename their project[2] because they are basically stealing publicity by making it seem like the next version of YOLO, despite its performance being worse than that of YOLOv4.[3]<p>Roboflow has deflected this in the past by claiming they don't know whether "YOLOv5" is the correct name[4], but by continuing to promote it, they are directly supporting it. In fact, I wouldn't be surprised if their claim of not being affiliated with Ultralytics were false or a half-truth, given that all the top pages about "YOLOv5" were written by Roboflow, including the first official announcement.[5]<p>[1] <a href="https://github.com/AlexeyAB/darknet/issues/5920" rel="nofollow">https://github.com/AlexeyAB/darknet/issues/5920</a><p>[2] <a href="https://github.com/ultralytics/yolov5/issues/2" rel="nofollow">https://github.com/ultralytics/yolov5/issues/2</a><p>[3] <a href="https://github.com/AlexeyAB/darknet/issues/5920#issuecomment-642812152" rel="nofollow">https://github.com/AlexeyAB/darknet/issues/5920#issuecomment...</a><p>[4] <a href="https://blog.roboflow.ai/yolov4-versus-yolov5/" rel="nofollow">https://blog.roboflow.ai/yolov4-versus-yolov5/</a><p>[5] <a href="https://blog.roboflow.ai/yolov5-is-here/" rel="nofollow">https://blog.roboflow.ai/yolov5-is-here/</a></p>
]]></description><pubDate>Tue, 04 Aug 2020 01:26:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=24045546</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=24045546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24045546</guid></item><item><title><![CDATA[Improving Pulse Diversity in the Iterative Setting]]></title><description><![CDATA[
<p>Article URL: <a href="https://medium.com/swlh/improving-pulse-diversity-in-the-iterative-setting-83ce9231dde4">https://medium.com/swlh/improving-pulse-diversity-in-the-iterative-setting-83ce9231dde4</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=24031055">https://news.ycombinator.com/item?id=24031055</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 02 Aug 2020 19:42:10 +0000</pubDate><link>https://medium.com/swlh/improving-pulse-diversity-in-the-iterative-setting-83ce9231dde4</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=24031055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24031055</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "Tensorflow sucks"]]></title><description><![CDATA[
<p>An appropriate quote: "If you can't intelligently argue for both sides of an issue, you don't understand the issue well enough to argue for either."<p>There are many people for whom the declarative paradigm is a huge plus. I would say there are at least two major approaches to running neural networks fast: 1. identify the common big components and make fast versions of those; 2. identify the common small components and make them run fast together.<p>Different libraries have different strengths and weaknesses that match the abstraction level they work at. For example, Caffe is the canonical example of approach 1, which makes writing new kinds of layers much harder than with other libraries, but makes connecting those layers quite easy, and it enables techniques that work layer-wise (such as new kinds of initialization). Approach 2 (TensorFlow's approach) introduces a lot of complexity, but it allows for different kinds of research. For example, because how you combine the low-level operations is decoupled from how those operations are optimized together, you can more easily create efficient versions of new layers without resorting to native code.</p>
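<p>The decoupling in approach 2 can be sketched with a toy example (a hypothetical mini-framework, not real Caffe or TensorFlow code): small ops are recorded symbolically into a graph, and a separate interpreter decides how to run them, which is the seam where an optimizer could fuse ops without the user writing native code.</p><pre><code>
```python
def fused_affine_relu(x, w, b):
    # Approach 1: one hand-optimized "big component" (a whole layer).
    return max(0.0, x * w + b)

# Approach 2: small primitive ops, recorded symbolically as a graph.
def mul(a, b): return ("mul", a, b)
def add(a, b): return ("add", a, b)
def relu(a):   return ("relu", a)

def evaluate(node, env):
    # A tiny interpreter: how ops are combined is decoupled from how
    # they execute, so an optimizer could rewrite the graph here
    # (e.g. fuse mul+add+relu into one kernel) behind the user's back.
    if isinstance(node, str):
        return env[node]          # a named input
    op, *args = node
    vals = [evaluate(a, env) for a in args]
    if op == "mul":
        return vals[0] * vals[1]
    if op == "add":
        return vals[0] + vals[1]
    if op == "relu":
        return max(0.0, vals[0])

# Both approaches compute the same layer; only the abstraction differs.
graph = relu(add(mul("x", "w"), "b"))
result = evaluate(graph, {"x": 2.0, "w": 3.0, "b": -1.0})
```
</code></pre><p>Here both paths compute max(0, 2*3 - 1) = 5.0; the point is that the graph form leaves room for a shared optimizer, at the cost of the extra indirection.</p>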
]]></description><pubDate>Sun, 08 Oct 2017 22:19:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=15430327</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=15430327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15430327</guid></item><item><title><![CDATA[Deep Neural Decision Forests Explained]]></title><description><![CDATA[
<p>Article URL: <a href="http://topos-theory.github.io/deep-neural-decision-forests/">http://topos-theory.github.io/deep-neural-decision-forests/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=10698409">https://news.ycombinator.com/item?id=10698409</a></p>
<p>Points: 10</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 08 Dec 2015 18:44:12 +0000</pubDate><link>http://topos-theory.github.io/deep-neural-decision-forests/</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=10698409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10698409</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "A blog engine written and proven in Coq"]]></title><description><![CDATA[
<p>I think he was referring to the fact that "the request completes in finite time" isn't a tight enough bound.</p>
]]></description><pubDate>Thu, 12 Feb 2015 17:17:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=9039904</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=9039904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9039904</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "Jeff Hawkins on the Limitations of Artificial Neural Networks"]]></title><description><![CDATA[
<p>State-of-the-art results no longer require unsupervised pretraining with autoencoders or RBMs. Back when unsupervised pretraining was more popular, though, top researchers rationalized it as more biologically plausible than standard nets trained with backprop: brains generalize by observing large amounts of data over a lifetime and can then quickly recognize new objects, and since pretrained nets aren't trained for one specific task, the hope was that they would generalize better and be a step closer to general intelligence.</p>
]]></description><pubDate>Sun, 02 Nov 2014 02:50:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=8545429</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=8545429</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8545429</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "Programming Language Network: A Graph of Programming Languages"]]></title><description><![CDATA[
<p>You may be interested in DBpedia, which gets its data from wikipedia but presents it in a structured format (for example, the page for Clojure: <a href="http://dbpedia.org/describe/?url=http://dbpedia.org/resource/Clojure&sid=17176" rel="nofollow">http://dbpedia.org/describe/?url=http://dbpedia.org/resource...</a>), though for this case, it seems the data is already on wikipedia.</p>
]]></description><pubDate>Sun, 02 Nov 2014 02:28:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=8545379</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=8545379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8545379</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "Trouble in Paradise with ClojureScript and React.js"]]></title><description><![CDATA[
<p>Yup! We were about to implement something like that, but we realized Reagent's model of having atoms store the state worked pretty well with javelin's FRP, and once you break down the state into atoms, the cursors aren't needed anymore.</p>
]]></description><pubDate>Mon, 20 Oct 2014 04:44:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=8480731</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=8480731</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8480731</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "Trouble in Paradise with ClojureScript and React.js"]]></title><description><![CDATA[
<p>I wouldn't go so far as to say that Reagent's model is perfect. Definitely pretty good, but we've had some issues that were somewhat non-obvious.<p>If you're worried about it being maintained, there was actually a great discussion of this post on the clojure subreddit (<a href="https://www.reddit.com/r/Clojure/comments/2jq0cu/om_no_trouble_in_paradise_with_clojurescript_and/" rel="nofollow">https://www.reddit.com/r/Clojure/comments/2jq0cu/om_no_troub...</a>) where others mention that Reagent has a community-maintained fork with new features.<p>Out of curiosity though, what kind of helper methods would you be looking for? I found it pretty comprehensive on its own (though I do have some utility methods for some niche uses involving hierarchical atoms).</p>
]]></description><pubDate>Mon, 20 Oct 2014 04:16:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=8480675</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=8480675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8480675</guid></item><item><title><![CDATA[New comment by CompleteSkeptic in "Trouble in Paradise with ClojureScript and React.js"]]></title><description><![CDATA[
<p>If any of my post sounded as if I was complaining about the lack of syntactic sugar, I apologize, as that was not at all my intention. The best practices and syntactic ugliness didn't matter to us at all after the learning curve. We wrote some macros to solve the problem and didn't have to think about it anymore. We used sablono as well, so the syntax was almost the same as Reagent's.<p>The biggest problems we faced with Om were how normal Clojure-y things just didn't work with it, and how the app state had to be kept in a tree instead of a DAG (mostly the latter), which created issues with consistency, performance, or both.<p>> trying to put mutating state into the application state object<p>How you're supposed to use Om is to put all the mutable state into a global atom. I didn't mean putting reference types in the global atom or anything horrible like that, but the problems with cursors being pointers still exist.</p>
]]></description><pubDate>Mon, 20 Oct 2014 04:08:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=8480663</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=8480663</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8480663</guid></item><item><title><![CDATA[Trouble in Paradise with ClojureScript and React.js]]></title><description><![CDATA[
<p>Article URL: <a href="https://diogo149.github.io/2014/10/19/om-no/">https://diogo149.github.io/2014/10/19/om-no/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=8480247">https://news.ycombinator.com/item?id=8480247</a></p>
<p>Points: 28</p>
<p># Comments: 12</p>
]]></description><pubDate>Mon, 20 Oct 2014 01:01:19 +0000</pubDate><link>https://diogo149.github.io/2014/10/19/om-no/</link><dc:creator>CompleteSkeptic</dc:creator><comments>https://news.ycombinator.com/item?id=8480247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8480247</guid></item></channel></rss>