<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vinn124</title><link>https://news.ycombinator.com/user?id=vinn124</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 11:54:48 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vinn124" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vinn124 in "Artificial Intelligence Still Isn’t All That Smart"]]></title><description><![CDATA[
<p>for a good introduction to order and complexity (including intelligence) arising from nothing, read "the origins of order" by stuart kauffman.</p>
]]></description><pubDate>Sat, 18 Aug 2018 12:48:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=17788222</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17788222</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17788222</guid></item><item><title><![CDATA[New comment by vinn124 in "How the Fleece Vest Became the New Corporate Uniform"]]></title><description><![CDATA[
<p>who cares?</p>
]]></description><pubDate>Wed, 25 Jul 2018 03:37:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=17606754</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17606754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17606754</guid></item><item><title><![CDATA[New comment by vinn124 in "Reinforcement Learning from scratch"]]></title><description><![CDATA[
<p>> At the core RL is just updating a table of values, and then using function approximation (aka, machine learning) for more complex cases.<p>this seems to be a common assertion about ml. other refrains include "ml is just matrix multiplication" and "ml is just affine transformations followed by nonlinearity".<p>while technically true, it is an unhelpful comment because it doesn't shed any light on the salient questions, such as:<p>* how do you reduce bias/variance, given that you're sampling a minuscule slice of the state/action space?
* how do you construct a value function (or something like it) in a sparse-reward environment with possibly thousands of time steps?
* how do you know your policy network is exploring new states?<p>it is the equivalent of the "how to draw an owl" tutorial: draw a few circles, then draw the fucking owl.</p>
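<p>to make "just updating a table of values" concrete, here is a minimal tabular q-learning backup. the states, actions, and hyperparameters are toy placeholders, not from any real environment:

```python
# minimal tabular q-learning backup: the "table of values" at the core of RL.
# states, actions, and hyperparameters below are toy placeholders.
ALPHA, GAMMA = 0.1, 0.99   # learning rate, discount factor

def q_update(q, state, action, reward, next_state, actions):
    """apply one bellman backup to the q-table (a plain dict)."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

q = {}
q_update(q, "s0", "left", 1.0, "s1", ["left", "right"])
# q[("s0", "left")] is now 0.1 * (1.0 + 0.99 * 0.0 - 0.0) = 0.1
```

everything interesting (exploration, sparse rewards, generalizing beyond the visited cells) happens outside these ten lines, which is exactly the owl-drawing problem.</p>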
]]></description><pubDate>Thu, 07 Jun 2018 23:38:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=17261329</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17261329</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17261329</guid></item><item><title><![CDATA[New comment by vinn124 in "Microsoft Is Said to Have Agreed to Acquire GitHub"]]></title><description><![CDATA[
<p>particularly when most microsoft investors value microsoft stock for its earnings potential, as opposed to growth/revenue potential.</p>
]]></description><pubDate>Sun, 03 Jun 2018 20:30:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=17221664</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17221664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17221664</guid></item><item><title><![CDATA[New comment by vinn124 in "Horovod: Distributed Training Framework for TensorFlow, Keras, and PyTorch"]]></title><description><![CDATA[
<p>> Increased accuracy per unit wallclock time is what you want.<p>yah, especially for a framework for distributed learning!</p>
]]></description><pubDate>Sun, 03 Jun 2018 12:05:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=17219160</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17219160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17219160</guid></item><item><title><![CDATA[New comment by vinn124 in "Deep learning: a critical appraisal"]]></title><description><![CDATA[
<p>> 3) Test that the solution works on toy examples, like MNIST, simple block worlds, simulated data, etc.<p>you're right: mnist, imagenet, etc. are toy examples that do not extend into the real world. but the point of <i>reproducible</i> research is to experiment on agreed-upon, existing benchmarks.</p>
]]></description><pubDate>Sun, 03 Jun 2018 10:53:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=17218972</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17218972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17218972</guid></item><item><title><![CDATA[New comment by vinn124 in "Horovod: Distributed Training Framework for TensorFlow, Keras, and PyTorch"]]></title><description><![CDATA[
<p>> What are the use cases for adding yet another layer to the stack?<p>in my limited experience, horovod is most useful when you're running large clusters of workers and parameter servers. in those situations, you typically have to find the right worker/parameter-server balance by hand (otherwise you run into blocking or network saturation issues). horovod sidesteps this with its ring allreduce implementation.<p>having said all of that, i'm sticking with distributed tf for now.</p>
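<p>for intuition, the ring allreduce communication pattern can be simulated in a single process. this is a toy sketch of the algorithm, not horovod's actual MPI/NCCL code; the key property is that each worker sends roughly 2x its data regardless of cluster size:

```python
import copy

def ring_allreduce(data):
    """toy single-process ring allreduce.

    data[w][c] is worker w's c-th chunk (a number). after n-1
    scatter-reduce steps and n-1 all-gather steps, every worker
    holds the elementwise sum across all workers.
    """
    n = len(data)
    # scatter-reduce: chunk sums accumulate around the ring, so that
    # worker w ends up owning the fully reduced chunk (w + 1) % n
    for step in range(n - 1):
        prev = copy.deepcopy(data)          # values as of the step start
        for w in range(n):
            src = (w - 1) % n
            c = (src - step) % n
            data[w][c] += prev[src][c]
    # all-gather: circulate each fully reduced chunk around the ring
    for step in range(n - 1):
        prev = copy.deepcopy(data)
        for w in range(n):
            src = (w - 1) % n
            c = (src + 1 - step) % n
            data[w][c] = prev[src][c]
    return data

# three workers, each holding three chunks of a gradient vector
workers = ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# every worker now holds [12, 15, 18]
```

because each worker only ever talks to its ring neighbor, there is no central parameter server to saturate, which is why the manual worker/ps balancing act goes away.</p>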
]]></description><pubDate>Sun, 03 Jun 2018 10:48:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=17218954</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17218954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17218954</guid></item><item><title><![CDATA[New comment by vinn124 in "A Course in Machine Learning"]]></title><description><![CDATA[
<p>> Usually you throw everything and see what sticks.<p>most practitioners start with the simplest possible learner, then gradually and thoughtfully increase model complexity while paying attention to bias/variance. this is far from a "kitchen sink" approach.</p>
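<p>a toy sketch of that workflow, using 1-d k-nearest-neighbor regression, where smaller k means higher model complexity. the data and the complexity knob are made up for illustration:

```python
# sketch of "start simple, add complexity carefully" with 1-d knn regression.
# large k = simple/high-bias model; k = 1 = most complex/high-variance model.

def knn_predict(train, x, k):
    """average the targets of the k training points nearest to x."""
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in neighbors) / k

def mse(train, data, k):
    """mean squared error of the k-nn model on a dataset."""
    return sum((knn_predict(train, x, k) - y) ** 2 for x, y in data) / len(data)

train = [(x, x * x) for x in range(10)]              # y = x^2, no noise
val = [(x + 0.5, (x + 0.5) ** 2) for x in range(9)]  # held-out points

# sweep from simplest (large k) toward complex, watching both errors
for k in (9, 5, 3, 1):
    print(k, round(mse(train, train, k), 2), round(mse(train, val, k), 2))
```

the decision to move to the next k (or to a different model class entirely) is driven by how the train and validation errors move, not by throwing everything at the problem at once.</p>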
]]></description><pubDate>Sat, 02 Jun 2018 17:10:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=17215403</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17215403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17215403</guid></item><item><title><![CDATA[New comment by vinn124 in "Empiricism and the limits of gradient descent"]]></title><description><![CDATA[
<p>> Many losses which don't seem differentiable can be reformulated as such...<p>agreed, especially with policy gradients.<p>> If the dimensionality is small, second-order methods (or approximations thereof) can do dramatically better yet.<p>i have not seen second-order methods used in practice, presumably because of the memory cost of storing and inverting the hessian. can you point me to examples?</p>
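<p>for reference, the kind of second-order update being discussed, written out in one dimension where the hessian is just a scalar (a toy sketch, not production code):

```python
# 1-d sketch of a second-order (newton) update on f(x) = x**4.
# in d dimensions the "hessian" below becomes a d x d matrix that must be
# stored and inverted, which is where the memory cost bites at scale.

def newton_step(x, grad, hess):
    return x - grad(x) / hess(x)   # x - H^-1 g, in one dimension

f_grad = lambda x: 4 * x ** 3      # f'(x)
f_hess = lambda x: 12 * x ** 2     # f''(x)

x = 3.0
for _ in range(20):
    x = newton_step(x, f_grad, f_hess)
# x contracts toward the minimum at 0 by a factor of 2/3 per step
```

</p>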
]]></description><pubDate>Mon, 28 May 2018 13:24:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=17172249</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17172249</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17172249</guid></item><item><title><![CDATA[Ai and compute (since 2012)]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.openai.com/ai-and-compute/">https://blog.openai.com/ai-and-compute/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=17166932">https://news.ycombinator.com/item?id=17166932</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 27 May 2018 14:09:13 +0000</pubDate><link>https://blog.openai.com/ai-and-compute/</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=17166932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17166932</guid></item><item><title><![CDATA[New comment by vinn124 in "Ask HN: Building a game for AI Research"]]></title><description><![CDATA[
<p>not exactly what you asked for, but if you're looking for a gentle academic introduction to the intersection of ai and games, i recommend "AI Researchers, Video Games Are Your Friends!" (<a href="https://arxiv.org/abs/1612.01608" rel="nofollow">https://arxiv.org/abs/1612.01608</a>)</p>
]]></description><pubDate>Fri, 04 May 2018 16:43:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=16995969</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16995969</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16995969</guid></item><item><title><![CDATA[New comment by vinn124 in "Intel's 10nm Is Broken, Delayed Until 2019"]]></title><description><![CDATA[
<p>the quality and precision of this answer is why i read hn every day.</p>
]]></description><pubDate>Fri, 27 Apr 2018 12:24:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=16940248</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16940248</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16940248</guid></item><item><title><![CDATA[New comment by vinn124 in "Evolution Is the New Deep Learning"]]></title><description><![CDATA[
<p>> Why should we expect there to be any mathematical foundation to this stuff?<p>i would be surprised if "this stuff" were an exception to the unreasonable effectiveness of mathematics. mathematics underpins virtually every observed phenomenon, spanning theoretical physics, computer science, and economics. in fact, the mathematical structure of a physical theory often points the way to further advances in that theory, and even to empirical predictions.<p>to expect that "this stuff" has no mathematical foundation is a fantastically naive view.</p>
]]></description><pubDate>Sat, 24 Mar 2018 23:28:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=16669652</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16669652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16669652</guid></item><item><title><![CDATA[New comment by vinn124 in "Deciphering China’s AI Dream"]]></title><description><![CDATA[
<p>you've missed the point.<p>the point is: anything complex can be dismissed as "just x, y, z" if you don't appreciate the massive body of work behind it.<p>i made that point because OP observed that "ml is just affine transformations", or something to that effect. yes, that's one way to frame it - if you're okay with overlooking roughly 30 years of research.</p>
]]></description><pubDate>Tue, 20 Mar 2018 01:45:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=16625288</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16625288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16625288</guid></item><item><title><![CDATA[New comment by vinn124 in "Sierra Leone just ran the first blockchain-based election"]]></title><description><![CDATA[
<p>> Then my next question will be how can I trust something that I don't understand.<p>i've always found this mentality silly. do you understand aerodynamic principles? the laws of electromagnetism? mendelian genetics? information theory?</p>
]]></description><pubDate>Sun, 18 Mar 2018 21:33:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=16614505</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16614505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16614505</guid></item><item><title><![CDATA[New comment by vinn124 in "Will GDPR Make Machine Learning Illegal?"]]></title><description><![CDATA[
<p>how does this solve anything? if a simple decision tree could predict the outputs of more complex deep nets, why not use the decision tree in the first place? and what do you do when a decision tree isn't powerful enough, as in the case of many interesting problems such as speech and computer vision?</p>
]]></description><pubDate>Sun, 18 Mar 2018 21:26:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=16614465</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16614465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16614465</guid></item><item><title><![CDATA[New comment by vinn124 in "No CEO should earn 1,000 times more than a regular employee"]]></title><description><![CDATA[
<p>it was the ability of steve jobs to create an organizing principle (and its corresponding organizational structure) that deserves credit for apple's success.</p>
]]></description><pubDate>Sun, 18 Mar 2018 21:12:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=16614366</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16614366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16614366</guid></item><item><title><![CDATA[New comment by vinn124 in "Deciphering China’s AI Dream"]]></title><description><![CDATA[
<p>> It's just alternating layers of affine transformations and nonlinearity with lots of tricks and improved routing.<p>and a computation is just 0s and 1s, with lots of if/then statements.</p>
]]></description><pubDate>Sun, 18 Mar 2018 21:08:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=16614349</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16614349</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16614349</guid></item><item><title><![CDATA[New comment by vinn124 in "Learn with Google AI"]]></title><description><![CDATA[
<p>pytorch, not tf, seems to be winning the hearts and minds of ml researchers. then again, things change quickly.</p>
]]></description><pubDate>Thu, 01 Mar 2018 14:21:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=16492174</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16492174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16492174</guid></item><item><title><![CDATA[Programming for Computations: An Introduction to Numerical Simulations]]></title><description><![CDATA[
<p>Article URL: <a href="http://hplgit.github.io/prog4comp/doc/pub/p4c-bootstrap-Python.html">http://hplgit.github.io/prog4comp/doc/pub/p4c-bootstrap-Python.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=16222635">https://news.ycombinator.com/item?id=16222635</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 24 Jan 2018 13:57:24 +0000</pubDate><link>http://hplgit.github.io/prog4comp/doc/pub/p4c-bootstrap-Python.html</link><dc:creator>vinn124</dc:creator><comments>https://news.ycombinator.com/item?id=16222635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16222635</guid></item></channel></rss>