<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Dn_Ab</title><link>https://news.ycombinator.com/user?id=Dn_Ab</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 00:20:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Dn_Ab" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Dn_Ab in "Why you should learn F#"]]></title><description><![CDATA[
<p>As others have said, F#'s interesting language features are computation expressions, active patterns, units of measure and type providers. The library, platform and ecosystem benefits are gravy. Though subjective, the syntax is clean too, sitting somewhere between an ML and Python.<p>Something that no one has mentioned yet is that F# is now among the fastest functional-first programming languages. At least according to (take with a grain of salt) benchmarks like [1] and <a href="https://www.techempower.com/benchmarks/" rel="nofollow">https://www.techempower.com/benchmarks/</a><p>[1] <a href="https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/fsharpcore-java.html" rel="nofollow">https://benchmarksgame-team.pages.debian.net/benchmarksgame/...</a></p>
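To make two of those features concrete, here is a toy sketch of my own (not from the article): an active pattern lets you name an ad-hoc classification and match on it, and units of measure are checked at compile time, then erased at runtime.

```fsharp
// Illustrative only: an active pattern names a classification,
// which can then be used directly in pattern matches.
let (|Even|Odd|) n = if n % 2 = 0 then Even else Odd

let describe n =
    match n with
    | Even -> sprintf "%d is even" n
    | Odd  -> sprintf "%d is odd" n

// Units of measure: dimension errors are caught at compile time.
[<Measure>] type m
[<Measure>] type s
let speed : float<m/s> = 10.0<m> / 2.0<s>
```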
]]></description><pubDate>Tue, 18 Dec 2018 21:17:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=18710659</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=18710659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=18710659</guid></item><item><title><![CDATA[New comment by Dn_Ab in "What is the difference between deep learning and usual machine learning?"]]></title><description><![CDATA[
<p>I did not downvote (and you're clear now) but your post is not a relevant argument. The determinism that the SFWT (Strong Free Will Theorem) argues against is that of certain hidden variable theories of quantum mechanics. It states that if the humans are free to choose particular configurations for an experiment measuring this or that spin, then, bounded by relativity and experimentally verified aspects of quantum mechanics, the behaviors of the particles cannot be dependent on the past history of the universe. The main characters are the particles; people are incidental.<p>> "Our argument combines the well-known consequence of relativity theory, that the time order of space-like separated events is not absolute, with the EPR paradox discovered by Einstein, Podolsky, and Rosen in 1935, and the Kochen-Specker Paradox of 1967"<p>So as far as I can tell, it takes for granted the humans' ability to choose the configurations freely, which, though suspect in and of itself, doesn't matter so much to their argument, as it's not really an argument for free will; it's a discussion of how inherent non-determinism is to quantum mechanics.<p>> "To be precise, we mean that the choice an experimenter makes is not a function of the past."<p>> "We have supposed that the experimenters’ choices of directions from the Peres configuration are totally free and independent."<p>> "It is the experimenters’ free will that allows the free and independent choices of x, y, z, and w ."<p>It is actually, if anything, in favor of no distinction between humans and computers (more precisely, it is not dependent on humans, only a "free chooser"), as they argue that though the humans can be replaced by pseudo-random number generators, the generators need to be chosen by something with "free choice" so as to escape objections by pedants that the PRNG's path was set at the beginning of time.<p>> The humans who choose x, y, z, and w may of course be replaced by a computer program containing a pseudo-random number generator.<p>> 
"However, as we remark in [1], free will would still be needed to choose the random number generator, since a determined determinist could maintain that this choice was fixed from the dawn of time."<p>There is nothing whatsoever in the paper that stops an AI from having whatever ability to choose freely humans have. The way you're using determinism is more akin to precision and reliability—the human brain has tolerances but it too requires some amount of reliability to function correctly, even if not as much as computers do. In performing its tasks, though the brain is tolerant to noise and stochasticity, it still requires that those tasks happen in a very specific way. Asides, the paper is not an argument for randomness or stochasticity.<p>> ” In the present state of knowledge, it is certainly beyond our capabilities to understand the connection between the free decisions of particles and humans, but the free will of neither of these is accounted for by mere randomness."</p>
]]></description><pubDate>Sun, 05 Jun 2016 22:14:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=11843426</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=11843426</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11843426</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Could a neuroscientist understand a microprocessor?"]]></title><description><![CDATA[
<p>Wow, your idea heads remarkably in the same general direction as the sparse distributed memory described at <a href="https://en.wikipedia.org/wiki/Sparse_distributed_memory" rel="nofollow">https://en.wikipedia.org/wiki/Sparse_distributed_memory</a>. Excellent!</p>
]]></description><pubDate>Thu, 26 May 2016 22:41:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=11782269</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=11782269</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11782269</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Comparing a recurrent neural network with a Markov chain"]]></title><description><![CDATA[
<p>One can view RNNs as a sort of generalization of Markov chains. RNNs have the advantages of memory and context tracking, and they are not limited to learning patterns of some specific length. RNNs can apply these advantages to learn subtleties of grammar, balanced parentheses, the proper use of punctuation and other things that a Markov chain might never learn (and certainly not memory-efficiently). For any given piece of text, RNNs can be said to have gotten closer to understanding what was consumed.<p>The other question is: are those difficult-to-learn things truly worth the cost of training and running an RNN? If a fast and simple Markov chain serves, as is likely the case in practical settings, then it is better to go with the Markov chain. The RNN will still make obvious mistakes, all while correctly using subtle rules that trouble even humans. Unfortunately, this combination is exactly the kind of thing that will leave observers less than impressed: "Yes I know it rambles insensibly but look, it uses punctuation far better than your average forum dweller!" Alas, anyone who has gone through the trouble of making a Gouraud-shaded triangle spin in Mode X and proudly showing their childhood friends can explain just what sort of reaction to expect.<p>Eh, so, the moral here is: pay attention to cost effectiveness and don't make things any more complicated than they need to be.<p>Yoav Goldberg treats much the same topic as this blog post, but with far more detail and attention to subtlety, here: <a href="http://nbviewer.jupyter.org/gist/yoavg/d76121dfde2618422139" rel="nofollow">http://nbviewer.jupyter.org/gist/yoavg/d76121dfde2618422139</a></p>
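For intuition, an order-1 character Markov chain fits in a few lines (my own illustration, not the blog post's code). Note that the model only ever conditions on a single previous character, which is exactly why longer-range structure like balanced parentheses is out of its reach:

```fsharp
// Count (previous char -> next char) transitions; sampling from these
// counts gives the classic Markov text generator. The one-character
// window is the fixed-length limitation that an RNN escapes.
let transitionCounts (text: string) =
    text
    |> Seq.pairwise                 // successive (prev, next) character pairs
    |> Seq.groupBy fst
    |> Seq.map (fun (prev, pairs) ->
        prev, pairs |> Seq.countBy snd |> Map.ofSeq)
    |> Map.ofSeq
```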
]]></description><pubDate>Sat, 27 Feb 2016 19:51:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=11188180</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=11188180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11188180</guid></item><item><title><![CDATA[New comment by Dn_Ab in "The New Atomic Age We Need"]]></title><description><![CDATA[
<p>It's not a report, it's an almost 400-page book. He doesn't compare random energies; instead he looks at the daily power budget each energy source could provide per person under reasonably generous conditions. The very generous 20 kWh/d of power per person is the key point. 6 m/s is already high, with few places reaching such speeds consistently. And almost no one will reach double that, so you can look at an optimal 17 W/m^2 for wind. <a href="http://web.stanford.edu/group/efmh/winds/global_winds.html" rel="nofollow">http://web.stanford.edu/group/efmh/winds/global_winds.html</a><p>In chapter 25, he acknowledges that while the cost of photovoltaics will fall, he does not see it doing so on a timeline useful for getting everything deployed by a ~2050 deadline. Economically speaking, carpeting deserts with concentrating collectors will be the cheaper of the solar options. The book is careful about doing all the math, citing all its sources and carefully explaining the scenarios it models. It is a very good book[+].<p>But cost is not the only issue—even as prices fall, there is still the problem of land use area. Efficiencies aren't going to pass 30% (without going to much more expensive materials) and for mass production, we can halve that; cheap as panels may someday become, places with high population densities (on top of seasonal variations/not being near the equator) are going to have trouble meeting their needs. Especially if they don't want to get rid of their curling irons, hair/clothes dryers, toasters and electric stoves/kettles. But panels/turbines aren't the whole picture.<p>Already today, panels account for only a fraction of the cost of a solar installation. Ideally, you want an MPPT controller. You might need voltage regulators, and you'll need a rack for the panel and batteries, an appropriately sized inverter, wiring and installation. Batteries: to save more money long term, you want to oversize them so you rarely hit a low depth of discharge. 
But more batteries means more panels. You also want enough batteries that you can wait out ~4 days of low light (speaking from experience, on cloudy days you can go the entire day at ~13% of typical amp output). Even those at the equator will only get ~6 good hours of sunlight (~8 hours for an appreciable amount), so even in the best case, 12 hours of storage per person is not going to cut it. Solar is great but it's no panacea. And the math doesn't work out for chemical energy storage. Molten salt storage and compressed air look more logical at the grid level, but even they won't be sufficient.<p>That said, Mr. Thiel is also incorrect to place nuclear in opposition to renewables. Renewables will be in addition to nuclear [-]. As will looking into more DC appliances, more HVDC (and working out circuit breakers for it), optimal manufacturing layouts such that 'waste output' can be redirected to where it is needed, more energy-efficient devices, energy routing algorithms (and a global grid of superconducting HVDC while we're at it—it seems far-fetched but is still at a much higher technology readiness level than fusion), better city planning, climate control with geothermal heat pumps, more material reclamation and recycling, nuclear waste as fuel, carbon capture, extracting CO2 from the ocean for fuel and a cultural move away from an over-consuming disposable society.<p>[+] I am biased in that I already knew the author from one of the best free books on information theory and machine learning. Anyone interested in the link between learning, energy and thermodynamics should see this book as a starting point. <a href="http://www.inference.phy.cam.ac.uk/itprnn/book.pdf" rel="nofollow">http://www.inference.phy.cam.ac.uk/itprnn/book.pdf</a><p>[-] Ch. 24 of sewtha.pdf goes into numeric, data-backed detail on why most build-out, waste and cost arguments against nuclear are weak. 
Personally, I think at best, we only have a couple hundred more years where we can all be justifiably irrationally paranoid over Nuclear. We should have DNA repair down by then.</p>
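As a back-of-envelope check on those figures (my own arithmetic, using the numbers quoted above rather than the book's tables), F#'s units of measure keep the dimensions honest:

```fsharp
// 20 kWh/day per person against 17 W/m^2 of optimal wind power density.
[<Measure>] type W
[<Measure>] type h
[<Measure>] type m2

let dailyBudget   = 20000.0<W h>              // the generous 20 kWh/d budget
let avgPower      = dailyBudget / 24.0<h>     // ~833 W of continuous demand
let windDensity   = 17.0<W/m2>                // the optimal figure quoted above
let areaPerPerson = avgPower / windDensity    // ~49 m^2 of wind-farm land per person
```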
]]></description><pubDate>Sat, 28 Nov 2015 23:46:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=10643040</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=10643040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10643040</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Alchemy – Open Source AI"]]></title><description><![CDATA[
<p>Ah, I was confused for a second—I'd thought Markov was a library, but you meant the Markov assumption—the topics are loosely related but orthogonal. Your excellent-looking library deals with reinforcement learning agents that model environment/agent interactions as a (partially observable) Markov decision process, whereas the Alchemy library combines FOL with network representations of particular probability distributions (satisfying certain Markov properties) to perform inference.<p>More pertinent to your post, Sutton's working on an updated RL book here: <a href="http://people.inf.elte.hu/lorincz/Files/RL_2006/SuttonBook.pdf" rel="nofollow">http://people.inf.elte.hu/lorincz/Files/RL_2006/SuttonBook.p...</a><p>If you have the time, Chapter 15 (pdf pg 273) of the above link is a fascinating read. In particular, TD-Gammon had already achieved impressive results using NNs in the early 90s, reaching world-class levels in Backgammon with zero specialized knowledge.</p>
]]></description><pubDate>Fri, 27 Nov 2015 12:40:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=10637015</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=10637015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10637015</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Seizures from Solving Sudoku Puzzles"]]></title><description><![CDATA[
<p>This is a medical case study, however, not a journalistic piece, and you two are being at least a little unfair, I think. Since seizures and sudoku are not a common combination, upon seeing the title I assumed it was something conditional. This sort of Crucifix Glitch—ahem, environmental epilepsy—is also very uncommon and usually genetic, which makes this all the more interesting.<p>Here, it seems inhibitory circuits in a section of the right parietal lobe were damaged; without dampening, as with any feedback system, the system quickly goes out of whack. What's interesting is that in this patient, the only activity that seems to generate a pattern resulting in such over-excitation is playing sudoku. But surely that's not the only visuospatial task he partakes in, so why? All we're left with is: "Our patient stopped solving sudoku puzzles and has been seizure free for more than 5 years".</p>
]]></description><pubDate>Tue, 20 Oct 2015 05:30:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=10417571</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=10417571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10417571</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Medical Breakthrough in Spinal Cord Injuries Was Made by a Computer Program"]]></title><description><![CDATA[
<p>I agree that this wasn't done by the computer (did computers uncover the Higgs boson?) but I also do not believe humans can take most of the credit: this was the result of a man-machine team-up, and trying to disentangle credit assignment is not a worthwhile activity. Roughly, and from a quick reading of a paper thickly frosted with jargon I am unfamiliar with, the method works by creating networks—which highlight key relationships—for visualization, by searching for stable clusters in a reduced-dimensionality space of the variables.<p>Humans are there to explore the visualizations, interpret the network structures and understand the clusters and variables. The machines are intelligent too; they do the heavy work of comparing large numbers of points in a high-dimensional space, factorization and searching for a way to express the data in a manner that makes it easier to uncover promising research directions and hypotheses.<p>Scanning this, it seems the most valuable contributions are their network visualization and exploratory tools. I think they should be proud of those and see no need to stretch so mightily to connect this to stronger AI. As Vinge notes, "I am suggesting that we recognize that in network and interface research there is something as profound (and potentially wild) as Artificial Intelligence."<p><a href="http://www.nature.com/ncomms/2015/151014/ncomms9581/full/ncomms9581.html" rel="nofollow">http://www.nature.com/ncomms/2015/151014/ncomms9581/full/nco...</a></p>
]]></description><pubDate>Thu, 15 Oct 2015 18:51:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=10395152</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=10395152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10395152</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Extracting Structured Data from Recipes Using Conditional Random Fields"]]></title><description><![CDATA[
<p>All the things you mentioned (plus e.g. Bayesian networks and restricted Boltzmann machines) are examples of graphical models. You can roughly think of (linear-chain) CRFs as being to HMMs as logistic regression is to Naive Bayes. HMMs and Naive Bayes learn a joint probability distribution over the data, while logistic regression and CRFs fit conditional probabilities.<p>If none of that makes sense then, basically, in general and with more data, the CRF (or discriminative classifier) will tend to make better predictions because it doesn't try to <i>directly</i> model complicated things that don't really matter for prediction anyway. Because of this it can use richer features without having to worry about how such and such relates to this or that. All this ends up making discriminative classifiers more robust when model assumptions are violated, because they don't sacrifice as much to remain tractable (or rather, the trade-off they make tends not to matter as much when prediction accuracy is your main concern).<p>So in short: you use an HMM instead of a Markov chain when the sequence you're trying to predict is not visible. Say you want to predict parts of speech but only have access to words: you use the visible sequence of words to infer the hidden sequence of part-of-speech labels. You use CRFs instead of HMMs because they tend to make better predictions while remaining tractable. The downside is that discriminative classifiers will not necessarily learn the most meaningful decision boundaries; this starts to matter when you want to move beyond just prediction.</p>
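The generative half of that split can be caricatured in a few lines (a toy sketch of my own, not a real HMM or CRF): Naive Bayes multiplies a class prior by per-word likelihoods, i.e. it models the joint P(label, words), whereas a discriminative model would fit P(label | words) directly.

```fsharp
// Toy generative scorer: prior * product of per-word likelihoods.
// A discriminative classifier skips modelling the words themselves,
// which is why it can afford richer, overlapping features.
let naiveBayesScore (prior: float) (likelihood: string -> float) words =
    words |> List.fold (fun acc w -> acc * likelihood w) prior

// With a 0.5 prior and a flat 0.1 likelihood per word,
// a two-word document scores 0.5 * 0.1 * 0.1 = 0.005.
```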
]]></description><pubDate>Mon, 21 Sep 2015 10:10:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=10251333</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=10251333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10251333</guid></item><item><title><![CDATA[New comment by Dn_Ab in "A new Stephen Hawking presentation about black holes"]]></title><description><![CDATA[
<p>Time reversibility exists in quantum mechanics because observables are self-adjoint operators and closed systems evolve unitarily. In simpler terms, you can think of it as the requirement that maps preserve distances and are easily invertible. We need this so that the information describing a system (which we can still talk about in terms of traces) remains invariant with time. In the classical sense, the corresponding violation leads to probabilities not summing to 1! We clearly can't have information shrink, and for pure systems, dropping distance-preserving maps leads to a really awesome universe (I believe this also ends up highly recommending L2). We literally go from a universe that is almost certainly near the bottom end of the Slow Zone of Thought to the Upper Beyond (<a href="https://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep#Setting" rel="nofollow">https://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep#Setting</a>). We gain non-locality, causality violations and powerful computational ability.<p>In practice, our confusion about a system does increase with time as quantum systems become ever more correlated with their environment, losing distinguishability—aka decoherence.</p>
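The norm-preservation point is one line of algebra (the standard textbook identity, stated here for concreteness):

```latex
% Unitary evolution preserves inner products, hence total probability:
U^\dagger U = I
\;\Longrightarrow\;
\langle U\psi \mid U\phi \rangle
  = \langle \psi \mid U^\dagger U \mid \phi \rangle
  = \langle \psi \mid \phi \rangle,
\qquad
\lVert U\psi \rVert^2 = \lVert \psi \rVert^2 = 1 .
```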
]]></description><pubDate>Wed, 26 Aug 2015 07:55:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=10121681</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=10121681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10121681</guid></item><item><title><![CDATA[New comment by Dn_Ab in "The Web We Have to Save"]]></title><description><![CDATA[
<p>In nature, decentralized networks are more the norm. You find them in neural, gene, protein and metabolic networks. You find them in the mycorrhizal networks of forests and in various self-organizing systems. Food webs, IIRC, are even more random—as far from centralized as you can get.<p>These networks with small-world properties strike a highly pragmatic balance. They are far more robust to insult than centralized networks (though not as robust as random ones) while propagating information much more efficiently than more random networks do.</p>
]]></description><pubDate>Mon, 03 Aug 2015 16:48:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=9997615</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9997615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9997615</guid></item><item><title><![CDATA[New comment by Dn_Ab in "A Polynomial Time Bounded-Error Quantum Algorithm for Boolean Satisfiability"]]></title><description><![CDATA[
<p>The short version is: if this is correct (which is exceedingly unlikely) then either quantum mechanics is wrong or the question of whether P=NP has just become irrelevant. This is because, as Scott Aaronson is often forced to tirelessly point out, an inability to build a quantum computer would also show quantum mechanics to somehow be in error. If instead we can build a quantum computer and this result is true, then the universe has suddenly become an incredibly more interesting place (for one example, building this would also solve AI).<p>The result also has consequences for quantum computation specifically, in that it also takes care of the negative sign problem, which arises when simulating certain kinds of important quantum many-body systems.<p>And last but certainly not least, all those press releases breathlessly proclaiming that quantum computers work by trying a gazillion possibilities at once (with no worries about the plausibility of actually reading out the correct answer with non-negligible probability) weren't so far off after all.<p>All in all, either there is a mistake hidden somewhere in the preprint or this is the greatest scientific achievement since evolution invented human-level intelligence.<p>See here for intuition on why NP-complete problems, if at all solvable without brute force, <i>should</i> also be efficiently solvable: <a href="http://windowsontheory.org/2013/05/06/reasons-to-care-in-honor-of-scott-aaronson/" rel="nofollow">http://windowsontheory.org/2013/05/06/reasons-to-care-in-hon...</a></p>
]]></description><pubDate>Wed, 22 Jul 2015 14:07:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=9929704</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9929704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9929704</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Drug perks up old muscles and aging brains"]]></title><description><![CDATA[
<p>I must very strongly disagree with you. ScienceDaily is as good an aggregator as you'll find. Sometimes I'll even purposely seek out their write-ups on a topic because of one thing they almost always do, while almost everyone else doesn't: make it easy to find the source paper.<p>I rarely ever stop to read their replicated press releases—read the press release and read the article; they're basically the same thing. ScienceDaily just regurgitates source articles; any grievance one has with an article should instead be taken up with the issuing university. It's why there's such high variance in the quality of ScienceDaily articles.<p>They provide a link here: <a href="http://www.eurekalert.org/pub_releases/2015-05/uoc--dpu051215.php" rel="nofollow">http://www.eurekalert.org/pub_releases/2015-05/uoc--dpu05121...</a><p>And provide the journal reference:<p><pre><code>    David Schaffer et al. Systemic attenuation of the TGF-β pathway by a single drug simultaneously rejuvenates hippocampal neurogenesis and myogenesis in the same old mammal. Oncotarget, May 201
</code></pre>
I find that an incredibly useful service since, often, even press releases can't be bothered to link to, or at least write the name of, the paper under discussion.</p>
]]></description><pubDate>Fri, 15 May 2015 20:02:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=9553283</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9553283</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9553283</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Artificial-Intelligence Experts Are in High Demand"]]></title><description><![CDATA[
<p>SVMs were invented by a couple of statisticians/mathematicians in the 60s. k-means also harkens back to the 60s, courtesy of mathematicians and control theorists. Decision trees and random forests were invented by a famous statistician, with the latter related to bootstrapping, a statistical technique. PCA and factor analysis, forms of (or closely related to) low-rank matrix approximation, were pioneered in the early 1900s by some of the most famous statisticians ever.</p>
]]></description><pubDate>Mon, 04 May 2015 16:55:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=9487376</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9487376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9487376</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Introducing F# 4.0"]]></title><description><![CDATA[
<p>> > Would, say, a Ruby or node.js-oriented web developer find the F#/.NET community lacking, incomplete, unfriendly, uncool? Is there even a community to speak of?<p>> The answer in my opinion is yes. This may change, given recent efforts by Microsoft, however evolution will be slow because the whole ecosystem has to change, not just Microsoft.<p>A suspicion gnaws at me. I try to rid myself of it but alas, it holds fast and unyielding. That suspicion being: you know very little about the F# community. The statements you make seem to be descendants of generalizations from <i>your</i> experience with the .NET community. Happily, F# is not like the <i>stereotypical</i> .NET community (I must be so specific, in order that I not offend the subset to whom your claims do not apply). There is a strong culture of embracing open tools and code; Microsoft is not looked to for direction beyond the basic support it provides.<p>The community is neither lacking nor unfriendly. It is small, yes. Uncool? Opposite of! But opinions run with tastes. Incomplete is a tricky matter. Certainly it's not going to have as many libraries as Python or the JVM, but neither is it some kind of backwater.<p>There are excellent build tools like FAKE and Paket. Awesome ideas like—why, take a look at how fantastic the HTML type provider is, and it works in real life too! Most of the time =) [<a href="http://fsharp.github.io/FSharp.Data/library/HtmlProvider.html" rel="nofollow">http://fsharp.github.io/FSharp.Data/library/HtmlProvider.htm...</a>]<p>There are tons of cool libraries available, most of them open source, with generous licensing terms. 
Stuff with Haskell heritage like FParsec, FsCheck and the blazing FsPickler combinators (a serializer), or more computation-oriented tools that let you target and run on the GPU. Or here: automatic differentiation, with which loss-minimizing machine learning algorithms become much easier [<a href="http://gbaydin.github.io/DiffSharp/" rel="nofollow">http://gbaydin.github.io/DiffSharp/</a>]. Or the data frames library: <a href="http://bluemountaincapital.github.io/Deedle/" rel="nofollow">http://bluemountaincapital.github.io/Deedle/</a><p>Type providers also allow easy use of UI builders. Web frameworks, and also js targeting, can be reached here: <a href="http://fsharp.org/guides/web/" rel="nofollow">http://fsharp.org/guides/web/</a>. You can target Unity3D and apparently also the Unreal Engine.<p>There are interesting projects looking at distributed computation (<a href="https://github.com/fsprojects/Cricket" rel="nofollow">https://github.com/fsprojects/Cricket</a> and its ilk) or lightweight concurrency, in <i>Hopac</i>'s take on the Pi Calculus.<p>F# was one of the earlier languages with lightweight threads, a solid async story and first-class events (pre-dating the reactive trend). Active patterns (not unique to F#, but more common there) take us close to predicate dispatch (<a href="http://c2.com/cgi/wiki?PredicateDispatching" rel="nofollow">http://c2.com/cgi/wiki?PredicateDispatching</a>). There is much more I could list and hopefully I have piqued some interest.<p>But it's not perfect. Adding functor support would be very useful. A while ago, in an early active patterns paper, there were hints that generalized algebraic datatypes might soon be introduced. Nothing came of that. 
Higher-kindedness is nice but not as much missed—it is my suspicion that the gains from each additional level of type parameterization quickly saturate.<p>There are lots of cool projects going on in F#, and while as a language it's definitely not as powerful as, say, Scala or Haskell, the tools, libraries and environment, the alternate ideas, as well as the breezy syntax, make up for it. Having used them all, I wouldn't say it is any less expressive; it just...prioritizes differently beyond the core functional ideas (REPL, sum/product types, currying, closures, point-free application where possible, Hindley-Milner inference, immutability by default, pattern matching and deconstruction, etc., etc.).<p>It strikes a lot of middle ground across many planes, in terms of pragmatism vs functional purity—that is its ML heritage (but perhaps even more so, bargained in exchange for the .NET ecosystem). Most functional languages focus on types in terms of algebra; F# does too, but only basically. Instead it focuses more on easily bridging types with unruly data.</p>
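The async story mentioned above is itself a computation expression; a minimal self-contained sketch of my own (toy example, no real I/O):

```fsharp
// `async { ... }` builds a lightweight, composable computation;
// `do!`/`let!` suspend without blocking an OS thread.
let delayedAdd x y = async {
    do! Async.Sleep 10
    return x + y
}

let results =
    [ delayedAdd 1 2; delayedAdd 3 4 ]
    |> Async.Parallel          // run both concurrently
    |> Async.RunSynchronously  // [| 3; 7 |]
```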
]]></description><pubDate>Fri, 01 May 2015 04:06:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=9469336</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9469336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9469336</guid></item><item><title><![CDATA[New comment by Dn_Ab in "CSV Challenge"]]></title><description><![CDATA[
<p>There are actually 2 dates, the 24th and the 25th, in the data sample.<p>You can do this in a manner that's both fairly comprehensible and succinct, for an arbitrary number of dates, using the JSON type provider in F#.<p><pre><code>  #r @"../FSharp.Data.dll"
  open FSharp.Data  
  open System

  type PersonsData = JsonProvider<"../data.sample.json">

  let dateTriple (d:PersonsData.Root) = d.Timestamp.Year, d.Timestamp.Month,d.Timestamp.Day 
  
  let info = PersonsData.Load ("./data.json")
  
  let uniqueDates = info |> Array.map dateTriple |> set
  
  let createCsv (d : PersonsData.Root seq) =    
    d |> Seq.filter (fun p -> Option.isSome p.Creditcard)
      |> Seq.map (fun p -> sprintf "%s,%s" p.Name p.Creditcard.Value )   
      |> String.concat "\n"

  info |> Seq.groupBy dateTriple
       |> Seq.iter (fun ((y,m,d), data) ->
         IO.File.WriteAllText (sprintf "%d%02d%02d.csv" y m d, createCsv data))
</code></pre>
With the below as a sample (though the dataset itself could have been used, since it's not so large):<p><pre><code>    [{"name":"Quincy Gerhold","email":"laron.cremin@macejkovic.info","city":"Port Tiabury","mac":"64:d2:17:ff:28:13","timestamp":"2015-04-25 15:57:12 +0700","creditcard":null},{"name":"Lolita Hudson","email":"tracy.goodwin@schmidt.com","city":"Port Brookefurt","mac":"2d:20:78:41:8e:35","timestamp":"2015-04-25 23:20:21 +0700","creditcard":"1211-1221-1234-2201"}]</code></pre></p>
]]></description><pubDate>Sat, 25 Apr 2015 17:42:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=9438968</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9438968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9438968</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Female Chimps Seen Making, Wielding Spears"]]></title><description><![CDATA[
<p>Other excellent stories in this genre: Blood Music by Greg Bear, Permutation City and Crystal Nights (<a href="http://ttapress.com/553/crystal-nights-by-greg-egan/" rel="nofollow">http://ttapress.com/553/crystal-nights-by-greg-egan/</a>) by Greg Egan. Marooned in Realtime by Vinge had this a bit too—probably closest thematically to the GP's comment. A bunch of people wake up to a world that's been empty of humans for many thousands of years; the book spends some time talking about how some animals evolved.</p>
]]></description><pubDate>Fri, 17 Apr 2015 23:13:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=9397714</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9397714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9397714</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Timeless Decision Theory [pdf]"]]></title><description><![CDATA[
<p>I might be mistaken, but it seems to me that the predictor would also need to be able to solve the halting problem. Additionally, if the entity the source code describes is so simple that it can be predicted without full simulation, can we really say it had free will?</p>
]]></description><pubDate>Sat, 04 Apr 2015 22:16:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=9322416</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9322416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9322416</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Ushering in the age of Hypercapitalism with the Blockchain"]]></title><description><![CDATA[
<p>It depends; some people distinguish <i>economic rent</i> from <i>economic profit</i> by restricting it to economic profits that cannot be reduced to <i>normal</i> by competition in the long run. Some additionally include profits without any opportunity costs.<p>Economic rents run counter to properly functioning markets, e.g. rents sustained by artificial barriers such as patents or counterproductive regulation.<p>The key tenet of perfect competition is that in the long run, only normal profits exist. Since economic rent stands contrary to this, it cannot be a fundamental law of capitalism. In fact, economic rent is a sign of market failure. But this is reality and markets can't be perfect, so we aim for good tax policies to redress the imbalance. Look at the section on rents here: <a href="http://www.economist.com/economics-a-to-z/r#node-21529784" rel="nofollow">http://www.economist.com/economics-a-to-z/r#node-21529784</a><p>As an aside, people think Thiel's idea on monopoly is controversial, but really it's just a catchy way of portraying the observation that, since competition in well-functioning markets drives economic profit to zero, you should always be seeking advantages (innovating) and situations that allow you to (temporarily) extract economic profit.</p>
]]></description><pubDate>Sat, 28 Mar 2015 17:55:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=9282179</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=9282179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9282179</guid></item><item><title><![CDATA[New comment by Dn_Ab in "Computers Conquer Heads-Up Limit Hold'em"]]></title><description><![CDATA[
<p>This uses a particular form of a fundamentally simple yet surprisingly powerful class of learning algorithms called regret minimization. CFR is interesting in and of itself, as it specializes regret minimization to play extensive-form games. There are also CFR algorithms for multiplayer and no-limit games, and though the guarantees of optimality are no longer there, the players are still strong (but, for now, far from expert level).<p>The article states that this algorithm is weak against bad players, but that's more an artifact of resources and training method; one advantage of minimizing regret on games, instead of using linear programming, is that online learning versions can adapt to exploit poor play, with payoff larger than the game's value.<p>I've also posted here before about how regret minimization solves two-player zero-sum games more efficiently than linear programming, and how it's related to boosting, portfolio optimization, and natural selection as an abstraction.<p><a href="http://www.pnas.org/content/111/29/10620.full" rel="nofollow">http://www.pnas.org/content/111/29/10620.full</a></p>
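<p>To give a flavor of how simple the core idea is, here's a toy regret-matching sketch (the basic update that CFR builds on). The payoff matrix, opponent mix, and all names are made up for illustration and are not from the paper or the article; it just shows a learner shifting its average strategy toward the best response against a fixed, rock-heavy opponent:

```fsharp
// Toy regret matching, illustrative only: learn against a fixed,
// biased rock-paper-scissors opponent who overplays rock.
let payoff =                        // payoff.[mine].[theirs], row player
    [| [|  0.0; -1.0;  1.0 |]       // rock
       [|  1.0;  0.0; -1.0 |]       // paper
       [| -1.0;  1.0;  0.0 |] |]    // scissors

let opponent = [| 0.5; 0.3; 0.2 |]  // fixed opponent mix (mostly rock)

// Normalize positive regrets into a mixed strategy (uniform if none).
let strategyFrom (regrets: float[]) =
    let pos = regrets |> Array.map (max 0.0)
    let total = Array.sum pos
    if total > 0.0 then pos |> Array.map (fun r -> r / total)
    else Array.create regrets.Length (1.0 / float regrets.Length)

// Repeatedly play, accumulate regret for each action not taken,
// and return the average strategy over all iterations.
let train iterations =
    let regrets = Array.create 3 0.0
    let stratSum = Array.create 3 0.0
    for _ in 1 .. iterations do
        let s = strategyFrom regrets
        Array.iteri (fun a p -> stratSum.[a] <- stratSum.[a] + p) s
        // expected payoff of each pure action vs the opponent's mix
        let u = Array.init 3 (fun a ->
                    Array.fold2 (fun acc p v -> acc + p * v) 0.0 opponent payoff.[a])
        let ev = Array.fold2 (fun acc p ua -> acc + p * ua) 0.0 s u
        for a in 0 .. 2 do
            regrets.[a] <- regrets.[a] + (u.[a] - ev)
    let t = Array.sum stratSum
    stratSum |> Array.map (fun x -> x / t)

let avg = train 10000
```
<p>After a few thousand iterations the average strategy concentrates on paper, the best response to that opponent. In self-play, with two learners updating against each other, the same loop drives the average strategies toward equilibrium, which is (roughly) what CFR does at every information set of the game tree.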
]]></description><pubDate>Thu, 08 Jan 2015 21:49:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=8859247</link><dc:creator>Dn_Ab</dc:creator><comments>https://news.ycombinator.com/item?id=8859247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8859247</guid></item></channel></rss>