<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dimask</title><link>https://news.ycombinator.com/user?id=dimask</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 22:09:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dimask" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dimask in "Baseline pupil size related to cognitive ability in proper lighting conditions"]]></title><description><![CDATA[
<p>What is not well done in this study?</p>
]]></description><pubDate>Fri, 06 Sep 2024 07:44:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=41463905</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=41463905</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41463905</guid></item><item><title><![CDATA[New comment by dimask in "Qwen2-Math"]]></title><description><![CDATA[
<p>1) Then more math should get formalised in Lean.<p>2) How is a solution by an LLM supposed to be verified without such a formalisation?</p>
]]></description><pubDate>Thu, 08 Aug 2024 18:52:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=41194924</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=41194924</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41194924</guid></item><item><title><![CDATA[New comment by dimask in "We need visual programming. No, not like that"]]></title><description><![CDATA[
<p>The spatial reasoning involved in reading code does not happen along the dimensions of the literal text, or at least not only along these. It happens in how we interpret the code and build relations in our minds while doing so. So I think the problem is not the spatial reasoning of what we literally see per se, but whether the specific representation actually helps with anything. I like visual representations for the explanatory value they can offer, but if one tries to work rigorously on a kind of spatial algebra of them, that explanatory power can be lost past some point of complexity.<p>I guess there may be contexts where a visual language works well, but in the contexts I have encountered I have not found them helpful. If anything, the more complex a problem is, the more cluttered the visual form ends up being, and it feels like it overloads my visual memory. I do not think this is a geometric feature or advantage per se; it is about how the brains of some people work. I like visual representations and I am in general quite a visual thinker, but I do not want to see all these minuscule details in there; I want them to represent what I want to understand. Text, on the other hand, serves better as a form of (human-oriented) compression of information, imo, which makes it better for working on those details.</p>
]]></description><pubDate>Mon, 15 Jul 2024 01:42:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=40964650</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40964650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40964650</guid></item><item><title><![CDATA[New comment by dimask in "We need visual programming. No, not like that"]]></title><description><![CDATA[
<p>I do not think they are saying that it is hard to visualise, but that doing so does not offer much utility. A "for" loop like that is not that complicated to understand, and visualising it externally does not add much. The examples the article gives are about more abstract and general overviews of higher-level aspects of a codebase or system, or about explaining some concept that may be less intuitive or complicated. In general, it is less about trying to be formal and rigorous, and more about being explanatory and auxiliary to the code itself.</p>
]]></description><pubDate>Mon, 15 Jul 2024 01:12:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=40964537</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40964537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40964537</guid></item><item><title><![CDATA[New comment by dimask in "CVE-2024-6409: OpenSSH: Possible remote code execution in privsep child"]]></title><description><![CDATA[
<p>At least they do not name them after themselves.</p>
]]></description><pubDate>Wed, 10 Jul 2024 09:43:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=40925234</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40925234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40925234</guid></item><item><title><![CDATA[New comment by dimask in "My finetuned models beat OpenAI's GPT-4"]]></title><description><![CDATA[
<p>Well, I was talking precisely about things I have engaged with professionally. Obviously that cannot cover everything one might do; I do not build chatbots for customer service or anything like that, so I cannot speak for all possible applications of LLMs and how useful they may be. I am pretty sure there will be useful applications in fields I am not, and will not be, engaged in, as nobody engages with everything. However, some other things I have tried (eg copilots, summarising scientific articles) imo generate much more hype than real value. They can be a bit useful if you know what to actually use them for and what their limits are, but nowhere close to the hype they generate, and tbh I just find myself googling again. They are absolutely horrible with more niche subjects and areas. Data extraction and structuring, on the other hand, has quite universal application, has already demonstrated usefulness and potential, and seems a realistic, down-to-earth application that I am happy to see other people and startups working on. Not as fancy, and harder to build hype upon, but very useful regardless.</p>
]]></description><pubDate>Mon, 01 Jul 2024 19:45:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=40849804</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40849804</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40849804</guid></item><item><title><![CDATA[New comment by dimask in "My finetuned models beat OpenAI's GPT-4"]]></title><description><![CDATA[
<p>Thanks for putting in all this work and sharing it in such detail! Data extraction/structuring is the only serious application of LLMs I have actually engaged with for real work and found useful. I had to extract data from experience-sampling reports which I could not share online, so ChatGPT etc. was out of the question. There were sentences describing onsets and offsets of events, plus descriptions of what went on. I ran models through llama.cpp to turn these into CSV format with four columns (onset, offset, description, plus one for whether a specific condition was met in that event, which had to be interpreted from the description). Giving some examples in the prompt of how I wanted it all structured was enough for many different models to get it right. Mixtral 8x7b was my favourite because it ran the fastest at that quality level on my laptop.<p>I am pretty sure that a finetuned smaller model would be better and faster for this task. It would be great to start finetuning and sharing such smaller models: they do not have to be better than commercial LLMs that run online, as long as they are at least not worse. They are already much faster and cheaper, which is a big advantage for this purpose. There is already a need for these tasks to run offline when one cannot share the data with OpenAI and the like. Higher speed and lower cost also allow for more experimentation with more specific finetuning and prompts, with less concern about prompt length and cost. This is an application where smaller, locally run, finetunable models can shine.</p>
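<p>For a rough idea of what this looks like, here is a minimal sketch of the few-shot prompt construction and output validation around such a local model. The column names, example reports, and helper names are all illustrative, not the actual prompts or data from my work; the model call itself (llama.cpp) is omitted.</p>

```python
import csv
import io

# Hypothetical worked examples shown to the model: free-text report in,
# one 4-column CSV row out (onset, offset, description, condition flag).
EXAMPLES = [
    ("Started eating at 9:00, finished around 9:30, alone.",
     "09:00,09:30,eating,no"),
    ("Was on a call with a friend from 14:00 until 14:45.",
     "14:00,14:45,phone call,yes"),
]

def build_prompt(report: str) -> str:
    """Assemble a few-shot extraction prompt to feed a local model."""
    lines = ["Extract each event as CSV: onset,offset,description,condition", ""]
    for text, row in EXAMPLES:
        lines.append(f"Report: {text}")
        lines.append(f"CSV: {row}")
        lines.append("")
    lines.append(f"Report: {report}")
    lines.append("CSV:")  # the model continues from here
    return "\n".join(lines)

def parse_output(raw: str) -> list[list[str]]:
    """Validate the model's raw completion as 4-column CSV; drop malformed rows."""
    rows = []
    for row in csv.reader(io.StringIO(raw.strip())):
        if len(row) == 4:
            rows.append([field.strip() for field in row])
    return rows
```

<p>Validating the completion against the expected column count is cheap insurance, since even good models occasionally emit chatter or a short row.</p>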
]]></description><pubDate>Mon, 01 Jul 2024 11:59:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=40844904</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40844904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40844904</guid></item><item><title><![CDATA[New comment by dimask in "Spaced repetition for teaching two-year olds how to read"]]></title><description><![CDATA[
<p>I do not think this conversation is being conducted productively (especially when phrases are cut in half to make them appear to make no sense), but I will try to get a couple of points across:<p>- I interpreted the comment in the context of the answer to a specific article/interview. In the linked content, for example, there is a video of a 2.5yo doing a "flashcard class". While I do not think there is anything inherently harmful about it, that is not how 2.5yos learn about the world, and even if it is not harmful, it is not necessary for a 2.5yo to sit on a chair doing a class in order to learn about the world. Their curiosity and their own exploration drive are enough to pull them into learning, and this is what I mean by saying parents should feed it: see what their kids are most curious about and interested in, and feed them inputs in that direction. The comment you answered was referring to this article, and I interpreted your answer in that context. If I misinterpreted anything, I can only see the context shared here, not the one in your mind.<p>- To reiterate and clarify the context: "speaking and reading to children is a natural activity" is _not_ what the OP was about. The OP was about applying a specific strategy to kids at 2+ years, ie learning to read using a specific exploitation-based approach. If that is all you meant by your previous comment, then you may want to reread the comment you answered from that perspective. Nobody here is saying "leave the kids to do what they want and do not bother interacting with them or talking around them", as you seem to suggest. When I say parents should build upon kids' own curiosity and exploration drive, I mean seeing what sort of inputs their kids become more curious about and interested in at a certain time, and feeding them inputs like that. When a kid starts being interested in sounds and music, feed them sounds and music and sound/music-related books and toys. There is no handbook that will say on exactly which month and day this should happen for a specific kid.<p>- I may indeed be missing a lot of knowledge, but I still find setting goals of "maximising language exposure" and "maximising IQ" weird and unclear. No, I have never read or heard of development and learning being approached this way. Parents doing their best and being mindful of the importance of language exposure is different from "maximising" anything. Maximising with respect to which parameters? Even framed as an optimisation problem, any complex problem like this is a tradeoff between different parameters and outcomes. What happens to the other parameters and outcomes when you optimise for just one?<p>- If "you do not speak the jargon" is what you prefer to focus on, just say so and no further discussion will be needed.</p>
]]></description><pubDate>Sun, 16 Jun 2024 10:02:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=40695921</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40695921</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40695921</guid></item><item><title><![CDATA[New comment by dimask in "ARC Prize – a $1M+ competition towards open AGI progress"]]></title><description><![CDATA[
<p>I think you are confusing "memory" with strategies based on memorisation. Yes, memorising (ie putting things into memory) is always involved in learning in some way, but that is too general and not what is being discussed here. "Compression is understanding" possibly holds to some extent, but understanding is not just compression; that would be a reduction of what understanding really is, as it involves a range of processes and contexts in which the understanding is actually enacted rather than purely "memorised" or applied, and that is fundamentally relational. It is so relational that it goes all the way down to how motor skills are acquired or spatial relationships are understood. It is no surprise that tasks like mental rotation correlate well with mathematical skills.<p>Current research in early mathematical education now focuses on teaching certain spatial skills to very young kids rather than (just) numbers. Mathematics is about understanding relationships, and that is not a detached kind of understanding that we can turn into an algorithm; it is deeply invested and relational between the "subject" and the "object" of understanding. Taking the subject, and all its relations with the world, out of the picture of learning processes is absurd, because they sit at the exact centre of them.</p>
]]></description><pubDate>Sat, 15 Jun 2024 18:47:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=40691814</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40691814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40691814</guid></item><item><title><![CDATA[New comment by dimask in "Spaced repetition for teaching two-year olds how to read"]]></title><description><![CDATA[
<p>I work in human developmental research and have never heard or read anybody making the claims you consider "bog standard child development science", and some of what you say is definitely not supported by the current understanding of human development.<p>For example,<p>> child language development milestones that are waymarked by age down to the month<p>is totally false. It is well known that developmental milestones are reached by children at different times, and even in different orders and sequences. This "down to the month" is pure nonsense for most milestones.<p>Young children are better served by being guided by their own curiosity, interests and exploration drive, which parents feed with varied inputs and build upon, than by anxious parents feeding them whatever terabytes of exploitation-intended information they think will "serve to maximize IQ".<p>Yes, reading to kids in certain ways (using numbers/spatial relationships/theory-of-mind stuff/interactively) has been found in some studies to correlate with certain outcomes, but there is nothing to suggest a relationship so linear that talking to a kid 24/7 since the womb will produce the next Einstein.</p>
]]></description><pubDate>Sat, 15 Jun 2024 18:21:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=40691615</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40691615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40691615</guid></item><item><title><![CDATA[New comment by dimask in "ARC Prize – a $1M+ competition towards open AGI progress"]]></title><description><![CDATA[
<p>It is not "just more text". That is an extremely reductive view of human cognition and experience that does no one any favours. Describing things in text collapses too many dimensions. Human cognition is multimodal. Humans are not computational machines; we are attuned to, and in a constant allostatic relationship with, the changing world around us.</p>
]]></description><pubDate>Wed, 12 Jun 2024 12:42:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=40657445</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40657445</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40657445</guid></item><item><title><![CDATA[New comment by dimask in "ARC Prize – a $1M+ competition towards open AGI progress"]]></title><description><![CDATA[
<p>> How many homework questions did your entire calc 1 class have? I'm guessing less than 100 and (hopefully) you successfully learned differential calculus.<p>Not just that: people learn mathematics mainly by _thinking over and solving problems_, not by memorising solutions to problems. During my mathematics education I had to practice solving many problems dissimilar to anything I had seen before. Even in the theory part, a lot of the work was actually about filling in details in proofs and arguments, and reformulating challenging steps (in words or drawings). My notes on top of a mathematical textbook amount to much more than the text itself.<p>People think that knowledge lies in the texts themselves; it does not. It lies in what those texts relate to and the processes they are part of, many of which are out in the real world and in our interactions. The original article is spot on that there is no AGI pathway in the current research direction. But there are huge incentives for ignoring this.</p>
]]></description><pubDate>Wed, 12 Jun 2024 08:27:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=40655824</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40655824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40655824</guid></item><item><title><![CDATA[New comment by dimask in "ARC Prize – a $1M+ competition towards open AGI progress"]]></title><description><![CDATA[
<p>Claims of isomorphism are really strong claims to make without backing them up with some kind of evidence.</p>
]]></description><pubDate>Wed, 12 Jun 2024 08:11:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=40655733</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40655733</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40655733</guid></item><item><title><![CDATA[New comment by dimask in "ARC Prize – a $1M+ competition towards open AGI progress"]]></title><description><![CDATA[
<p>> Would an intelligent but blind human be able to solve these problems?<p>Blind people can have spatial reasoning just fine. Visual =/= spatial [0]. Now, one would have to adapt the colour-based tasks to something that would be more meaningful for a blind person, I guess.<p>[0] <a href="https://hal.science/hal-03373840/document" rel="nofollow">https://hal.science/hal-03373840/document</a></p>
]]></description><pubDate>Wed, 12 Jun 2024 08:10:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=40655727</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40655727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40655727</guid></item><item><title><![CDATA[New comment by dimask in "AI Hype is completely out of control – especially since ChatGPT-4o [video]"]]></title><description><![CDATA[
<p>When they can outperform human infants in learning, eg in the data required to learn and in versatility, we can talk business.<p>Not all of the world is "big data".</p>
]]></description><pubDate>Mon, 10 Jun 2024 14:46:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=40634209</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40634209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40634209</guid></item><item><title><![CDATA[New comment by dimask in "AI Hype is completely out of control – especially since ChatGPT-4o [video]"]]></title><description><![CDATA[
<p>The problem is that, technically speaking, there has been only small, incremental progress in the year or so since the big ChatGPT boom to justify the hype. Most of the "progress" is basically marketing: making the models respond in ways humans like, or be more useful in certain practical applications. The basic, fundamental issues and limitations remain unanswered and unaddressed. As products, they have improved a lot and most probably will improve more. But if we are talking about moving towards AGI or more complex applications, I do not see evidence of that, except as toys.</p>
]]></description><pubDate>Mon, 10 Jun 2024 14:43:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=40634178</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40634178</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40634178</guid></item><item><title><![CDATA[New comment by dimask in "Leafy vegetables found to contain tire additives"]]></title><description><![CDATA[
<p>> That's not where they belong.<p>Where do they belong? When we set forever chemicals loose in the environment, it is expected that a quantity of them will reach those at the top of the food chain, which is us humans. Where are forever chemicals supposed to end up when we decide it is economically less costly to keep using them?</p>
]]></description><pubDate>Sat, 08 Jun 2024 17:02:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=40618886</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40618886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40618886</guid></item><item><title><![CDATA[New comment by dimask in "I was denied tenure – how do I cope?"]]></title><description><![CDATA[
<p>You do not teach git/GitHub as such; you teach students what the best practices (ie version control) are as part of working on their projects.</p>
]]></description><pubDate>Fri, 07 Jun 2024 13:18:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=40608446</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40608446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40608446</guid></item><item><title><![CDATA[New comment by dimask in "I was denied tenure – how do I cope?"]]></title><description><![CDATA[
<p>To be fair, people in the private sector typically make more money, at least, and have better work-related perks. Academia mostly advertises that 1. you get to research stuff you find interesting and 2. you will get job security at some point after a couple of postdocs. Currently neither 1 nor 2 applies to the vast majority of academic work.</p>
]]></description><pubDate>Fri, 07 Jun 2024 13:16:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=40608431</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40608431</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40608431</guid></item><item><title><![CDATA[New comment by dimask in "Raivo OTP just deleted all tokens after update and is now asking for money"]]></title><description><![CDATA[
<p>oops</p>
]]></description><pubDate>Tue, 04 Jun 2024 06:43:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=40571373</link><dc:creator>dimask</dc:creator><comments>https://news.ycombinator.com/item?id=40571373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40571373</guid></item></channel></rss>