<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: 0tfoaij</title><link>https://news.ycombinator.com/user?id=0tfoaij</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 19 Apr 2026 12:50:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=0tfoaij" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by 0tfoaij in "BritCSS: Fixes CSS to use non-American English"]]></title><description><![CDATA[
<p>The idea that American spelling and pronunciation have a better heritage than British English is a compelling one, especially since the claim that Southern and Appalachian accents are closer to those of the Founding Fathers and to Shakespearean English is a nice counterweight to the perception that these accents sound unintelligent and uneducated. But it's simply not true that one dialect has diverged more than the other - both have diverged, and in many cases substantially.<p>One of the common arguments is that the British accents most represented in American media - RP (there's a lot to criticise about RP, but that's another topic), Cockney (featured elsewhere on the thread and the internet in general, oi m8 you got a loicence for that?), and the general loss of rhoticity in BrE (and some AmE) accents - have diverged substantially. But to me the examples of Shakespearean English in original pronunciation sound closer to West Country accents than they do to any American accent. Note that there could be some bias here, as the speakers are British, but you get features like H-dropping which simply don't exist in AmE. It also wouldn't be fair to say any modern accent sounds even remotely close to this.<p>Shakespearean English:<p><a href="https://www.youtube.com/watch?v=qYiYd9RcK5M" rel="nofollow">https://www.youtube.com/watch?v=qYiYd9RcK5M</a><p><a href="https://www.youtube.com/watch?v=gPlpphT7n9s" rel="nofollow">https://www.youtube.com/watch?v=gPlpphT7n9s</a><p>Some good reddit threads on the matter:<p><a href="https://old.reddit.com/r/AskHistorians/comments/9ju72b/is_there_any_truth_to_the_narrative_that_the/" rel="nofollow">https://old.reddit.com/r/AskHistorians/comments/9ju72b/is_th...</a><p><a href="https://old.reddit.com/r/linguistics/comments/j3imwe/is_it_true_that_shakespeare_would_have_sounded/" rel="nofollow">https://old.reddit.com/r/linguistics/comments/j3imwe/is_it_t...</a><p><a href="https://old.reddit.com/r/AskAnthropology/comments/9oke84/is_american_english_really_closer_in_accent_and/" rel="nofollow">https://old.reddit.com/r/AskAnthropology/comments/9oke84/is_...</a><p>Spelling is another odd one, given that etymology and spelling are pretty interesting in general, at least up until the advent of the printing press. Both BrE and AmE have made some questionable decisions here. BrE standardised earlier and kept some Frenchisms like -ise (the OED maintains that -ize is correct, with -ise being valid), but this was likely because -ise is correct for some words like advertise or prise (which AmE dropped entirely for pry, and weirdly took up burglarize), and a universal -ise makes spelling easier. In some cases it's just because words and pronunciations arrived much later from French in BrE, whereas they came from Spanish and Italian in AmE. American spelling, on the other hand, was intentionally simplified, and although the spelling reform Webster wanted never truly happened (if it had, you'd be speaking the American languaj), it did lead to the replacement of -our with -or, -re with -er, -oe- with -e-, etc.<p>I'm biased, but I do prefer the etymological spelling, even if it means we say lieutenant differently.</p>
]]></description><pubDate>Fri, 21 Feb 2025 08:22:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=43125298</link><dc:creator>0tfoaij</dc:creator><comments>https://news.ycombinator.com/item?id=43125298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43125298</guid></item><item><title><![CDATA[New comment by 0tfoaij in "BritCSS: Fixes CSS to use non-American English"]]></title><description><![CDATA[
<p>Obliged to point out that Latin -or words were often spelled with -or, -our, and -ur in Old French as well. If you are using Wiktionary as a source, you have to click through to the Old French definitions to see the alternate forms, as well as parse the descendants table to see the derived forms that the simple etymology blurb often leaves out. Doing so, you can also see that Middle Dutch has 'coleur' (modern kleur), which very likely did not originate from Middle French given the timeframe.<p>The earliest quotations for colour in the Oxford English Dictionary are from around 1300, where it was spelled 'colur' (cf. Welsh), which, while post-Norman England, is not post-Norman English. For Norman/Angevin England the OED also has a quotation for honour as 'onur' listed as before 1200 (and again as 'onour' from around 1300). If you want to make a case for superfluous 'u's being added, a better example would be something like chancellor, where the 'u' was added in Middle English and later removed, rather than colour (or honour), where the 'u's have existed since the earliest quotations. The reason color and honor stuck around in English is most likely that this is how they were spelled in Latin.</p>
]]></description><pubDate>Fri, 21 Feb 2025 06:45:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=43124711</link><dc:creator>0tfoaij</dc:creator><comments>https://news.ycombinator.com/item?id=43124711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43124711</guid></item><item><title><![CDATA[New comment by 0tfoaij in "Extracting financial disclosure and police reports with OpenAI Structured Output"]]></title><description><![CDATA[
<p>OpenAI stopped releasing information about their models after GPT-3, which was 175b, but the leaks and rumours that GPT-4 is an 8x220 billion parameter model are most certainly correct. 4o is likely a distilled 220b model. Other commercial offerings are going to be in the same ballpark. Comparing these to Llama 3 8b is like comparing a bicycle or a car to a train or a cruise ship when you need to transport a few dozen passengers at most. There are local models in the 70-240b range that are more than capable of competing with commercial offerings, if you're willing to settle for anything short of the bleeding edge.</p>
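<p>For a rough sense of what those parameter counts mean in practice, here's a back-of-the-envelope sketch (the helper name is mine, the bytes-per-parameter figures are just standard weight precisions, and the rumoured GPT-4 total is taken from the leak mentioned above):<p><pre><code>def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory for the weights alone; ignores KV cache
    and activation overhead."""
    return params_billions * bytes_per_param  # 1b params at 1 byte each ~= 1 GB

# Llama 3 8b, the 70-240b local range, and the rumoured 8x220b GPT-4 total
for params in (8, 70, 240, 8 * 220):
    print(f"{params}b: {weight_memory_gb(params, 2.0):.0f} GB at fp16, "
          f"{weight_memory_gb(params, 0.5):.0f} GB at ~4-bit")
</code></pre><p>That gap (16 GB vs roughly 3.5 TB at fp16) is the bicycle-versus-cruise-ship difference.</p>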
]]></description><pubDate>Mon, 14 Oct 2024 19:17:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=41840877</link><dc:creator>0tfoaij</dc:creator><comments>https://news.ycombinator.com/item?id=41840877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41840877</guid></item><item><title><![CDATA[New comment by 0tfoaij in "ARIA: An Open Multimodal Native Mixture-of-Experts Model"]]></title><description><![CDATA[
<p>MoEs still require the total number of parameters (46b, not 56b - only the feed-forward layers are per-expert, the rest is shared) to be in RAM/VRAM, but the benefit is that inference speed is determined by the number of active parameters, which in the case of Mixtral is 2 experts at 7b each, for an inference speed comparable to a 14b dense model. This 3x improvement in inference speed would be worth the additional RAM usage alone, especially for CPU inference, where memory bandwidth rather than total memory capacity is the limiting factor. As a bonus, there's a general rule you can use to estimate how well an MoE will compare to dense models: take the square root of active parameters times total parameters (i.e. their geometric mean), which means Mixtral ends up comparing favourably to 25b dense models, for example. In the case of ARIA, it's going to have the memory usage of a 25b model with the performance of a ~10b model while running as fast as a 4b model. That's a nice trade-off to make if you can spare the additional RAM.<p>If it helps, MoEs aren't disparate 'expert' models trained on specific domain knowledge and jammed into a bigger model; rather, they're the same base model trained together, where each expert ends up specialising on individual tokens. As the image dartos linked shows, you can end up with some 'experts' in the model that really, really like placing punctuation or language syntax, for whatever reason.</p>
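<p>A minimal sketch of that rule of thumb (the function name is mine, and the parameter counts are just the rough figures quoted above, not official specs):<p><pre><code>import math

def dense_equivalent(active_b: float, total_b: float) -> float:
    """Rule-of-thumb dense-equivalent size for an MoE:
    the geometric mean of active and total parameters (in billions)."""
    return math.sqrt(active_b * total_b)

# Mixtral 8x7B: ~14b active (2 x 7b experts), ~46b total
print(f"Mixtral: ~{dense_equivalent(14, 46):.0f}b dense-equivalent")  # ~25b
# ARIA: ~4b active, ~25b total
print(f"ARIA:    ~{dense_equivalent(4, 25):.0f}b dense-equivalent")   # ~10b
</code></pre>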
]]></description><pubDate>Fri, 11 Oct 2024 14:11:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41809622</link><dc:creator>0tfoaij</dc:creator><comments>https://news.ycombinator.com/item?id=41809622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41809622</guid></item></channel></rss>