<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: thisoneforwork</title><link>https://news.ycombinator.com/user?id=thisoneforwork</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 03:51:15 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=thisoneforwork" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by thisoneforwork in "Meltdown Proof-of-Concept"]]></title><description><![CDATA[
<p>Would "flushing on ignore" not leave the cache side channel open for many instructions before the abort?</p>
]]></description><pubDate>Tue, 09 Jan 2018 21:45:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=16110426</link><dc:creator>thisoneforwork</dc:creator><comments>https://news.ycombinator.com/item?id=16110426</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16110426</guid></item><item><title><![CDATA[New comment by thisoneforwork in "Meltdown Proof-of-Concept"]]></title><description><![CDATA[
<p>Firstly, thanks for the question. As mentioned, I am not a CPU designer, nor am I trying to teach Intel what to do; I am more relying on the hive mind to see whether I have the right idea.<p>A second instruction in the pipeline would read from the above-mentioned L0 cache (let us call it a load buffer), much as it reads tentative memory stores from the store buffer today.<p>Also, two memory fetches in parallel do not take twice as long as one fetch, if that were the solution (which I guess it would not be, as I imagine race conditions appearing).</p>
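<p>A toy sketch of the forwarding I have in mind (all names are mine, purely illustrative, not real hardware): a later in-flight load checks the per-flow load buffer before the cache hierarchy, the same way a load can be satisfied from the store buffer.</p>

```python
# Toy model: a dependent load prefers the speculative flow's private
# L0 load buffer over the shared cache, analogous to store-to-load
# forwarding. All names here are hypothetical -- a sketch, not hardware.

def lookup(addr, l0_buffer, cache):
    """Return (value, source) for a load, preferring the L0 buffer."""
    if addr in l0_buffer:            # forwarded from the speculative flow
        return l0_buffer[addr], "l0"
    if addr in cache:                # ordinary cache hit
        return cache[addr], "cache"
    return None, "memory"            # would go out to memory
```

<p>The point of preferring the buffer is that a second speculative instruction sees its predecessor's tentative load without that load ever touching the probeable cache.</p>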
]]></description><pubDate>Tue, 09 Jan 2018 21:33:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=16110329</link><dc:creator>thisoneforwork</dc:creator><comments>https://news.ycombinator.com/item?id=16110329</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16110329</guid></item><item><title><![CDATA[New comment by thisoneforwork in "Meltdown Proof-of-Concept"]]></title><description><![CDATA[
<p>Not a CPU designer, but my guess is that they will move the cache management logic from the MMU to the µOP scheduler, which will commit to cache on retirement of the speculatively executed instruction. They would then need to introduce some sort of L0 cache, accessible only at the microarchitectural level, bound to a speculative flow, and flushed at retirement.</p>
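<p>As a purely illustrative toy model of that scheme (the class and method names are my own invention, not real hardware): speculative loads land only in a private L0 buffer, commit to the shared cache at retirement, and are discarded on mis-speculation.</p>

```python
# Toy model of the proposed scheme: speculative loads warm only a
# private, microarchitectural L0 buffer; they become visible in the
# shared cache only when the instruction retires, and an aborted
# (mis-speculated) flow is flushed without committing anything.
# All names are hypothetical -- a sketch, not real hardware.

class ToySpeculativeCore:
    def __init__(self):
        self.cache = set()   # architecturally probeable cache lines
        self.l0 = set()      # per-flow L0 buffer, microarchitectural only

    def speculative_load(self, line):
        # A speculative load touches only the private L0 buffer,
        # leaving the probeable cache unchanged.
        self.l0.add(line)

    def retire(self):
        # On retirement the tentative lines become architecturally
        # visible, and the L0 buffer is flushed.
        self.cache |= self.l0
        self.l0.clear()

    def abort(self):
        # On mis-speculation the L0 buffer is simply discarded, so a
        # Meltdown-style probe of the cache afterwards finds no trace.
        self.l0.clear()
```

<p>In this model, probing the cache after an abort reveals nothing, which is the whole point of committing only at retirement.</p>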
]]></description><pubDate>Tue, 09 Jan 2018 21:05:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=16110057</link><dc:creator>thisoneforwork</dc:creator><comments>https://news.ycombinator.com/item?id=16110057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16110057</guid></item><item><title><![CDATA[New comment by thisoneforwork in "A Map Showing How Much Time It Takes English-Speakers to Learn Foreign Languages"]]></title><description><![CDATA[
<p>As is sometimes said:<p>Die Möglichkeiten der deutschen Grammatik können einen, wenn man sich drauf, was man ruhig, wenn man möchte, sollte, einlässt, überraschen.<p>(Roughly: "The possibilities of German grammar can, if you let yourself in for them, which, if you like, you safely should, surprise you.")</p>
]]></description><pubDate>Fri, 01 Dec 2017 10:23:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=15823037</link><dc:creator>thisoneforwork</dc:creator><comments>https://news.ycombinator.com/item?id=15823037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15823037</guid></item><item><title><![CDATA[New comment by thisoneforwork in "Compute Engine machine types with up to 96 vCPUs and 624GB of memory"]]></title><description><![CDATA[
<p>This is not a SKU to play with. If $20/hr is indeed the price (I don't know), that is the hourly cost of a couple of waiters. You get to run SAP on someone else's infra, with someone to support it.</p>
]]></description><pubDate>Mon, 09 Oct 2017 19:49:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=15436870</link><dc:creator>thisoneforwork</dc:creator><comments>https://news.ycombinator.com/item?id=15436870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15436870</guid></item><item><title><![CDATA[New comment by thisoneforwork in "Microsoft gives up on Windows 10 Mobile"]]></title><description><![CDATA[
<p>Disclosure: I work at Microsoft; I have worked for Microsoft before, quit, and come back. I love the company, but I am no zealot (I tried to standardize a company on Macs during my six years away, because it made sense). I never worked on Windows Phone, but I know the company and the tech well.<p>Here is where we fucked up:<p>1. We were, for a long time, a company where every product/business group had to pay for its own right to exist. Everyone had their own P&L, contribution margin targets, and marketing. You had to make money by yourself to stay alive. KT (Kevin Turner) made sure we all understood this.<p>2. We had a history of "fast follower" successes: Windows, Word, Windows Server, SQL Server, Exchange, IE, even Intune nowadays, and many others became successful not by disrupting the current market leader or by hardcore innovation, but by leveraging either an open or standard platform and always getting better, without trying to rewrite the rules of the game. OK, maybe Office rewrote them when it came out, but that was packaging.<p>3. Ballmer (whom I love as a leader) got trolled by Apple's and Google's success, and by Microsoft graduating from not really cool to quite uncool. So he decided to tackle them the way it had worked before (point 2). Simultaneously, he tried to correct point 1, but, as radical as his 2013 reorganization to break org barriers was, he did not get rid of KT. KT brought in the money; KT defined the culture. Everyone had to keep making their own money.<p>We could have:
<p>- Offered the mobile OS for free from day one.
<p>- Given Office on Mobile away for free from day one.
<p>- Bought or OEMed Xamarin a lot sooner.
<p>- Returned 100% of app revenue to the devs who sell through the Windows Store.
<p>- Made dev tools (Studio CE) free earlier.
<p>- Guaranteed no data collection (remember the Scroogled campaign…?)<p>All of those have either been done or are irrelevant now, while the stock is still at a record high, after we lost the game... We could have done all of the above and fared better than we have, and we have fared well.<p>Instead, we comp hardware sellers on MARGIN, as if it makes a bloody difference. We monetize the post-install experience. All bullshit for pennies. Everyone had to make money on their own, so we missed the bigger picture.<p>Satya fixed this, and it hurt, as it was the only way left to go. I gave up on a phone I really liked, as I saw no future.<p>I don't know if I should hope for us bringing new phones out, but I sure hope we never again let our Operating Mechanisms kill our ability to see the big picture and disrupt the market.</p>
]]></description><pubDate>Mon, 09 Oct 2017 19:44:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=15436831</link><dc:creator>thisoneforwork</dc:creator><comments>https://news.ycombinator.com/item?id=15436831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15436831</guid></item><item><title><![CDATA[New comment by thisoneforwork in "Compute Engine machine types with up to 96 vCPUs and 624GB of memory"]]></title><description><![CDATA[
<p>Azure's M128 VM size has 128 vCPUs and 2048GB of memory already.</p>
]]></description><pubDate>Fri, 06 Oct 2017 12:58:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=15416420</link><dc:creator>thisoneforwork</dc:creator><comments>https://news.ycombinator.com/item?id=15416420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15416420</guid></item></channel></rss>