<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: quantumet</title><link>https://news.ycombinator.com/user?id=quantumet</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 09:52:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=quantumet" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by quantumet in "AWS North Virginia data center outage – resolved"]]></title><description><![CDATA[
<p>They are, sometimes. Google built this one in Finland in 2011 at the site of an old paper mill, which was already set up to draw water from the Baltic Sea (which isn't as salty as the Atlantic, but still isn't fresh water):<p><a href="https://datacenters.google/locations/hamina-finland/" rel="nofollow">https://datacenters.google/locations/hamina-finland/</a><p>> Using a cooling system with seawater from the Bay of Finland and a new offsite heat recovery facility, our Hamina data centre is at the forefront of progressing our sustainability and energy-efficiency efforts.</p>
]]></description><pubDate>Fri, 08 May 2026 23:17:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=48069952</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=48069952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48069952</guid></item><item><title><![CDATA[New comment by quantumet in "Does Not Translate"]]></title><description><![CDATA[
<p>> Then there's plenty examples where a direct English equivalent exists even if it's not a literal translation of the original: nachäffen (de) = na-apen (nl) = mimicking (en), but English lacks the animal connotation (Affe == aap == ape)<p>Even here, English has the term 'aping': <a href="https://www.google.com/search?q=define%3Aaping" rel="nofollow">https://www.google.com/search?q=define%3Aaping</a>, which carries the same meaning and retains the same animal connotation.</p>
]]></description><pubDate>Fri, 14 Jan 2022 23:45:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=29942037</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=29942037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29942037</guid></item><item><title><![CDATA[New comment by quantumet in "How to Get Fired Using Switch Statements and Statement Expressions (2016)"]]></title><description><![CDATA[
<p>My favorite "extremely compact C" style:<p><pre><code>   if (*len * ("11124811248484"[*type < 14 ? *type:0]-'0') > 4) { ... }
</code></pre>
because naming lookup tables is clearly too verbose. Among other interesting decisions.</p>
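For contrast, a minimal sketch of the same check with the table named and typed. The decoding of the digit string as a per-type element size is my reading of the snippet, and `payload_bytes` is a hypothetical name, not anything from the original codebase:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical unpacking of the magic string "11124811248484":
 * element size in bytes for each type code 0..13, with type 0 serving
 * as the fallback for out-of-range codes (mirroring `*type < 14 ? *type : 0`). */
static const int kTypeSize[14] = {1, 1, 1, 2, 4, 8, 1, 1, 2, 4, 8, 4, 8, 4};

/* Total payload size in bytes for `len` elements of the given type. */
static size_t payload_bytes(size_t len, int type) {
    int idx = (type >= 0 && type < 14) ? type : 0;
    return len * (size_t)kTypeSize[idx];
}
```

The guard then reads `if (payload_bytes(*len, *type) > 4) { ... }`, which at least survives a code review.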
]]></description><pubDate>Thu, 15 Oct 2020 22:57:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=24795306</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=24795306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24795306</guid></item><item><title><![CDATA[New comment by quantumet in "Experimental Nighttime Photography with Nexus and Pixel"]]></title><description><![CDATA[
<p>To first order, long exposures are probably less power-hungry - much of the sensor power is burned in the image readout, and longer exposures mean you're reading out images less often.<p>When collecting light, an image sensor pixel isn't really using up any active power (each pixel is basically a capacitor collecting electrons generated by light hitting the silicon).</p>
]]></description><pubDate>Tue, 25 Apr 2017 20:41:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=14197774</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=14197774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=14197774</guid></item><item><title><![CDATA[New comment by quantumet in "Experimental Nighttime Photography with Nexus and Pixel"]]></title><description><![CDATA[
<p>Mostly hardware.<p>Shortest explanation: The sensor's exposure time register has a maximum value.<p>Next shortest: On many sensors that register is actually in units of row readout time, which is itself configurable, so the exposure time can be made longer at the cost of slower image readout. In normal operation, readout has to happen at 30fps at least, so extra code is needed to switch to slower readout for extended exposure values. This code then needs validation, the image processing tuning tables need to be updated and verified for the new long exposure durations, and any preview glitches, etc., from resetting base sensor configurations need to be addressed. So it's a lot of extra work for a relatively niche feature on a smartphone.<p>Even longer: Many sensors also have an external shutter trigger pin for unlimited exposure duration, but that needs to be wired to the CPU, and all the SW considerations above still apply.</p>
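To make the units concrete, a sketch of the arithmetic only - the variable names are illustrative and not any particular sensor's register layout. Exposure is programmed as a row count, and each row takes a configurable number of pixel clocks, so stretching the line length stretches the maximum exposure at the cost of frame rate:

```c
#include <assert.h>
#include <math.h>

/* Illustrative only - register layouts vary by sensor. Exposure is
 * programmed as a number of rows; each row takes line_length_clocks
 * pixel clocks to read out. */
static double exposure_seconds(unsigned coarse_rows,
                               unsigned line_length_clocks,
                               double pixel_clock_hz) {
    double row_time_s = line_length_clocks / pixel_clock_hz;
    return coarse_rows * row_time_s;
}
```

With, say, a 16-bit row counter, 3000-clock rows, and a 180 MHz pixel clock, the ceiling is about 1.1 s; doubling the line length doubles that ceiling but halves the readout rate, which is exactly the mode switch the extra driver code has to manage.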
]]></description><pubDate>Tue, 25 Apr 2017 19:15:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=14197042</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=14197042</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=14197042</guid></item><item><title><![CDATA[New comment by quantumet in "Google Fonts Redesigned"]]></title><description><![CDATA[
<p>You can select a size for a given font by hovering over it to reveal the size slider and other options (desktop version at least), and then applying the size to all.</p>
]]></description><pubDate>Tue, 14 Jun 2016 17:49:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=11903840</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=11903840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11903840</guid></item><item><title><![CDATA[New comment by quantumet in "Google Launches Mortgage Shopping Tool in California"]]></title><description><![CDATA[
<p>Once you have a list, there's a dropdown to filter by type of loan (defaults to 'All').</p>
]]></description><pubDate>Mon, 23 Nov 2015 19:53:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=10616840</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=10616840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10616840</guid></item><item><title><![CDATA[Developing for Android: Introduction]]></title><description><![CDATA[
<p>Article URL: <a href="https://medium.com/google-developers/developing-for-android-introduction-5345b451567c">https://medium.com/google-developers/developing-for-android-introduction-5345b451567c</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=9687533">https://news.ycombinator.com/item?id=9687533</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 09 Jun 2015 18:09:00 +0000</pubDate><link>https://medium.com/google-developers/developing-for-android-introduction-5345b451567c</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=9687533</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9687533</guid></item><item><title><![CDATA[New comment by quantumet in "Concise electronics for geeks (2010)"]]></title><description><![CDATA[
<p>For those looking for a less-concise but more comprehensive reference, The Art of Electronics was just released in its 3rd edition!<p><a href="http://artofelectronics.net/" rel="nofollow">http://artofelectronics.net/</a><p>It has a lot of relevant new content (references to Arduinos, etc.), which is great since the last edition was from 1989.<p>And while the basics of transistors, etc., haven't changed since then, the kinds of circuits that matter most arguably have (A/D conversion, general interfacing with digital logic, things that didn't have simple IC solutions in '89, etc.).</p>
]]></description><pubDate>Wed, 03 Jun 2015 21:28:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=9655993</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=9655993</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9655993</guid></item><item><title><![CDATA[New comment by quantumet in "Ten Years of Git: An Interview with Linus Torvalds"]]></title><description><![CDATA[
<p>Eh, I mildly disagree on web site review. Perhaps not GitHub's pull request/review model, but big projects like Android and Chromium do all their review on web interfaces.</p>
]]></description><pubDate>Mon, 06 Apr 2015 19:48:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=9330299</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=9330299</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9330299</guid></item><item><title><![CDATA[New comment by quantumet in "Show HN: Camera2 API Support on Galaxy S6 and HTC One M9"]]></title><description><![CDATA[
<p>As a minor correction, the N5 front camera is a LIMITED, not a LEGACY device.<p>It doesn't quite do the per-frame synchronized controls required for FULL like the main back-facing camera, but otherwise offers manual control and high-speed output.</p>
]]></description><pubDate>Tue, 17 Mar 2015 00:09:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=9215437</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=9215437</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=9215437</guid></item><item><title><![CDATA[New comment by quantumet in "Android: I don't need your permission"]]></title><description><![CDATA[
<p>So does Android:<p><a href="http://developer.android.com/training/managing-audio/audio-focus.html" rel="nofollow">http://developer.android.com/training/managing-audio/audio-f...</a><p><a href="http://android-developers.blogspot.com/2013/08/respecting-audio-focus.html" rel="nofollow">http://android-developers.blogspot.com/2013/08/respecting-au...</a></p>
]]></description><pubDate>Tue, 16 Dec 2014 22:39:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=8760037</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=8760037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8760037</guid></item><item><title><![CDATA[New comment by quantumet in "Rosetta finds comet's water vapour to be significantly different from Earth's"]]></title><description><![CDATA[
<p>Quite a lot; see for example:<p><a href="http://en.wikipedia.org/wiki/Graphical_timeline_of_the_universe" rel="nofollow">http://en.wikipedia.org/wiki/Graphical_timeline_of_the_unive...</a></p>
]]></description><pubDate>Thu, 11 Dec 2014 00:33:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=8732570</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=8732570</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8732570</guid></item><item><title><![CDATA[New comment by quantumet in "What Do Blind People Actually See?"]]></title><description><![CDATA[
<p><a href="http://en.wikipedia.org/wiki/Aura_(symptom)" rel="nofollow">http://en.wikipedia.org/wiki/Aura_(symptom)</a><p>It's a grab bag of weird perceptual effects that can precede migraine attacks (and apparently epileptic seizures), and they can be very disconcerting if you don't know about them.<p>In my case (I've only had them once or twice), I experienced the same visual field dissociation as you did (in addition to the scintillating scotoma thing) - I could see individual things in my field of view just fine, but I could not combine them into a coherent whole. Like looking at a cubist artwork - each part is reasonable, but the whole doesn't fit together, or has parts missing (not holes, just discontinuities). Of course, very shortly after that started, the migraine headache arrived.<p>Might want to consider talking to a doctor about it; while AFAIK auras are harmless if a bit scary, if they aren't associated with a known issue like migraine headaches, it might be good to find out where they're coming from. Not that anyone really seems to understand the mechanisms involved.</p>
]]></description><pubDate>Wed, 13 Aug 2014 23:52:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=8175518</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=8175518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8175518</guid></item><item><title><![CDATA[New comment by quantumet in "Reflected hidden faces in photographs revealed in pupil"]]></title><description><![CDATA[
<p>Photon shot noise (<a href="http://en.wikipedia.org/wiki/Shot_noise#Optics" rel="nofollow">http://en.wikipedia.org/wiki/Shot_noise#Optics</a>) is as fundamental as diffraction - it's a property of light, not a property of the sensor.<p>If your sensor counts a mean of 100 photons per pixel, then you'll see shot noise with a standard deviation of 10 photons in each of those pixels, for a signal-to-noise ratio of 10. If you quadruple your pixel size and now measure 400 photons per pixel, then your SNR goes up to 20.<p>This is why bigger sensors (more captured photons for the same light level and exposure time) are fundamentally better at image capture.</p>
]]></description><pubDate>Fri, 27 Dec 2013 19:15:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=6972113</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=6972113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=6972113</guid></item><item><title><![CDATA[New comment by quantumet in "Great Works in Programming Languages (2004)"]]></title><description><![CDATA[
<p>I think the master's thesis of Claude Shannon (of information theory fame) beats out all others in the field of computer science.<p>"A Symbolic Analysis of Relay and Switching Circuits", 1938, proves that you can use electrical switches to do Boolean algebra! He basically invented the digital computer, as a master's thesis.</p>
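The core of the thesis fits in a few lines: model a closed switch as 1 and an open one as 0; then switches in series compute AND (current flows only if both are closed), switches in parallel compute OR, and any Boolean formula becomes a switching network. A sketch (the function names are mine, not Shannon's notation):

```c
#include <assert.h>

/* A closed switch is 1, an open switch is 0. */
static int series(int a, int b)   { return a && b; }  /* current flows iff both are closed */
static int parallel(int a, int b) { return a || b; }  /* current flows iff either is closed */

/* Composing networks gives arbitrary Boolean functions, e.g. XOR as two
 * parallel branches, each of two switches in series (one inverted). */
static int xor_network(int a, int b) {
    return parallel(series(a, !b), series(!a, b));
}
```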
]]></description><pubDate>Thu, 13 Jun 2013 08:27:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=5872855</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=5872855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=5872855</guid></item><item><title><![CDATA[New comment by quantumet in "What does randomness look like?"]]></title><description><![CDATA[
<p>The Poisson distribution is just one probability distribution; there are several others for situations where the events are correlated, or have other properties. Half the fun of probability is figuring out which distribution is the right one to apply to the question at hand. So if your measurements don't match a Poisson distribution, it doesn't mean they're not random - they could just be interdependent.<p>So yes, for a Poisson process, the spread (standard deviation) is equal to the square root of the mean; as the number of events gets large, the Poisson distribution approaches the normal distribution, but the relationship between the standard deviation and the mean continues to hold.</p>
]]></description><pubDate>Sat, 22 Dec 2012 00:36:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=4955289</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=4955289</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=4955289</guid></item><item><title><![CDATA[New comment by quantumet in "Restoration of defocused and blurred images"]]></title><description><![CDATA[
<p>No, there's no real worry of that, barring some serious revolution in deconvolution. Existing techniques can only deal with blurs of certain kinds, and theory suggests that the standard blurs used for faces etc. are in fact not reversible at all.<p>A short explanation uses the Fourier transform. Gaussian blurs, and photographic camera blurs (with some simplifying assumptions), are convolutions: they apply a 'blur kernel' to each pixel of the image, spreading energy from that pixel to the neighboring pixels based on the shape of the kernel. Visualizing the outcome of a convolution is not straightforward for complex scenes, but here the Fourier transform helps.<p>Viewed in the frequency domain, convolution turns into multiplication: the spectra of the image and the blur kernel are simply multiplied frequency-by-frequency. So you can see directly where information is lost in the final image, by seeing which frequencies of the blur kernel are zero - at those frequencies, the output image has lost all original information.<p>Deconvolution techniques all try to restore the original image; in theory all you need to do is take the blurred image's spectrum and divide it by the blur kernel's spectrum to recover the original. Assuming there's no noise, etc., in the process, this works fine - except where the kernel's spectrum is zero. Division by zero doesn't get you very far, and the information there is truly lost.<p>With camera handshake, the blur kernel tends to be a squiggle (the path of the camera motion), and its frequency spectrum is reasonably well behaved - there may be no zeros, or only a few. So reversing the blur is possible, maybe with some interpolation to cover up the nulls. Out-of-focus blur (bokeh) is much harder, since it tends to be much more uniform and smooth, like a Gaussian blur.<p>A Gaussian blur turns out to have a Gaussian frequency spectrum as well - which means the kernel has essentially no frequency content past a cutoff point, and the wider the blur, the lower that cutoff and the more information is lost. So deconvolution can't really work directly; you can make assumptions about what was there before (priors) to guide the reconstruction, but at some point that's about as good as pasting a random face from the internet onto the blurred head. The question is mostly where that cutoff is - how much can your knowledge of 'this is a face' make up for the zeroed-out information? In practice, you're probably pretty safe if you've blurred the face to the point where no features remain. If you're really worried about it, throwing in some random noise makes the problem even more hopeless.<p>So in short: we can probably do OK on camera shake and maybe out-of-focus bokeh. We can't recover from smooth uniform blurs like Gaussian blurs.</p>
]]></description><pubDate>Sun, 21 Oct 2012 17:34:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=4680188</link><dc:creator>quantumet</dc:creator><comments>https://news.ycombinator.com/item?id=4680188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=4680188</guid></item></channel></rss>