<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jinto36</title><link>https://news.ycombinator.com/user?id=jinto36</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 08:22:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jinto36" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jinto36 in "What Is WebTV?"]]></title><description><![CDATA[
<p>WebTV largely worked better than I think it often gets credit for, and I echo the sentiment elsewhere that it felt "futuristic" in a sense. I had a Windows desktop, but we came into possession of a Philips WebTV box since my father was in sales and his company had a catalog of sales incentive items you could get for meeting sales targets. I <i>really</i> did not want to use AOL like it seemed everyone else did, and the WebTV subscription was pretty reasonable compared to other options. We had the version with the hard drive and wireless keyboard. The hardware was really pretty decent: the keyboard was OK, I could print with it, and the feature I thought really set it apart was the ports for video capture. I don't think they ever implemented a good way to use that capability for video, but I used it to capture screenshots from our family camcorder and attach them to email, post them on the WebTV personal "website", or print them.<p>My early use of eBay was through WebTV, with both buying and selling, and it largely worked. You could browse webrings and read email from the couch!<p>Most of all, the dialing music was fantastic and I still listen to it once in a while: <a href="https://www.youtube.com/watch?v=brZYWcGgg4Y" rel="nofollow">https://www.youtube.com/watch?v=brZYWcGgg4Y</a><p>When free ad-supported dialup services came around (Juno, Bluelight, NetZero) I alternated between those and WebTV for a while. As pages moved away from simple text/table/image-based sites, page rendering quality unsurprisingly degraded. I think the version we owned had some Flash support, but it was slow.<p>Looking back on it, it's impressive how legible text was on a 20" CRT TV in the interface (through S-Video). It was more usable than some modern "smart" TV interfaces.</p>
]]></description><pubDate>Tue, 05 Mar 2024 04:12:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=39599359</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=39599359</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39599359</guid></item><item><title><![CDATA[New comment by jinto36 in "What Is WebTV?"]]></title><description><![CDATA[
<p>Speaking of WebTV music, I was a huge fan of the original Philips WebTV dialing music. Sometimes I would just let it loop in the background when I was doing housework. <a href="https://www.youtube.com/watch?v=brZYWcGgg4Y" rel="nofollow">https://www.youtube.com/watch?v=brZYWcGgg4Y</a> Perhaps unsurprisingly, a couple of years later I started listening to a lot of music in the Demoscene.</p>
]]></description><pubDate>Tue, 05 Mar 2024 03:45:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=39599189</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=39599189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39599189</guid></item><item><title><![CDATA[New comment by jinto36 in "Mumbai embraces its booming flamingo population"]]></title><description><![CDATA[
<p>Japanese culture also has a number of "ritual purity" elements, such as taking off outdoor shoes in the genkan (vestibule) before stepping into the main part of a house, and ritual cleansing with water when visiting a shrine. I understand that Japan is far more economically advantaged than India, but even so, the difference in how clean things are between New York City (for example) and any major city in Japan is quite stark.
[I'm from the US but lived in Japan for a year. Haven't been to India but I know a lot of people from there.]</p>
]]></description><pubDate>Sat, 21 Jan 2023 12:43:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=34466062</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=34466062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34466062</guid></item><item><title><![CDATA[New comment by jinto36 in "Lights have been on at this school for a year because no one can turn them off"]]></title><description><![CDATA[
<p>Reminds me of this article:
"1980s computer controls GRPS heat and AC" About a single Amiga controlling HVAC systems for 19 public schools.
<a href="https://www.woodtv.com/news/grand-rapids/1980s-computer-controls-grps-heat-and-ac/" rel="nofollow">https://www.woodtv.com/news/grand-rapids/1980s-computer-cont...</a></p>
]]></description><pubDate>Fri, 20 Jan 2023 13:54:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=34453127</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=34453127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34453127</guid></item><item><title><![CDATA[New comment by jinto36 in "I've procrastinated working on my thesis for more than a year"]]></title><description><![CDATA[
<p>One thing that works for me is to just start writing figure legends. That seems easy and doesn't feel like "writing the paper". I end up basically writing the results section for that figure or table, so I cut and paste most of it into the actual results later and keep a summarized version for the figure legend.
By coincidence, I'm trying to submit a PhD thesis today (Genetics/Computational Biology).</p>
]]></description><pubDate>Fri, 20 Jan 2023 13:37:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=34452925</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=34452925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34452925</guid></item><item><title><![CDATA[New comment by jinto36 in "Fourth membrane is discovered in the brain"]]></title><description><![CDATA[
<p>A lot of this is due to advances in imaging, particularly two-photon microscopy, that enable acquiring images (and video to some extent) in live subjects, e.g. mice and rats, below the top surface of tissue. For neuroscience, you can either thin the skull or install a window, and image more than 100 microns into the brain, in live animals that are anesthetized or otherwise immobilized. Here's a nice article with an overview of two-photon for this purpose (direct PDF link): <a href="https://www.hifo.uzh.ch/research/helmchen/publication/helmchen2005_natmet.pdf" rel="nofollow">https://www.hifo.uzh.ch/research/helmchen/publication/helmch...</a> That paper was published a while ago now, but the basics are still relevant. Two-photon played a big part in the initial identification of the "glymphatic system" as well. The Nedergaard lab does <i>a lot</i> of imaging, and they've built custom microscopes as well. (Source: I used to work in their department, and I'm doing my PhD work in the same building. Edit: in Rochester, not at KU, though I visited there when they were first outfitting the lab space in Copenhagen.)</p>
]]></description><pubDate>Sat, 07 Jan 2023 02:26:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=34284431</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=34284431</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34284431</guid></item><item><title><![CDATA[New comment by jinto36 in "Thanks to DALL-E, the race to make artificial protein drugs is on"]]></title><description><![CDATA[
<p>Alphafold <i>is</i> a big improvement, but a structure of a single protein in isolation isn't representative of how these things exist in vivo. Binding substrates can modify protein shapes, and proteins often function in complexes, which can form some pretty complex arrangements, where positioning is critical to function.
I think training set bias is an issue to some extent, even with single-protein prediction. For example, I've been looking at a family of transcription factors, and most of the resolved crystal structures are of just the DNA-binding domain, crystallized with the substrate (DNA) bound. Alphafold predictions for homologous proteins that haven't been experimentally resolved but share a decent amount of sequence similarity thus have high confidence for the DNA-binding domain, but lower confidence in other parts of the protein, even if they're "ordered" regions (e.g. helices and sheets rather than floppy loops), and all the predictions for the DNA-binding domain look like the bound-to-DNA conformation. So we don't have a good way yet to predict different "modes" of a protein that has interaction-dependent conformations.
Technically, with Alphafold, if you were modelling a protein that had similar structures experimentally resolved both with and without substrates bound, but were interested in sampling just one of those states, you could customize your sequence database to include one or the other, which would be mostly manual curation.<p>I've been testing out the multimer (protein complex) mode of Alphafold recently, to see if it could predict interactions for a family of proteins where some members are known to form complexes, but others previously were found not to form complexes, at least when expressed in vitro rather than in vivo. So far I've found that if you try to throw two completely unrelated proteins together, they won't be modeled with any contacts, but for the ones in the family I'm interested in, there's always at least one (of the five models per run) that has them interacting such that there's something that looks like a real DNA-binding domain. For the latter case, it's presently hard to know based just on Alphafold output if it's a structure that could actually form, or if it's just due to bias in the training data, with perhaps the rest of the structured regions of the protein ending up in unrealistic conformations due to less training information for those parts.<p>TL;DR Alphafold results are biased by existing experimentally resolved structures, and not based on simulating physics, so proteins (or parts of proteins) that don't have good coverage in existing experimental data are not going to be predicted with high confidence.</p>
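For anyone who wants to poke at those confidence values directly, here's a minimal Python sketch of pulling per-residue pLDDT out of an AlphaFold-style model file, which stores pLDDT in the B-factor column of each ATOM record; the three-line PDB fragment below is a made-up example, not a real model.

```python
# Minimal sketch: read per-residue pLDDT confidence from an AlphaFold-style
# PDB file, which stores pLDDT in the B-factor column of each ATOM record.
# The PDB fragment below is a made-up example, not a real model.

def residue_plddt(pdb_text):
    """Map (chain, residue number) -> pLDDT, taken from CA atoms."""
    scores = {}
    for line in pdb_text.splitlines():
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            chain = line[21]
            resnum = int(line[22:26])
            scores[(chain, resnum)] = float(line[60:66])  # B-factor = pLDDT
    return scores

example = """\
ATOM      1  N   MET A   1      11.104   6.134  -6.504  1.00 35.20           N
ATOM      2  CA  MET A   1      11.639   6.071  -5.147  1.00 35.20           C
ATOM      9  CA  LYS A   2      12.503   7.321  -2.577  1.00 88.40           C
"""

scores = residue_plddt(example)
low_confidence = [res for res, v in scores.items() if v < 70]
print(low_confidence)  # prints [('A', 1)]
```

Filtering to regions below a cutoff like 70 is one quick way to see which parts of a prediction are in the poorly covered territory described above.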
]]></description><pubDate>Wed, 04 Jan 2023 14:46:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=34246066</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=34246066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34246066</guid></item><item><title><![CDATA[New comment by jinto36 in "Get Running with Couch to 5K"]]></title><description><![CDATA[
<p>Same for me, I try to make it meditative in a way. I don't use earbuds or anything, just try to continue existing and moving. Sometimes identify birds. Practice talking to myself in languages I'm studying.
In races it turns into body state monitoring and trying to determine when/if to adjust pace. Check in on form. On heart rate. Pick someone up ahead to try to catch. Count down to the next gel or electrolyte tab or water station.<p>I've done quite a few 2hr+ runs (half marathon to marathon) and 100 mile bike rides, and after a while for me it turns into a psychological game.<p>Trail running, if you haven't tried it, is much more "stimulating" I might say, depending on where you are there's a lot more focus and attention required to stay on the trail, to stay upright (slipping on mud, ending up in a river), to dodge trees and rocks as required, etc. I personally find there's much more of an aspect of being "in the zone" for trail running, and especially in races, when I miss a turn and have to stop and backtrack, it becomes really obvious that I was in some kind of "flow" state and then got pulled out of it.</p>
]]></description><pubDate>Sun, 01 Jan 2023 18:26:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=34209094</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=34209094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34209094</guid></item><item><title><![CDATA[New comment by jinto36 in "VHS-Decode – Software defined VHS decoder"]]></title><description><![CDATA[
<p>There are other weird VHS-based formats, such as W-VHS, which was used to store HD-ish analog video on VHS tape: <a href="https://en.wikipedia.org/wiki/W-VHS" rel="nofollow">https://en.wikipedia.org/wiki/W-VHS</a><p>Alesis also developed an 8-track digital audio recorder based on VHS, the ADAT, which used S-VHS tapes and could record 20-bit, 48 kHz audio. ADAT was pretty popular in smaller studios and was great for the time before multi-gigabyte hard drives. <a href="https://en.wikipedia.org/wiki/ADAT" rel="nofollow">https://en.wikipedia.org/wiki/ADAT</a></p>
]]></description><pubDate>Sun, 11 Dec 2022 15:22:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=33943900</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33943900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33943900</guid></item><item><title><![CDATA[New comment by jinto36 in "Synth pioneer and maker of ‘Popcorn’, Gershon Kingsley (2019)"]]></title><description><![CDATA[
<p>I've heard so many demoscene/8-bit covers of it at this point that I forgot the earlier versions had acoustic drums!<p>Some examples from scene.org:
<a href="https://files.scene.org/view/parties/2010/aaa10/music/13_popcorn-husman.mp3" rel="nofollow">https://files.scene.org/view/parties/2010/aaa10/music/13_pop...</a>
<a href="https://files.scene.org/view/music/groups/fusion_music_crew/cosmic_trance/fmc0401_saltpopcorn.mp3" rel="nofollow">https://files.scene.org/view/music/groups/fusion_music_crew/...</a></p>
]]></description><pubDate>Thu, 08 Dec 2022 15:03:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=33908557</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33908557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33908557</guid></item><item><title><![CDATA[New comment by jinto36 in "$7M lost on Amazon inventory, then bankrupt"]]></title><description><![CDATA[
<p>We haven't done Lightning/Thunderbolt, but the Youtube reviews include the different USB-C power modes. We also look at some aspects of safety, such as when overcurrent protection kicks in relative to the device rating, and whether it automatically resets after tripping or requires power cycling.</p>
]]></description><pubDate>Wed, 07 Dec 2022 03:13:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=33890230</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33890230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33890230</guid></item><item><title><![CDATA[New comment by jinto36 in "Adobe at 40"]]></title><description><![CDATA[
<p>I haven't tried Affinity Photo but Affinity Designer has been a pleasant surprise. Worth the non-recurring $50 investment. I want to like Inkscape, but there are just too many quirks still. PDF output from Designer so far always looks great. My primary use case is putting together figures for scientific papers.</p>
]]></description><pubDate>Tue, 06 Dec 2022 03:04:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=33875382</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33875382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33875382</guid></item><item><title><![CDATA[New comment by jinto36 in "$7M lost on Amazon inventory, then bankrupt"]]></title><description><![CDATA[
<p>Re: Anker chargers, a few perform pretty well but many are kind of middling. A friend has been doing a lot of reviews of these on Youtube (<a href="https://www.youtube.com/c/AllThingsOnePlace" rel="nofollow">https://www.youtube.com/c/AllThingsOnePlace</a>) and we started compiling test data (focusing on power conversion metrics such as efficiency and power factor, summarized into a single "power quality score" on a 0-200 scale) in a database at <a href="https://pqs.app" rel="nofollow">https://pqs.app</a>.<p>For example, for the Anker power adapters we've tested: <a href="https://pqs.app/devices?categoryId=1&search=anker" rel="nofollow">https://pqs.app/devices?categoryId=1&search=anker</a>. We're also trying to turn it into a business.</p>
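For a sense of what those metrics are, here's a toy Python sketch: efficiency and power factor are the standard ratios, but the combined 0-200 score shown is a hypothetical equal weighting for illustration, not the actual pqs.app formula.

```python
# Toy sketch of charger power metrics. Efficiency and power factor are the
# standard ratios; the combined 0-200 score is a hypothetical equal
# weighting for illustration, not the actual pqs.app scoring formula.

def efficiency(dc_out_watts, ac_in_watts):
    """DC power delivered divided by AC real power drawn (0..1)."""
    return dc_out_watts / ac_in_watts

def power_factor(real_watts, volts_rms, amps_rms):
    """Real power divided by apparent power (0..1)."""
    return real_watts / (volts_rms * amps_rms)

def toy_quality_score(eff, pf):
    """Hypothetical: scale each metric to 0-100 and add them."""
    return round(100 * eff + 100 * pf)

# Example: adapter delivering 18 W DC while drawing 20 W real power
# at 120 V RMS and 0.35 A RMS (42 VA apparent).
eff = efficiency(18.0, 20.0)          # 0.90
pf = power_factor(20.0, 120.0, 0.35)  # ~0.48
print(toy_quality_score(eff, pf))     # prints 138
```

A low power factor doesn't cost a home user much directly, but it loads the grid with apparent power the meter mostly doesn't bill, which is one reason it's worth scoring at all.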
]]></description><pubDate>Sat, 03 Dec 2022 15:30:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=33844172</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33844172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33844172</guid></item><item><title><![CDATA[New comment by jinto36 in "Show HN: A Japanese learning app focused on efficient vocab/grammar acquisition"]]></title><description><![CDATA[
<p>Akebi for Android is pretty good. For a while I used the Nintendo DS Japanese dictionary (Rakubiki Jiten) for the ability to do stylus-based character recognition, which was less frustrating to use than the Windows IME pad feature. Akebi works reasonably well for that function.<p>I only just started learning Chinese, so I haven't looked at Pleco yet and can't really compare them.<p>I'll agree with some other comments that once you get into the area bordering intermediate and advanced proficiency there are many fewer resources, but by then you have a better idea of what you don't know, and can use your existing base to sort of plot a path to fill those gaps.</p>
]]></description><pubDate>Fri, 02 Dec 2022 04:38:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=33826851</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33826851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33826851</guid></item><item><title><![CDATA[New comment by jinto36 in "Commodore 128D Computer (2001)"]]></title><description><![CDATA[
<p>I pulled a 128D out of a dumpster (literally) around 2002, in the US. I was always a bit confused about whether it was actually a 128D or a 128DCR, since the articles I checked at the time said the 128D wasn't sold in the US, but the nameplate does say "128D" on it. Newer articles mention that the DCR had a metal case whereas the original 128D was plastic, so now I know mine is the DCR. I've never seen another 128D in person, so it was really difficult to tell.</p>
]]></description><pubDate>Fri, 02 Dec 2022 04:19:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=33826719</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33826719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33826719</guid></item><item><title><![CDATA[New comment by jinto36 in "Commodore 128D Computer (2001)"]]></title><description><![CDATA[
<p>Ten years ago I went through my Apple collection to remove those RTC batteries when people started reporting that they were exploding. I have an Apple "set top box" prototype; it didn't have a ROM that allowed it to actually function as a standalone system, but it was still a really neat artifact. The battery had exploded and turned the inside of the case into a rusty mess. That was a painful moment of realization. I also lost a Mac Classic to a bursting battery, but I managed to get the batteries out of my other systems in time. I also have a Bandai Pippin, and since that's basically a Mac it's got the same RTC battery; luckily it wasn't all that hard to disassemble non-destructively to remove it. Those batteries did tend to last a <i>really</i> long time, and I imagine in some systems the clock is probably still keeping time until the moment the battery melts.<p>If you've had old Apple systems on the shelf for a long time (or an Apple IIgs, which also had a battery-backed RTC), get those original batteries out!</p>
]]></description><pubDate>Fri, 02 Dec 2022 04:05:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=33826637</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33826637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33826637</guid></item><item><title><![CDATA[New comment by jinto36 in "Consider working on genomics"]]></title><description><![CDATA[
<p>Protein structure prediction was a huge deal, which is why AlphaFold received so much fanfare. It is actually pretty good. The next step is to predict where multi-protein complexes would interact, which is not as simple as predicting the structure of two proteins independently and then trying to fit them together like a puzzle, because the interactions can also change the structure.
While it's not as hard as it used to be to experimentally determine the protein targets of, for example, a protein kinase, it's still not a routine or cheap experiment, and doing that for the many thousands of such proteins, across different conditions (stress, presence of co-factors, etc.) and in different organisms, would be rather a lot of work. Something like AlphaFold that makes reasonable predictions and can be used to help you focus on what's most likely to be relevant to your disease or process of interest helps quite a bit.<p>There's also more need for integrating "multi-omics" data, where you have data from multiple assays (gene expression, phospho-proteomics, lipidomics, epigenetics, small RNA expression, etc.) with the goal of somehow combining all these different assay results from various levels of gene regulation, to get closer to figuring out the actual mechanism behind complex processes. Building on that, we can also do single-cell multi-omics to some extent, where you have results from different sequencing-based assays at the level of the same <i>individual cell</i>. This is still pretty limited, but it's exciting and advancing pretty quickly.
This will eventually be combined with things like spatial transcriptomics, which is useful for mapping out what's going on in heterogeneous tissue samples like tumors, so we'll end up with spatial single-cell multi-omics. At that point you're looking at 1) some quantitative trait for multiple genes/loci/molecules, often 10k+ such features at the same time per assay, 2) multiple assays, such as DNA accessibility and gene expression, in 3) single cells, of which you might have 10k in a single sample, 4) across a physical tissue sample where individual cells are spatially mapped, and where you probably want to figure out how cells might influence the state of those around them, and 5) in multiple different samples, where you might want to compare disease vs control, or look for correlation to heterogeneity of results within one group.<p>There's a lot of public data already available for single-cell gene expression projects if you want to get a feel for how these things are structured and how (passable but not amazing) the existing tooling is. One of the main repositories for this data is the NCBI's SRA <a href="https://www.ncbi.nlm.nih.gov/sra" rel="nofollow">https://www.ncbi.nlm.nih.gov/sra</a> but you'll quickly note that searching and browsing is not as easy as you might expect, because one of the main limiting factors in bioinformatics is how bad everyone is at keeping terminology consistent. For many bioinformaticians, a majority of time is spent in the data cleaning phase. It's awful. Sometimes the experimental parameters make it into SRA or GEO, but sometimes you have to read through the associated paper to pull that out.
Often it's only large consortium projects like The Cancer Genome Atlas (TCGA) or the Genotype-Tissue Expression project (GTEx), which have enough funding for staff dedicated to data management, that end up publishing datasets that are easy to "consume" without having to jump through a whole bunch of hurdles to figure out how the data was produced.<p>I have a BS/MS in bioinformatics and I'm presently a PhD candidate in genetics and computational biology, defending in February.</p>
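To make the data-cleaning pain concrete, here's a minimal Python sketch of the kind of terminology harmonization involved: mapping free-text sample labels to a controlled vocabulary. The label variants and canonical terms here are invented for illustration, not from any real repository.

```python
# Toy sketch of harmonizing free-text sample metadata to a controlled
# vocabulary, the kind of cleanup public repository data often needs.
# The label variants and canonical terms here are invented examples.

CANONICAL = {
    "liver": "liver",
    "hepatic tissue": "liver",
    "pbmc": "peripheral blood mononuclear cell",
    "pbmcs": "peripheral blood mononuclear cell",
    "peripheral blood mononuclear cells": "peripheral blood mononuclear cell",
}

def harmonize(label):
    """Normalize case/whitespace, then map to a canonical term if known."""
    key = " ".join(label.strip().lower().split())
    return CANONICAL.get(key, "UNMAPPED:" + key)

samples = ["  Liver ", "PBMCs", "hepatic tissue", "tumour core"]
print([harmonize(s) for s in samples])
```

The real work is in the long tail of `UNMAPPED` labels, which usually means reading the associated paper, exactly as described above.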
]]></description><pubDate>Sat, 19 Nov 2022 19:54:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=33673954</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33673954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33673954</guid></item><item><title><![CDATA[New comment by jinto36 in "The truffle industry is a scam"]]></title><description><![CDATA[
<p>In the US I can go to the supermarket and get "truffle macaroni and cheese" in a box for $5, because everyone knows that truffles are this rare, mysterious, and luxurious thing, and so incorporating them (or their essence) into relatively inexpensive food items allows companies to tack a small amount onto the price, but gives the consumers the feeling that even this luxurious thing is available to them.<p>Similar to why people might knowingly buy knock-off "designer" items.</p>
]]></description><pubDate>Sat, 19 Nov 2022 15:08:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=33670758</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33670758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33670758</guid></item><item><title><![CDATA[New comment by jinto36 in "Why Meta’s latest large language model survived only three days online"]]></title><description><![CDATA[
<p>It even generated indicators for references, but not the references themselves. I could see it being useful if it was some kind of system that could basically synthesize wikipedia articles from the literature for topics that don't already have a nice review or other sort of summary, but references to actual scholarly works are absolutely essential for that to be useful. I don't know how taking random sentences out of context that happen to have the same theme, without any sort of actual sources, would help anyone aside from paper mills.</p>
]]></description><pubDate>Sat, 19 Nov 2022 14:47:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=33670538</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33670538</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33670538</guid></item><item><title><![CDATA[New comment by jinto36 in "Crypto Confidence Soars After CEO Defrauds Customers Just Like Real Bank"]]></title><description><![CDATA[
<p>That Sequoia letter aged amazingly poorly in just two months.
"...FTX—a company that may very well end up creating the dominant all-in-one financial super-app of the future. Nothing is a sure bet in crypto, but just the possibility that FTX could join—or even eclipse—the big four of American banking (JPMorgan Chase, Bank of America, Wells Fargo and Citibank) means it’s already valued at $32 billion."
More emphasis should have been on the "nothing is a sure bet".</p>
]]></description><pubDate>Sat, 12 Nov 2022 16:04:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=33574274</link><dc:creator>jinto36</dc:creator><comments>https://news.ycombinator.com/item?id=33574274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33574274</guid></item></channel></rss>