<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gshulegaard</title><link>https://news.ycombinator.com/user?id=gshulegaard</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 08:42:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gshulegaard" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gshulegaard in "Debian decides not to decide on AI-generated contributions"]]></title><description><![CDATA[
<p>I don't know, it's a pretty big leap for me to consider AI-generated code hard to distinguish from human contributions.<p>AI is predictive at a token level.  I think the usefulness and power of this has been nothing short of astonishing; but this token prediction is fundamentally limiting.  The difference between human _driven_ and AI-generated code usually shows up in design.  Overly verbose and leaky abstractions, too many small abstractions that don't provide clear value, broad sweeping refactors when smaller, more surgical changes would have met the immediate goals, etc. are the hallmarks of AI-generated code in my experience.  I don't think those will go away until there is another generational leap beyond token prediction.<p>That said, I used human "driven" instead of human "written" somewhat intentionally.  I think AI, even in its current state, will become a revolutionary productivity-boosting developer aid (it already is to some degree).  Not dissimilar to other development tools like debuggers and linters, but with much broader usefulness and impact.  If a human uses AI in creating a PR, is that something to worry about?  If a contribution can pass review and related process checks, does it matter how much or how little AI was used in its creation?<p>Personally, my answer is no.  But there is a vast difference between a human using AI and an AI-generated contribution being able to pass as human.  I think there will be increasing degrees of the former, but the latter is improbable to impossible without another generational leap in AI research/technology (at least IMO).<p>---<p>As a side note, overuse of AI to generate code _is_ a problem I am currently wrangling with.  Contributors who over-rely on vibecoding are creating material overhead in code review and maintenance in my current role.  It's making maintenance, which was already a long-tail cost generally, an acute pain.</p>
]]></description><pubDate>Tue, 10 Mar 2026 18:14:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47326873</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=47326873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47326873</guid></item><item><title><![CDATA[New comment by gshulegaard in "Unconventional PostgreSQL Optimizations"]]></title><description><![CDATA[
<p>IIRC `MERGE` has been part of the SQL standard for a while, but Postgres opted against adding it for many years because its syntax is inherently non-atomic within Postgres's MVCC model.<p><a href="https://pganalyze.com/blog/5mins-postgres-15-merge-vs-insert-on-conflict" rel="nofollow">https://pganalyze.com/blog/5mins-postgres-15-merge-vs-insert...</a><p>This is somewhat of a personal preference, but I would just use `INSERT ... ON CONFLICT` and design my data model around it as much as I can.  If I absolutely need the more general features of `MERGE` and _can't_ design an alternative using `INSERT ... ON CONFLICT`, then I would take a bit of extra time to ensure I handle `MERGE` edge cases (failures) gracefully.</p>
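<p>For concreteness, here's a minimal sketch of the upsert pattern I mean. I'm using Python's stdlib sqlite3 driver purely for illustration, since SQLite (3.24+) happens to share the `INSERT ... ON CONFLICT` upsert syntax with Postgres; the `counters` table is hypothetical, standing in for whatever your data model is:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER NOT NULL)")

def record_hit(name):
    # Atomic upsert: insert a new row, or bump the counter if the key already exists.
    # `excluded` refers to the row that the INSERT tried (and failed) to insert.
    conn.execute(
        "INSERT INTO counters (name, hits) VALUES (?, 1) "
        "ON CONFLICT (name) DO UPDATE SET hits = hits + excluded.hits",
        (name,),
    )

record_hit("home")
record_hit("home")
record_hit("about")
print(dict(conn.execute("SELECT name, hits FROM counters")))
# {'home': 2, 'about': 1}
```

<p>The point of designing around this instead of `MERGE` is that the whole insert-or-update is a single statement with well-defined conflict semantics, so there is no race window to handle yourself.</p>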
]]></description><pubDate>Tue, 20 Jan 2026 19:22:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46696549</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=46696549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46696549</guid></item><item><title><![CDATA[New comment by gshulegaard in "A computer upgrade shut down BART"]]></title><description><![CDATA[
<p>I remember first moving to Philly and getting a SEPTA Key and thinking, "This is dumb, it's literally just a MasterCard. Why can't I use my credit card like NY?"  Then a few years later they rolled out support for other bank cards and I immediately took my SEPTA Key out of my wallet.</p>
]]></description><pubDate>Sun, 07 Sep 2025 07:43:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45156219</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=45156219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45156219</guid></item><item><title><![CDATA[New comment by gshulegaard in "A computer upgrade shut down BART"]]></title><description><![CDATA[
<p>When I lived in Portland you technically _could_ use tap-to-pay, but I don't quite count it because Hop pass accrual meant that if you used TriMet regularly you needed a Hop card anyway.  Just out of curiosity I checked around, and it looks like they extended that functionality to regular bank cards around two years ago [1].  Which is awesome, as now the only reason to get a Hop pass is for people who qualify for reduced fares or are unbanked (which makes sense).<p>> The multi-stage turnstiles at the RER stations… ugh.<p>Ah yes, I had one of many "I look like the tourist I am" moments navigating those while visiting Versailles.<p>[1] <a href="https://www.reddit.com/r/Portland/comments/1awweix/trimet_expands_hop_fastpass_benefits_to/" rel="nofollow">https://www.reddit.com/r/Portland/comments/1awweix/trimet_ex...</a></p>
]]></description><pubDate>Sun, 07 Sep 2025 07:38:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45156199</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=45156199</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45156199</guid></item><item><title><![CDATA[New comment by gshulegaard in "A computer upgrade shut down BART"]]></title><description><![CDATA[
<p>And yet I included a favorable international comparison as well.</p>
]]></description><pubDate>Sat, 06 Sep 2025 18:38:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45151787</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=45151787</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45151787</guid></item><item><title><![CDATA[New comment by gshulegaard in "A computer upgrade shut down BART"]]></title><description><![CDATA[
<p>I don't know what your frame of reference is, but BART is above average for US public transit payment systems.<p>I've lived in the San Francisco Bay Area CA, Portland OR, and Philadelphia PA over the last 10 years.  All of those metros have comparable public transit payment systems with auto-loading special use cards and are at various stages of adopting support for tap to pay.  Honestly, within the US I can only think of NYC as having a better payment system as they were first movers on tap-to-pay adoption and it's basically fully adopted.<p>Internationally I think there is a larger range of experiences. I don't travel enough to properly gauge it, but I was in Paris in the last year and I don't think public transit payment was better.  Still had to acquire specialized  fare cards and navigate different payment systems between RATP and RER.  Honestly, SF Bay comes out slightly ahead of Paris if only because Clipper is unified between various transit options (BART, Bus, Ferry, CalTrain) IMO.</p>
]]></description><pubDate>Fri, 05 Sep 2025 19:30:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45142648</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=45142648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45142648</guid></item><item><title><![CDATA[New comment by gshulegaard in "Python has had async for 10 years – why isn't it more popular?"]]></title><description><![CDATA[
<p>I also think asyncio missed the mark when it comes to its API design.  There are a lot of quirks and rough edges to it that, as someone who was using `gevent` heavily before, strike me as curious and even anti-productive.</p>
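<p>To give a flavor of the kind of quirks I mean, here's a minimal sketch of two well-known sharp edges (these are my examples, not an exhaustive list):</p>

```python
import asyncio

async def fetch():
    await asyncio.sleep(0)
    return 42

# Sharp edge 1: calling a coroutine function does not run it -- it returns a
# coroutine object. Coming from gevent, where a monkey-patched call just
# blocks and returns, this is an easy mistake to make.
result = fetch()
print(type(result).__name__)  # coroutine
result.close()  # silence the "coroutine was never awaited" warning

# Sharp edge 2: asyncio.create_task() only keeps a weak reference to the
# task, so a fire-and-forget task can be garbage-collected mid-flight.
# The docs themselves recommend holding your own reference, e.g. in a set.
async def main():
    background = set()
    task = asyncio.create_task(fetch())
    background.add(task)                      # strong reference keeps it alive
    task.add_done_callback(background.discard)
    return await task

print(asyncio.run(main()))  # 42
```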
]]></description><pubDate>Tue, 02 Sep 2025 19:26:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45107865</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=45107865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45107865</guid></item><item><title><![CDATA[New comment by gshulegaard in "Is chain-of-thought AI reasoning a mirage?"]]></title><description><![CDATA[
<p>> but we know that reasoning is an emergent capability!<p>Do we, though?  There is widespread discussion and growing momentum of belief in this, but I have yet to see conclusive evidence of it.  That is, in part, why the subject paper exists...it seeks to explore this question.<p>I think the author's bias is bleeding fairly heavily into his analysis and conclusions:<p>> Whether AI reasoning is “real” reasoning or just a mirage can be an interesting question, but it is primarily a philosophical question. It depends on having a clear definition of what “real” reasoning is, exactly.<p>I think it's pretty obvious that the researchers are exploring whether or not LLMs exhibit evidence of _Deductive_ Reasoning [1].  The entire experiment design reflects this.  Claiming that they haven't defined reasoning and therefore cannot conclude or hope to construct a viable experiment is...confusing.<p>The question of whether or not an LLM can take a set of base facts and compose them to solve a novel/previously unseen problem is interesting, and it is what most people discussing emergent reasoning capabilities of "AI" are tacitly referring to (IMO).  Much like you can be taught algebraic principles and use them to solve for "x" in equations you have never seen before, can an LLM do the same?<p>To that end, I find this experiment interesting enough.  It presents a series of facts and then presents the LLM with tasks to see if it can use those facts in novel ways not included in the training data (something a human might reasonably deduce).  And their results and summary conclusions are relevant, interesting, and logically sound:<p>> CoT is not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching, fundamentally bounded by the data distribution seen during training. 
When pushed even slightly beyond this distribution, its performance degrades significantly, exposing the superficial nature of the “reasoning” it produces.<p>> The ability of LLMs to produce “fluent nonsense”—plausible but logically flawed reasoning chains—can be more deceptive and damaging than an outright incorrect answer, as it projects a false aura of dependability.<p>That isn't to say LLMs aren't useful; this is just exploring their boundaries.  To use legal services as an example, using an LLM to summarize or search for relevant laws, cases, or legal precedent is something it would excel at.  But don't ask an LLM to formulate a logical rebuttal to an opposing counsel's argument using those references.<p>Larger models and larger training corpora will expand that domain and make it more difficult for individuals to discern this limit; but just because you can no longer see a limit doesn't mean there is none.<p>And to be clear, this doesn't diminish the value of LLMs.  Even without true logical reasoning, LLMs are quite powerful and useful tools.<p>[1] <a href="https://en.wikipedia.org/wiki/Logical_reasoning" rel="nofollow">https://en.wikipedia.org/wiki/Logical_reasoning</a></p>
]]></description><pubDate>Thu, 14 Aug 2025 19:48:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44904817</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=44904817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44904817</guid></item><item><title><![CDATA[New comment by gshulegaard in "Asyncio: A library with too many sharp corners"]]></title><description><![CDATA[
<p>I feel like this started/greatly accelerated when Guido stepped down as BDFL.  Python is on a path where the essence of what made it popular (readable, well designed, productive) is being crushed under the weight of its popularity.  The language now feels bloated and needlessly complex in areas that were previously limited, but simple.<p>I recently chased down a bug where something accidentally became a class variable because a type hint was left off it by accident, and it clicked for me that Python is not the same language I loved at the start of my career.</p>
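<p>For anyone who hasn't hit this: the bug I'm describing reads like the dataclass flavor of the footgun, where a missing annotation silently changes semantics. A minimal sketch with a hypothetical `Config` class:</p>

```python
from dataclasses import dataclass, fields

@dataclass
class Config:
    host: str = "localhost"
    port = 8080  # annotation forgotten: NOT a dataclass field, just a class variable

# Only the annotated name became an instance field.
print([f.name for f in fields(Config)])  # ['host']

a = Config()
b = Config()
# `port` is shared class state, so mutating it leaks into every instance.
Config.port = 9090
print(a.port, b.port)  # 9090 9090
```

<p>Nothing errors, nothing warns; the presence or absence of a type hint flips a name between per-instance data and shared class state. That's the kind of complexity I mean.</p>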
]]></description><pubDate>Sun, 27 Jul 2025 02:32:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44698456</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=44698456</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44698456</guid></item><item><title><![CDATA[New comment by gshulegaard in "Major Flaws in 2025 Meta-Analysis on Fluoride and Children IQ Scores"]]></title><description><![CDATA[
<p>The only point I put forth is that public fluoridation of water supplies doesn't infringe absolutely on an individual's right to informed consent to treatment since there is at least 1 method (moving) available that an individual can utilize to opt out.  Others have pointed out that there may even be additional options available such as de-fluoridating yourself.<p>Did you have something on topic to contribute?<p>Or did you just want a soap box to voice your own opinions and I just happen to be collateral damage because you thought casting oblique aspersions about my qualifications would make you sound intelligent?</p>
]]></description><pubDate>Wed, 09 Apr 2025 21:30:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43638223</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=43638223</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43638223</guid></item><item><title><![CDATA[New comment by gshulegaard in "Major Flaws in 2025 Meta-Analysis on Fluoride and Children IQ Scores"]]></title><description><![CDATA[
<p>You could always move to a country which doesn't fluoridate their water supply.<p>But I am struggling to see how this has anything to do with a white paper highlighting and examining flaws in another white paper.</p>
]]></description><pubDate>Wed, 09 Apr 2025 16:42:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43634195</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=43634195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43634195</guid></item><item><title><![CDATA[New comment by gshulegaard in "EdgeDB is now Gel and Postgres is the future"]]></title><description><![CDATA[
<p>These days if you want a PostgreSQL based Data Warehouse both Citus and Timescale are extensions/PostgreSQL based databases I would consider before Redshift.<p>But even in the 9.4 days (~a decade ago) I was pushing Terabytes worth of analytics data daily through a manually managed Postgres cluster with a team of <=5 (so not that difficult).  Since then there have been numerous improvements which make scaling beyond this level even easier (parallel query execution, better predicate push down by the query planner, and declarative partitioning to name a few).  Throw something like Citus (extension) into the mix for easy access to clustering and (nearly) transparent table sharding and you can go quite far without reaching for specialized data storage solutions.</p>
]]></description><pubDate>Wed, 26 Feb 2025 05:54:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43181073</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=43181073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43181073</guid></item><item><title><![CDATA[New comment by gshulegaard in "KlongPy: High-Performance Array Programming in Python"]]></title><description><![CDATA[
<p>I just briefly reviewed the README docs; I believe KlongPy is a custom array language interpreted in Python (backed by NumPy).  The REPL block you are trying to interpret is the KlongPy REPL, not a Python one.<p>Embedding KlongPy in a Python block would look more like this (also from the docs):<p><pre><code>    from klongpy import KlongInterpreter
    import numpy as np

    data = np.array([1, 2, 3, 4, 5])
    klong = KlongInterpreter()
    # make the data NumPy array available to KlongPy code by passing it into the interpreter
    # we are creating a symbol in KlongPy called 'data' and assigning the external NumPy array value
    klong['data'] = data
    # define the average function in KlongPY
    klong('avg::{(+/x)%#x}')
    # call the average function with the external data and return the result.
    r = klong('avg(data)')
    print(r) # expected value: 3
</code></pre>
Note the calls to "klong('<some-str-of-klong-syntax>')".</p>
]]></description><pubDate>Mon, 02 Dec 2024 16:55:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=42297997</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=42297997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42297997</guid></item><item><title><![CDATA[New comment by gshulegaard in "TypedDicts are better than you think"]]></title><description><![CDATA[
<p>Let me try again, from the first link I shared:<p>> The __slots__ declaration allows us to explicitly declare data members, causes Python to reserve space for them in memory, and prevents the creation of __dict__ and __weakref__ attributes. It also prevents the creation of any variables that aren't declared in __slots__.<p>Emphasis:<p>> prevents the creation of __dict__ and __weakref__ attributes. It also prevents the creation of any variables that aren't declared in __slots__.<p>In short, if you create a slotted object with __slots__ it sends you down a fairly orthogonal object lifecycle path which does not create or use __dict__ in any way.  This obviously has drawbacks/limitations, like not being able to add new members to the object the way you can with a normal Python object.<p>From the second article:<p>> However, if you have __slots__, the descriptor is cached (which contains an offset to directly access the PyObject without doing dictionary lookup). In PyMember_GetOne, it uses the descriptor offset to jump directly where the pointer to the object is stored in memory. This will improve cache coherency slightly, as the pointers to objects are stored in 8 byte chunks right next to each other (I’m using a 64-bit version of Python 3.7.1). However, it’s still a PyObject pointer, which means that it could be stored anywhere in memory. Files: ceval.c, object.c, descrobject.c<p>Which I think addresses your concern about parent dict access...but I could also be misunderstanding your point.</p>
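<p>To make the divergent lifecycle concrete, a quick sketch (my own toy classes, not from either article):</p>

```python
class Plain:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Slotted:
    __slots__ = ("x", "y")  # fixed attribute layout; no per-instance __dict__

    def __init__(self, x, y):
        self.x, self.y = x, y

p, s = Plain(1, 2), Slotted(1, 2)

print(hasattr(p, "__dict__"))  # True
print(hasattr(s, "__dict__"))  # False -- the instance dict is never created

# Names outside __slots__ simply cannot exist on the instance:
try:
    s.z = 3
except AttributeError as exc:
    print("AttributeError:", exc)
```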
]]></description><pubDate>Thu, 17 Oct 2024 18:29:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=41872303</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=41872303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41872303</guid></item><item><title><![CDATA[New comment by gshulegaard in "TypedDicts are better than you think"]]></title><description><![CDATA[
<p>Nope, __slots__ exists explicitly as an alternative to __dict__:<p><a href="https://wiki.python.org/moin/UsingSlots" rel="nofollow">https://wiki.python.org/moin/UsingSlots</a><p>Whether or not the performance matters...well, that's somewhat subjective, since Python has a fairly high performance floor which makes performance concerns a bit of a, "Why are you doing it in Python?" question rather than a, "How do I do this faster in Python?" most of the time.  That said, it _is_ more memory efficient and faster on attribute lookup.<p><a href="https://medium.com/@stephenjayakar/a-quick-dive-into-pythons-slots-72cdc2d334e" rel="nofollow">https://medium.com/@stephenjayakar/a-quick-dive-into-pythons...</a><p>Anecdotally, I have used slotted objects to buy performance headroom before to delay/postpone a component rewrite.</p>
]]></description><pubDate>Thu, 10 Oct 2024 22:33:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=41804250</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=41804250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41804250</guid></item><item><title><![CDATA[New comment by gshulegaard in "Apple Watch Series 10"]]></title><description><![CDATA[
<p>I was in the same boat, but the 30 minute fast charging now makes me think that this actually might work.  Sleep with it on, wake up, pop it on the charger while you get ready, bam basically a full charge by the time you leave the house.<p>I don't wear an Apple watch at night (and I don't plan to upgrade to this one) but for the first time I think I could see how this might work for someone.</p>
]]></description><pubDate>Mon, 09 Sep 2024 21:52:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=41494420</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=41494420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41494420</guid></item><item><title><![CDATA[New comment by gshulegaard in "How Figma's databases team lived to tell the scale"]]></title><description><![CDATA[
<p>I have worked on teams that have both sharded and partitioned PostgreSQL ourselves, somewhat like Figma (in the Postgres 9.4-ish time frame), as well as teams that have utilized Citus.  I am a strong proponent of Citus and point colleagues in that direction frequently, but depending on how long ago Figma was considering this path, I will say that there were some very interesting limitations to Citus not that long ago.<p>For example, it was only 2 years ago that Citus allowed joining data in "local" tables with data retrieved from distributed tables (<a href="https://www.citusdata.com/updates/v11-0" rel="nofollow">https://www.citusdata.com/updates/v11-0</a>).  In the same major update, Citus enabled _any_ node to handle queries; previously all queries (whether or not they were modifying data) had to go through the "coordinator" node in your cluster.  This could turn into a pretty significant bottleneck, which had ramifications for your cluster administration and for choices about how to shape your data (what goes into local tables, reference tables, or distributed tables).<p>Again, huge fan of Citus, but it's not a magic bullet that makes it so you no longer have to think about scale when using Postgres.  It makes it _much_ easier and adds some killer features that push complexity down the stack such that it is _almost_ completely abstracted from application logic.  But you still have to be cognizant of it, sometimes even altering your data model to accommodate.</p>
]]></description><pubDate>Thu, 14 Mar 2024 19:47:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=39708203</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=39708203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39708203</guid></item><item><title><![CDATA[New comment by gshulegaard in "Amazon announces online car sales for the first time, starting with Hyundai"]]></title><description><![CDATA[
<p>Not everyone has a Costco membership or a Costco near them.  There also doesn't seem to be any indication that there is a pre-arranged pricing agreement for Amazon like there is for Costco.<p>Amazon is simply letting Hyundai dealerships list car inventory on Amazon; Amazon isn't selling the cars.</p>
]]></description><pubDate>Thu, 16 Nov 2023 21:13:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=38295621</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=38295621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38295621</guid></item><item><title><![CDATA[New comment by gshulegaard in "Psychedelic Mushrooms Hit the Market in Oregon"]]></title><description><![CDATA[
<p>While I agree with what you are saying in principle, I do want to point out that there is a massive difference between anecdotal evidence and _research_.  In a purely academic sense, it is not an unreasonable statement to make that there is no evidence.  In contrast, in the opioid case there existed scientific evidence of the highly addictive nature of the drugs that was more than just suppressed, they were outright _lied_ about by the pharmaceutical company behind the drug.<p>> Purdue Pharma created false advertising documents to provide doctors and patients illustrating that time-released OxyContin was less addictive than other immediate release alternatives. Furthermore, they sought out doctors who were more likely to prescribe opioids and encouraged them to prescribe OxyContin because it was safer. They did this because OxyContin quickly became a cash cow for the company. (<a href="https://oversight.house.gov/release/comer-purdue-pharma-and-sackler-family-hold-tremendous-responsibility-for-growing-opioid-epidemic/" rel="nofollow noreferrer">https://oversight.house.gov/release/comer-purdue-pharma-and-...</a>)<p>A degree of malfeasance in the same realm as Big Tobacco's denials of the risks and addictiveness of smoking:<p><a href="https://www.cbsnews.com/news/big-tobacco-kept-cancer-risk-in-cigarettes-secret-study/" rel="nofollow noreferrer">https://www.cbsnews.com/news/big-tobacco-kept-cancer-risk-in...</a>
<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2879177/" rel="nofollow noreferrer">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2879177/</a><p>Although it could perhaps be considered worse, since it occurred more recently in a theoretically more highly regulated market than mid-1900s tobacco.</p>
]]></description><pubDate>Tue, 24 Oct 2023 15:37:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=38000655</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=38000655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38000655</guid></item><item><title><![CDATA[New comment by gshulegaard in "HTTP/2 rapid reset attack impacting Nginx products"]]></title><description><![CDATA[
<p>Thanks for taking the time to explain the nuanced implementation difference.</p>
]]></description><pubDate>Fri, 13 Oct 2023 17:01:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=37872817</link><dc:creator>gshulegaard</dc:creator><comments>https://news.ycombinator.com/item?id=37872817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37872817</guid></item></channel></rss>