RSS feed readers don’t work as well as they should. Today I’d like to talk about issues from an application and server developer viewpoint; I’ll save some thoughts about usability in various use cases for another time.
In this post I’m going to suggest that, from a technical perspective, the first step to fixing some important issues is to think of feed readers as being composed of two layers of caches. That framing helps clarify why some of these problems have been hard to solve, and at the same time points toward some existing solutions that we can reuse.
But first: email
I think the most interesting recent development in email standards is JMAP. Despite the similarity of its name with IMAP, the widely used “Internet Message Access Protocol”, JMAP’s RFC8620 doesn’t actually say much about email. Instead it describes a generic “JSON Meta Application Protocol”:
This document specifies a protocol for clients to efficiently query, fetch, and modify JSON-based data objects, with support for push notification of changes and fast resynchronisation and for out-of-band binary data upload/download.
A separate standard, RFC8621, describes how to apply JMAP to sending and receiving email, and there are draft proposals for how to sync calendars and contacts as well.
Two hard problems
There are only two hard things in Computer Science: cache invalidation and naming things. —Phil Karlton
The reason I think JMAP is particularly interesting is that it sets out to solve cache invalidation problems for a useful range of applications—namely, those where:
- you have a data-set on a server,
- which you want to access from multiple clients,
- and you want the user experience to be as if the client kept the whole data set locally,
- but those clients may not have enough resources to do so.
JMAP defines ways for a client to cache a portion of the server’s data set, for the server to inform the client when that cache may be out of date, and for the client to transfer as little data as possible to get its cache back up to date.
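To make that concrete, here’s a minimal sketch of the synchronization pattern RFC8620 defines, applied to a hypothetical “Post” datatype. The request envelope is JMAP’s; the datatype, field names, and state strings are invented for illustration.

```python
# Sketch of JMAP incremental sync (RFC8620) for a hypothetical "Post" type.
# The envelope shape is JMAP's; "Post" and its fields are assumptions.

def build_changes_request(account_id, since_state):
    """Ask the server what changed since the last state string we saw."""
    return {
        "using": ["urn:ietf:params:jmap:core"],
        "methodCalls": [
            ["Post/changes",
             {"accountId": account_id, "sinceState": since_state},
             "c0"],
        ],
    }

def apply_changes(cache, args):
    """Update a local id->object cache from a Post/changes response."""
    for removed in args["destroyed"]:
        cache.pop(removed, None)
    # created/updated ids still need a Post/get call to fetch their bodies;
    # JMAP lets the client chain that fetch into the same HTTP request.
    to_fetch = args["created"] + args["updated"]
    return args["newState"], to_fetch

cache = {"p1": {"title": "old title"}, "p2": {"title": "deleted upstream"}}
new_state, to_fetch = apply_changes(cache, {
    "newState": "s43",
    "created": ["p3"], "updated": ["p1"], "destroyed": ["p2"],
})
```

The point is that the client only ever transfers ids for changed objects plus the bodies it actually needs, and the opaque state string lets the server decide how to track changes internally.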
I’ve spent a lot of time working with the X Window System network protocol, and learned some good techniques for dealing with high-latency or low-bandwidth networks. From my studies of X I also concluded that its extension mechanism was key to why protocol version 11 survived from 1987 and is still in widespread use today. In both areas, I’m happy to see that JMAP’s designers did an excellent job: they eliminated many round-trip delays and ensured that client and server can mutually agree on which extensions to use.
The JMAP-based specification for email doesn’t have to do much more than describe what data should be associated with an email or mailbox or whatever other objects are appropriate. It inherits from JMAP the ability to efficiently synchronize subsets of the server’s email store, allowing clients to work offline and on flaky internet connections.
Other applications could also build on JMAP. While calendar and contacts specifications are already underway, today I’d like to suggest another use case.
Client/server architectures for RSS feed readers
In my previous post, “WebSub plus Push”, I complained that developers who want to offer software that notices when a web page changes either need to spam the page with “have you changed yet?” requests or put up application-specific servers.
I’m particularly interested in this for native apps that monitor RSS[1] feeds to inform me when I have new stuff to read, such as a news article or a chapter of some serialized fiction or a page of a webcomic.
Some sites I care about have RSS feeds which are largely unusable because their servers treat the repeated “have you changed yet?” requests as an attack. And honestly, that’s fair… except that those sites don’t implement WebSub and desktop/mobile feed readers couldn’t use it even if they did, so there’s literally no other option today.
Getting publishers to implement WebSub is its own challenge which I’m going to set aside for now, because if most feed readers won’t use it then it’s hard to argue that publishers should put in the effort.
Given that the only way forward is for native apps to require a dedicated server that’s always reachable from the internet, what should a client/server architecture look like for a feed reader?
Various web-based feed readers offer APIs intended for mobile and desktop apps; Tiny Tiny RSS and NewsBlur are two examples. The Indieweb community’s answer is Microsub, and the motivation they give is, I think, fairly typical of the field:
Microsub provides a standardized way for reader apps to interact with feeds. By splitting feed parsing and displaying posts into separate parts, a reader app can focus on presenting posts to the user instead of also having to parse feeds. A Microsub server manages the list of people you’re following and collects their posts, and a Microsub app shows the posts to the user by fetching them from the server.
I have several issues with these protocols, but today I’m going to focus on one: None of these protocols do a great job at cache invalidation. Clients can’t tell which data they already have might still be valid, and also must poll the server to learn about changes. (There’s been discussion for Microsub about streaming and sync but apparently no consensus has been reached.)
As you’ve probably guessed, this is where I suggest that JMAP could be a good foundation for a client/server API for feed readers.
JMAP for feed readers
Most of the work needed to define a JMAP-based feed reader API is in deciding what kinds of objects need to be synchronized and what information needs to be attached to each kind of object.
Looking at the three APIs I mentioned above, a first cut at the list of object types is:

- categories,
- feeds,
- and posts.
NewsBlur might add an extension capability for their “intelligence classifiers”—and I think it’s super important that JMAP standardized a well-thought-out extension mechanism to allow for exactly this kind of situation—but otherwise I think these are the primary object types in most feed readers.
In existing practice, categories generally just permit setting a name and an optional parent category, allowing them to be organized into a tree—or, technically, a forest. The above APIs also provide read-only information about each category, such as the number of read and unread posts across all feeds in that category. I’d look at the way the JMAP email specification models Mailbox objects for a good starting point here.
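As a sketch of what that could look like, here are hypothetical FeedCategory objects forming a small forest, with a helper that rolls up the server-computed unread counts. Every field name here is an assumption for illustration, not part of any published specification.

```python
# Hypothetical FeedCategory objects, loosely modeled on JMAP Mail's Mailbox.
# All field names are assumptions; only name/parentId would be client-settable,
# the counts are server-computed and read-only.
categories = {
    "cat-fiction": {"id": "cat-fiction", "name": "Fiction",
                    "parentId": None, "unreadPosts": 0},
    "cat-comics": {"id": "cat-comics", "name": "Webcomics",
                   "parentId": "cat-fiction", "unreadPosts": 17},
}

def total_unread(categories, root_id):
    """Sum unread counts for a category and all of its descendants."""
    total = categories[root_id]["unreadPosts"]
    for cat in categories.values():
        if cat["parentId"] == root_id:
            total += total_unread(categories, cat["id"])
    return total
```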
Each feed needs to be created by setting a source URL and a category. Feeds have a variety of additional properties that are generally read-only because they come from the publisher. Similarly, posts generally would only be created by the server, from feed entries retrieved from the feed’s publisher, and the client would only be able to set state such as “I’ve read this post”.
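A subscription might then be created with a hypothetical Feed/set call, following JMAP’s standard create pattern from RFC8620; the “Feed” datatype and its properties are invented for illustration.

```python
# Hypothetical "Feed/set" call using JMAP's standard create pattern.
# The "Feed" datatype and its properties are assumptions for illustration.

def build_subscribe_request(account_id, source_url, category_id):
    return {
        "using": ["urn:ietf:params:jmap:core"],
        "methodCalls": [
            ["Feed/set", {
                "accountId": account_id,
                "create": {
                    # client-chosen creation id; the server's response maps
                    # it to the real id the server assigned
                    "new0": {"url": source_url, "categoryId": category_id},
                },
            }, "c0"],
        ],
    }

req = build_subscribe_request("a1", "https://example.com/feed.xml", "cat-comics")
```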
But there’s no reason that feeds and posts must be read-only! The exact same data model could work for posting to your own personal blog or social media accounts. Both NewsBlur and Tiny Tiny RSS have dedicated API endpoints for this purpose, and in the Indieweb ecosystem there’s a complementary specification called Micropub. But JMAP reminds us that there’s no operational difference between reading a post written by somebody else versus reading one you wrote on another device.
Microsub delegates the representation of feeds and posts to JF2, which is probably a good starting place for the similarly JSON-based JMAP. However, I’d pay careful attention to JF2’s extension mechanisms. If the JMAP server is going to be transforming RSS and Atom and Microformats-based feeds into a common representation, it really needs to pass information it doesn’t understand to the client, so that client development isn’t blocked waiting for servers to catch up with new extensions.
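Here’s a minimal sketch of that pass-through behavior, with an invented list of “known” fields standing in for whatever the real specification would define:

```python
# Sketch of normalizing a feed entry without dropping unrecognized fields.
# KNOWN_FIELDS and the "extensions" key are assumptions for illustration.
KNOWN_FIELDS = {"type", "name", "content", "published", "url"}

def normalize_entry(entry):
    known = {k: v for k, v in entry.items() if k in KNOWN_FIELDS}
    extensions = {k: v for k, v in entry.items() if k not in KNOWN_FIELDS}
    if extensions:
        # forward unrecognized properties so clients can adopt new feed
        # extensions without waiting for the server to learn about them
        known["extensions"] = extensions
    return known
```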
The last question a JMAP-based specification needs to answer is: What search queries do clients need to be able to ask the server to perform on their behalf? I’d start by looking through the searches offered by existing APIs and comparing them to the Email/query sections of the JMAP Mail specification.
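For example, a hypothetical Post/query call modeled on Email/query might combine a filter, a sort order, and a paging window. All of the names here are assumptions.

```python
# Hypothetical "Post/query" method call modeled on JMAP Mail's Email/query:
# a filter, a sort order, and a paging window. All names are assumptions.

def build_unread_query(account_id, category_id, limit=50):
    return ["Post/query", {
        "accountId": account_id,
        "filter": {"inCategory": category_id, "isRead": False},
        "sort": [{"property": "published", "isAscending": False}],
        "position": 0,   # paging window: first `limit` matches
        "limit": limit,
    }, "q0"]
```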
Cache coherency between server and publisher
So far I’ve been talking about how a feed reader application interacts with an internet server that acts as an agent for the reader, but there are some related topics to consider in how that server agent interacts with feed publishers.
People working on RSS-related tools usually assume that only the most recent posts are important. This is generally true for content such as news articles or social media posts. But it’s false often enough that many feed readers will save old posts that they’ve seen, even after those posts disappear from the publisher’s feed.[2]
That doesn’t work very well:

- It relies on your feed reader having started to watch the feed before the oldest post you might ever be interested in disappears.
- People who follow RSS feeds have just gotten used to sometimes seeing stale content, or duplicates of posts they already read.
On top of all that, I’ve been told by developers of two different cloud-based feed readers that this practice of saving the complete contents of old posts contributes significantly to their storage and hosting costs.
Fortunately, reframing these as caching problems points us in the right direction to solve them. We just need reliable mechanisms for:
- cache invalidation (because the publisher says it changed),
- cache eviction (because the client doesn’t have resources to keep it),
- and cache reloading (because either invalidation or eviction made a request miss the cache).
All three needs are addressed by RFC5005, “Feed Paging and Archiving”, optionally supplemented by RFC6721, “The Atom deleted-entry Element”. These specifications are from 2007 and 2012 respectively, and I hate how little-known they are.
Section 2 of RFC5005 covers “Complete Feeds”, where the publisher asserts that every post in the history of the feed is, in fact, present in the current feed document. In other words, the publisher is directing any feed reader which encounters such a feed to discard its cached copy of any post that was previously included but isn’t any more.
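Here’s a minimal sketch of how a feed reader might act on that assertion, using the fh namespace that RFC5005 defines; the cache shape and replacement policy are my assumptions about a reasonable client.

```python
# Handling an RFC5005 "complete feed": when <fh:complete/> is present, the
# document is authoritative, so any cached entry not in it has been deleted.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
FH = "http://purl.org/syndication/history/1.0"  # RFC5005 namespace

def sync_complete_feed(cache, feed_xml):
    """Replace the cache if the publisher marked this feed as complete."""
    root = ET.fromstring(feed_xml)
    if root.find(f"{{{FH}}}complete") is None:
        return cache  # not a complete feed; we can't safely prune anything
    # the feed is authoritative: cached posts missing from it were deleted
    return {entry.findtext(f"{{{ATOM}}}id"): entry
            for entry in root.iter(f"{{{ATOM}}}entry")}
```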
However, feeds aren’t usually very interesting unless they have enough posts that transferring all of them every time anybody checks for updates would quickly eat up everyone’s bandwidth. So section 4 defines “Archived Feeds”[3], which allow splitting the feed up into multiple linked documents. Archived feeds were carefully designed to transfer reasonably minimal information on changes. One of these days I’ll write up why every part of that design is necessary, because the RFC itself is unfortunately terse about rationale.
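Here’s a rough sketch of the catch-up walk a client might do: follow rel="prev-archive" links only until reaching an archive document it has already stored, since archive documents are stable once published and everything older can be skipped. The fetch callback stands in for an HTTP GET; the rest of the policy is my assumption about a reasonable client.

```python
# Sketch of catching up on an RFC5005 archived feed by walking
# rel="prev-archive" links back only as far as we need to.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def catch_up(current_xml, seen_archives, fetch):
    """Collect the current document plus any archive documents we lack."""
    docs = [current_xml]
    root = ET.fromstring(current_xml)
    while True:
        link = next((el for el in root.iter(f"{{{ATOM}}}link")
                     if el.get("rel") == "prev-archive"), None)
        if link is None or link.get("href") in seen_archives:
            break  # reached the oldest archive, or one we already have
        url = link.get("href")
        docs.append(fetch(url))      # fetch() stands in for an HTTP GET
        seen_archives.add(url)
        root = ET.fromstring(docs[-1])
    return docs
```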
When a post is added or edited, archived feeds work most efficiently if the publisher appends that post to the end of the feed, no matter how long ago its publication date is. Doing the same for deleting a post requires having something you can append that says “this was deleted”, rather than just silently making it disappear.
That’s where RFC6721 comes in with its deleted-entry element. Systems which don’t already keep track of deleted posts may find it easier to invalidate all archives going back to the point where a deleted post was first published, but that does increase the amount of data that has to be transferred in the (generally rare) situation where a post is deleted. So implementing deleted-entry is a good optimization for publishers that can track deletions, though it isn’t strictly required.
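Processing tombstones on the reader side is straightforward. This sketch drops cached posts named by at:deleted-entry elements, using the namespace RFC6721 defines; the cache shape is my assumption.

```python
# Applying RFC6721 tombstones: an <at:deleted-entry> in a newly appended
# page tells the reader to drop its cached copy of that post.
import xml.etree.ElementTree as ET

AT = "http://purl.org/atompub/tombstones/1.0"  # RFC6721 namespace

def apply_tombstones(cache, feed_xml):
    """Drop cached posts named by at:deleted-entry tombstones."""
    root = ET.fromstring(feed_xml)
    for tomb in root.iter(f"{{{AT}}}deleted-entry"):
        cache.pop(tomb.get("ref"), None)  # "ref" holds the deleted atom:id
    return cache
```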
With either complete or archived feeds, a feed reader can freely evict any information about a feed from its cache, reload it later from the original feed, and reliably detect when its cache is out of date.
RSS feed reader software could be more robust, less expensive to operate, and faster at responding to user input if we think of it simply as two layers of caches: one cache that’s always publicly reachable on the internet, and any number of second-tier caches running on the end-user’s devices. WebSub together with RFC5005, the Feed Paging and Archiving specification, gives us a foundation for the first layer; JMAP is a strong contender for the second layer.
My plea is for developers who are working in the RSS feed ecosystem to implement these standards, and for the rest of you to advocate for these standards with the developers of your favorite tools.
I’ve written several pieces of software related to RFC5005 which may help, including:
- a WordPress plugin (see also my WordPress Core issue)
- a jekyll-feed patch
- a very rough prototype feed reader
- predictable, a feed generator for sites with consistent update schedules
This blog is itself using my patched version of jekyll-feed, so you can also see an example of an archived feed at https://jamey.thesharps.us/feed.xml.
[1] RSS is the most widely-recognized of the mechanisms for web syndication feeds, so in this article I often use that term generically. Personally, I prefer Atom, and other formats like JF2 or the Microformats-based h-feed are valid alternatives as well. ↩
[2] Of course, there’s also a lot of content out there where it’s necessary to see old posts in order to understand newer ones. Many podcasts and webcomics fall in this category, for example. I have a lot to say about that but it’s a topic for a later post. ↩
[3] I’m skipping over RFC5005’s section 3, “Paged Feeds”, because it explicitly is not intended for the kind of cache coherency I’m talking about in this post, and can’t be used for that purpose. The specification says, “[C]lients SHOULD NOT present paged feeds as coherent or complete, or make assumptions to that effect,” and, “Unlike paged feeds, archived feeds enable clients to do this without losing entries.” ↩