feeds
I made a set of tools to pull RSS or Atom feeds from the internet and read them in the terminal. Before, I was using sfeed, a collection of tools for downloading RSS and Atom feeds, keeping track of read and unread items, viewing feeds and items in a TUI (Text UI) written with ncurses, and performing some action on an item when you select it and press enter in the TUI (like opening the media at its link in an appropriate media player). Unfortunately, it was unclear how to get the XML library it uses working on Gentoo, my Linux distro of choice. I thought rewriting it in Rust would give me the opportunity to make some system-level improvements, as well as produce a binary with fewer external dependencies.
Downloading
I started by trying to replace the tool sfeed uses to download feeds from the web. This tool downloads the RSS or Atom feed located at each URL in a list supplied by the user. The original downloader frustrates me because, by default, it downloads every feed sequentially. Because most of the time spent downloading a feed is spent waiting for a server to fulfill requests for data, updating all of my feeds could take minutes, especially on slower wifi. Even when using every thread on a system, the total runtime was still tied to the number of threads the CPU has, which could mean tens of seconds if one is subscribed to a large number of feeds or is not on a system with many threads.

With that in mind, I had already planned to use a combination of reqwest and tokio for the downloader. Reqwest is a commonly used Rust crate for making HTTP(S) requests. Using it is thankfully unremarkable: I set it up, feed it URLs from the list of feeds I'm subscribed to, and I get data back. Tokio is more interesting in this context. It lets me schedule many of these downloads simultaneously, on one or many threads. Tokio assumes that the work it runs will spend a significant amount of its runtime waiting for some other task to complete. That assumption maps nicely onto grabbing data from many different servers: most of the time is spent waiting for a server to send us data, followed by a relatively small amount of processing. As a result, the new downloader and parser take only as long as the slowest feed takes to download and parse, rather than the sum of several downloads and parsings as in sfeed's downloader/parser.
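To give a sense of the shape of this, here is a minimal sketch of concurrent feed downloads with reqwest and tokio. It is not the actual code from this project: the URL list is a hypothetical placeholder, and the error handling and output are simplified.

```rust
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Hypothetical list of subscribed feed URLs; in practice these would
    // come from a configuration file.
    let urls = vec![
        "https://example.com/feed.xml".to_string(),
        "https://example.org/atom.xml".to_string(),
    ];

    let client = reqwest::Client::new();

    // Spawn one task per feed so every request is in flight at once;
    // each task spends most of its time awaiting the server's response.
    let handles: Vec<_> = urls
        .into_iter()
        .map(|url| {
            let client = client.clone();
            tokio::spawn(async move {
                let body = client.get(url.as_str()).send().await?.text().await?;
                Ok::<_, reqwest::Error>((url, body))
            })
        })
        .collect();

    // Wait for every download; total wall-clock time is roughly the
    // slowest single feed, not the sum of all of them.
    for handle in handles {
        match handle.await? {
            Ok((url, body)) => println!("{}: {} bytes", url, body.len()),
            Err(e) => eprintln!("download failed: {}", e),
        }
    }

    Ok(())
}
```

The key point is that spawning a task per feed costs almost nothing compared to the network wait, so adding more subscriptions barely changes how long an update takes.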
One of the things I like about the modular nature of many command line tools is that I was able to test my downloader with sfeed's reader. This made debugging much easier, since the reader I would eventually build should be compatible with the same data sfeed's reader consumes. It was also encouraging: finishing just the downloader would already be a quality of life improvement for me.