We frequently get books from the library, and it can be hard to keep track of which books are due when. So I wrote a Python program to scrape the web pages of our local library system (the Minuteman Library Network in eastern Massachusetts). The program is old enough that it scrapes the "classic" interface, which I also find easier to use. The scraper is very primitive, picking things out of the HTML with regular expressions, but it does the job.
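The regex approach can be sketched roughly like this. The HTML fragment and class names below are invented for illustration; the real Minuteman markup differs, but the technique of pulling fields out with a non-greedy pattern is the same:

```python
import re

# Hypothetical HTML resembling a "classic" catalog checkout table.
# The real page's markup differs; this only illustrates the technique.
html = """
<tr class="patFuncEntry">
  <td class="patFuncTitle">The Phantom Tollbooth</td>
  <td class="patFuncStatus">DUE 06-01-24</td>
</tr>
<tr class="patFuncEntry">
  <td class="patFuncTitle">Charlotte's Web</td>
  <td class="patFuncStatus">DUE 06-15-24</td>
</tr>
"""

# Capture (title, due date) pairs in one pass over the page.
pattern = re.compile(
    r'patFuncTitle">(?P<title>[^<]+)</td>\s*'
    r'<td class="patFuncStatus">DUE (?P<due>[\d-]+)</td>'
)

books = [(m.group("title"), m.group("due")) for m in pattern.finditer(html)]
for title, due in books:
    print(f"{title}: due {due}")
```

Fragile against any redesign of the page, of course, which is part of why the program sticks with the stable "classic" interface.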
Given a file listing library card IDs and PINs, the program prints a list of the books checked out, along with a note for any books on hold that have come in. I run it from a cron job on a Linux machine so the results are mailed to me every night. It's available on GitHub at treese/booksdue. As noted in the README, the only Python package dependency is mechanize, for pretending to be a web browser.
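The cron setup is the standard one: cron mails a job's output to the crontab owner (or a `MAILTO` address) whenever the job prints anything. A sketch of such a crontab entry, where the script path and card-file name are hypothetical placeholders:

```
# Check the library accounts at 11:30pm every night; any output
# (the list of books due) is mailed automatically by cron.
MAILTO=you@example.com
30 23 * * * /usr/bin/python /home/you/booksdue/booksdue.py /home/you/.librarycards
```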
This kind of program is a reminder that when data is accessible programmatically, people can use it in ways the application vendor never built, without the vendor having to do all the work.