I want to scrape an entire wiki that uses MediaWiki software. The number of pages is fairly small, but they have plenty of revisions, and I'd preferably like to scrape the revisions as well.
Unlike Wikipedia, the wiki does not offer database dumps. Are there any existing tools or scripts designed to scrape MediaWiki sites?
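
To clarify what I'm after, here is a rough sketch of what I imagine doing by hand against the wiki's `api.php` endpoint: enumerate pages with `list=allpages`, then pull every revision of each page with `prop=revisions`, following the API's continuation parameters. `WIKI_API` is a placeholder for the target wiki's endpoint, and the `formatversion=2`/`rvslots` response shape assumes a reasonably recent MediaWiki (older installs use a different layout):

```python
import requests

WIKI_API = "https://example.org/w/api.php"  # placeholder endpoint
session = requests.Session()

def all_pages():
    """Yield every page title via list=allpages, following continuation."""
    params = {"action": "query", "list": "allpages", "aplimit": "max",
              "format": "json"}
    while True:
        data = session.get(WIKI_API, params=params).json()
        for page in data["query"]["allpages"]:
            yield page["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

def revisions(title):
    """Yield (timestamp, user, wikitext) for every revision of one page."""
    params = {"action": "query", "prop": "revisions", "titles": title,
              "rvprop": "timestamp|user|content", "rvslots": "main",
              "rvlimit": "max", "format": "json", "formatversion": "2"}
    while True:
        data = session.get(WIKI_API, params=params).json()
        for rev in data["query"]["pages"][0].get("revisions", []):
            yield (rev["timestamp"], rev.get("user", ""),
                   rev["slots"]["main"]["content"])
        if "continue" not in data:
            break
        params.update(data["continue"])

for title in all_pages():
    for timestamp, user, text in revisions(title):
        print(title, timestamp, user, len(text))
```

Something along these lines would presumably work, but I'd rather not reinvent continuation handling, rate limiting, and error recovery myself, hence the question about existing tools.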