If you want to include additional recovery data with your backups, you could use Parchive-type solutions. You specify how much redundancy/recovery data you want to generate and how (if at all) to split it. The benefit of this method is that it's agnostic to the actual backup and storage methods you choose: you can use zip, tar, Windows Backup, or anything else that produces files, then feed those files through Parchive tools to generate additional recovery files.
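To make the idea concrete, here is a toy sketch of how recovery data works in principle. Real Parchive tools use Reed-Solomon codes (with configurable redundancy, e.g. something like `par2 create -r10` with the par2cmdline tool), which can repair many missing blocks; this simplified XOR-parity version can recover exactly one lost block, but the principle — extra blocks computed from the data let you rebuild what's missing — is the same. All names here are illustrative, not any real tool's API:

```python
def make_parity(blocks):
    """XOR all data blocks together into a single recovery block."""
    parity = bytes(len(blocks[0]))
    for block in blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return parity

def recover(damaged_blocks, parity):
    """Rebuild the one missing block (marked None) from the survivors plus parity."""
    acc = bytes(len(parity))
    for block in damaged_blocks:
        if block is not None:
            acc = bytes(a ^ b for a, b in zip(acc, block))
    return bytes(a ^ b for a, b in zip(acc, parity))

# Split a pretend backup file into 4 equal blocks and generate one parity block.
data = b"0123456789abcdef0123456789abcdef"
blocks = [data[i:i + 8] for i in range(0, len(data), 8)]
parity = make_parity(blocks)

# Simulate losing block 2 in storage, then recover it.
damaged = blocks[:2] + [None] + blocks[3:]
rebuilt = recover(damaged, parity)
assert rebuilt == blocks[2]
```

The parity block here is the extra "recovery file" you would store alongside the backup; because it's computed from the archive files rather than by the archiver, any backup format works.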
Keep in mind that both Amazon Glacier and S3 report checksums for uploaded data (S3 returns an ETag, which for single-part uploads is the MD5 of the object; Glacier requires a SHA-256 tree hash with each upload), so once you upload a file, you can compare the local and remote checksums to make sure the file got transferred without errors.
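A minimal sketch of the local side of that comparison, computing both digests in one streaming pass (note this is a plain SHA-256, a simplification — Glacier's tree hash is computed over 1 MiB chunks and then combined; the MD5 matches an S3 ETag only for single-part uploads):

```python
import hashlib
import os
import tempfile

def local_checksums(path):
    """Compute MD5 and SHA-256 of a local file, streaming in 1 MiB chunks."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

# Demo: checksum a small temporary file standing in for a backup archive.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    tmp = f.name
md5_hex, sha_hex = local_checksums(tmp)
os.unlink(tmp)
```

You would then compare `md5_hex` against the ETag S3 reports (or the tree hash Glacier reports) after the upload completes.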
Furthermore, this is what Amazon has to say on this topic:
Durable – Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive. The service redundantly stores data in multiple facilities and on multiple devices within each facility. To increase durability, Amazon Glacier synchronously stores your data across multiple facilities before returning SUCCESS on uploading archives. Unlike traditional systems which can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing.
This means that there’s only a 0.00000000001 (1e-11) probability of any one of your files going poof over the course of a single year. Put another way, if you store 100 billion files in Glacier for one year, you can expect, on average, to lose about one of them.
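The arithmetic behind that claim is easy to check, assuming losses are independent across archives:

```python
import math

p = 1e-11              # stated annual loss probability per archive
n = 100_000_000_000    # 100 billion archives

# Expected number of archives lost in a year: n * p = 1.
expected_losses = n * p

# Chance of losing at least one archive: 1 - (1 - p)^n, about 1 - 1/e ≈ 63%.
p_at_least_one = 1 - (1 - p) ** n
```

So "expect to lose one" is an expectation: there's roughly a 63% chance of losing at least one archive at that scale, and a 37% chance of losing none.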
If you want additional assurance, consider uploading your data to multiple Glacier regions, or to an entirely different service provider in another geographic region.