This is a bug, thank you for reporting!
Forever. Actually, I think that in 3-5-10 years all the content of the archive (it is only ~20 TB) could fit in a mobile phone's memory, so anyone will be able to keep a synchronized copy of the full archive. So my “forever” is not a joke.
Two persons, currently.
Not yet.
You are the first person asking for this :)
No.
You can create a collection of your archived pages on http://delicious.com/ or http://pinterest.com/
You need to navigate to the original website to use the page's multimedia.
You should contact the issuing bank to make sure the cards are blocked. The bank can be identified from the prefix of the card number (http://en.wikipedia.org/wiki/List_of_Issuer_Identification_Numbers).
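As an illustration only, here is a minimal sketch of how an issuer can be guessed from the card's IIN/BIN prefix. The prefix table is a tiny hypothetical sample, not the real IIN list from the link above; a real lookup would use a full, up-to-date IIN database.

```python
# Illustrative sketch: guessing the issuing bank/network from the IIN prefix.
# SAMPLE_IIN_TABLE is a hypothetical, heavily simplified sample.
SAMPLE_IIN_TABLE = {
    "4": "Visa",              # Visa cards start with 4
    "34": "American Express",
    "37": "American Express",
    "51": "Mastercard",       # real Mastercard range is 51-55 (simplified here)
}

def guess_issuer(card_number: str) -> str:
    digits = "".join(ch for ch in card_number if ch.isdigit())
    # Try the longest matching prefix first.
    for length in range(len(digits), 0, -1):
        issuer = SAMPLE_IIN_TABLE.get(digits[:length])
        if issuer:
            return issuer
    return "unknown issuer"

print(guess_issuer("4111 1111 1111 1111"))  # -> Visa
```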
You can download a .zip file (there is a link in the header).
It is difficult to use archive.is for pirating due to the limited size of the page it can save. Of course, it is still possible, by UU-encoding a movie or a windows.iso and then splitting it into small parts. But there are plenty of more convenient tools for that, for example torrent trackers or mega.co.nz. Or even The Internet Archive and WebCite, because they can save big binary files.
There is no spider (in the sense of a machine that decides what to archive).
All the urls are entered manually by users (or taken from https://en.wikipedia.org/wiki/Special:RecentChanges, where they also appear as a result of user edits).
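For illustration, a hedged sketch of how one might poll that Wikipedia feed through the MediaWiki API. This is not archive.is's actual ingestion code; extracting the external links added by each edit (and queueing them for archiving) is only indicated in a comment.

```python
# Sketch: poll Wikipedia's recent changes via the public MediaWiki API.
import json
import urllib.request

API = ("https://en.wikipedia.org/w/api.php"
       "?action=query&list=recentchanges"
       "&rcprop=title%7Ctimestamp&rclimit=10&format=json")

with urllib.request.urlopen(API) as resp:
    changes = json.load(resp)["query"]["recentchanges"]

for change in changes:
    # A real pipeline would fetch each changed revision, extract the external
    # links added by the edit, and queue those URLs for archiving.
    print(change["timestamp"], change["title"])
```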
If the archive checked and obeyed robots.txt, then, whenever archiving is disallowed, an error would have to be shown to the user, right?
Then, on seeing the error, the user would archive the page indirectly: first feeding the url to a url shortener (bit.ly, …) or to an anonymizer (hidemyass.com, …) or to another on-demand archive (peeep.us, …), and then archiving the same content from the new url, thus bypassing the robots.txt restrictions.
So this check would not work the same way it works with IA Archiver (which actually is a machine that makes decisions).
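To make the hypothetical concrete: this is roughly what such a robots.txt check would look like if the archive performed one (it does not). The sketch uses Python's standard urllib.robotparser; the user-agent string and example URL are arbitrary assumptions.

```python
# Sketch of a robots.txt check the archive deliberately does not perform.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed_by_robots(url: str, user_agent: str = "archive.is") -> bool:
    parts = urlparse(url)
    rp = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(user_agent, url)

# The bypass described above: re-host or shorten the same content under a
# different hostname, whose robots.txt no longer restricts the original page.
print(allowed_by_robots("http://example.com/some/page"))
```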