Thank you for reporting; the problem was on my side.
Should be fixed now.
To increase reliability and be more confident that a copy survives, you can submit your link to all of the archiving sites, not only to http://archive.is/ (there are also http://peeep.us/, http://webcitation.org/, http://hiyo.jp/, and http://megalodon.jp/).
Something like:
curl --data "url=http://url-to-submit.com/" http://archive.is/submit/
Please note that it may take up to 1 hour to process 7000 URLs (after you submit them and before they become visible on the site).
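If you have a whole list to submit, a simple shell loop over the same endpoint will do; this is only a sketch, and the file name urls.txt (one URL per line) and the 1-second pause are my own assumptions:

# Submit every URL listed in urls.txt to archive.is, one request at a time.
while read -r url; do
  curl --silent --data "url=$url" http://archive.is/submit/ > /dev/null
  sleep 1  # small pause between submissions so the server isn't hammered
done < urls.txt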
They should not. As far as I understand, SE spamming is about spreading links to a site, not about copying its content.
The site supports OExchange (http://www.oexchange.org/), so bookmarking tools can use it.
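In the OExchange model a source simply issues an HTTP GET to the target's offer endpoint with a url parameter; the actual offer path for archive.is is published in its OExchange discovery (XRD) document, so the path below is only a placeholder:

# Hypothetical offer-endpoint path; check the site's OExchange XRD for the real one.
curl "http://archive.is/offer?url=http://example.com/"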
No, I haven't had any discussions with them.
But I've heard some rumors about their plans to use real browsers to execute JavaScript, etc.
Currently all data is available via HTTP.
Isn't that enough?
Yes, the new design (which I hope will be online soon) will have this feature.
In the meantime, you can check whether a page was saved by querying URLs like:
http://archive.is/http://www.google.com/ - all snapshots of the exact url
http://archive.is/www.google.com - all snapshots from the domain
http://archive.is/*.google.com - all snapshots from all subdomains
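For example, to fetch the snapshot listing for an exact URL from a script (example.com here is just a placeholder), a plain curl request is enough; the result is an ordinary HTML page you can inspect or parse:

# Download the listing of snapshots for one exact URL.
curl --silent "http://archive.is/http://example.com/" --output snapshots.html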
Write to webmaster@archive.is
Currently there is no automated way to do it.
And it is not as easy to implement as it might look, because a lot of archived pages are referenced from Wikipedia and other wikis. Deleting a page would have to be synchronized somehow with fixing those references on the wikis; otherwise, it would be ridiculous if a site whose goal is to fight the dead-link problem had dead links itself.