1. In the browser’s menu: File -> Save As -> Complete page.
Anyway, adding something like “download as .zip” could make sense, for example for mobile users who do not have full-featured browsers. I will add it.
2. I think not. It is very tricky to run: it depends on an exact version of Chrome, whose binary must also be patched to relax its security restrictions (to allow saving the content of frames, etc).
Thank you for reporting it; the problem was on my side.
Should be fixed now.
To increase reliability and be more confident, you can submit your link to all the archiving sites, not only to http://archive.is/ (there are also http://peeep.us/ http://webcitation.org/ http://hiyo.jp/ http://megalodon.jp/ ).
something like:
curl --data url=http://url-to-submit.com/ http://archive.is/submit/
Please note that it may take up to 1 hour to process 7000 URLs (after you submit them and before they become visible on the site).
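If you have a whole list to submit, a minimal sketch in shell, assuming one URL per line in a file named urls.txt (the file name and the pause are my own choices, not part of the site’s interface; the endpoint and the url field come from the curl line above):

while read -r url; do
  # --data-urlencode escapes the value before POSTing it to the submit endpoint
  curl --data-urlencode "url=$url" http://archive.is/submit/
  sleep 1   # space out requests a little
done < urls.txt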
They should not. As far as I understand, SE spamming is about spreading links to a site, not about copying its content.
The site supports OExchange (http://www.oexchange.org/), so bookmarking tools can use it.
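In practice an OExchange “offer” is just an HTTP GET to the target’s offer endpoint with the page address in a url parameter; the endpoint path below is hypothetical, for illustration only (the real one is published in the site’s OExchange discovery document):

curl 'http://archive.is/oexchange/offer?url=http%3A%2F%2Fwww.example.com%2F'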
No, I haven’t had any discussion with them.
But I’ve heard some rumors about their plans to use real browsers to execute JavaScript, etc.
Currently all the data is available over HTTP.
Isn’t that enough?
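For example, any saved page can be pulled down with a plain HTTP client; the short snapshot id in this URL is hypothetical:

curl -L -o snapshot.html http://archive.is/Abc12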
Yes, the new design (which I hope will be online soon) will have this feature.
For now, you can check whether a page was saved by querying URLs like
http://archive.is/http://www.google.com/ - all snapshots of the exact URL
http://archive.is/www.google.com - all snapshots from the domain
http://archive.is/*.google.com - all snapshots from all subdomains
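If you want to script the check, one approach is to request the listing URL and look at the HTTP status code; whether the site answers “no snapshots” with a 404 or with an empty listing page is an assumption I have not verified:

curl -s -o /dev/null -w '%{http_code}\n' 'http://archive.is/http://www.example.com/'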