About the same time it takes to load the page in your browser. However, saving pages with heavy scripts or pages full of ads may take up to a few minutes. There is a 5-minute timeout: if a page is not fully loaded within 5 minutes, the save is considered failed. This is rare, but it happens.
A stored page with all its images must be smaller than 50 MB.
The archive runs on Apache Hadoop and Apache Accumulo. All data is stored in HDFS; textual content is replicated 3 times across servers in different datacenters, and images are replicated 2 times. All datacenters are in Europe.
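To give an idea of what such replication settings look like in practice, here is a generic hdfs-site.xml fragment that sets a cluster-wide default replication factor of 3. This is only an illustrative sketch of a standard HDFS option, not the archive's actual configuration; per-file factors (such as 2 for images) can also be set when files are written.

<!-- hdfs-site.xml: cluster-wide default; individual files may override it -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>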
Virtually forever. We have a lot of free space, and although the archive grows over time, storage and bandwidth keep getting cheaper.
Pages that violate the hoster's rules (cracks, porn, etc.) may be deleted. Completely empty pages (or pages that contain nothing but text like “502 Server Timeout”) may also be deleted.
It is privately funded; there is no complex financing behind it. It may look more or less reliable than startup-style funding or a university project, depending on which risks are taken into account. My death could interrupt the service, but things like new market conditions or a change of department head cannot.
I cannot promise that it will not. At the current growth rate I am able to keep the archive free of ads. I can promise it will have no ads at least until the end of 2014.
Each page has a short URL http://archive.is/XXXXX, where XXXXX is the unique identifier of the page. A page can also be referred to with URLs that combine the snapshot date with the original URL (see the sketch below for examples).
The date can be extended further with hours, minutes and seconds. The year, month, day, hours, minutes and seconds can be separated with dots, dashes or colons to increase readability.
It is also possible to refer to all snapshots of a given URL, to all saved pages from a domain, and to all saved pages from all of its subdomains.
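A minimal sketch of how such URLs can be put together, assuming http://example.com/page as the archived URL and an arbitrary illustrative timestamp; the identifier XXXXX of a real snapshot and the exact patterns accepted by the service may of course differ:

from datetime import datetime

BASE = "http://archive.is"
target = "http://example.com/page"   # assumed example, not a real snapshot

# Date-based references: the prefix can be just a year, or go down to seconds.
ts = datetime(2013, 4, 21, 16, 53, 14)                 # illustrative timestamp
by_year   = f"{BASE}/{ts:%Y}/{target}"                 # e.g. http://archive.is/2013/http://example.com/page
by_second = f"{BASE}/{ts:%Y%m%d%H%M%S}/{target}"       # year down to seconds, no separators
readable  = f"{BASE}/{ts:%Y.%m.%d-%H:%M:%S}/{target}"  # dots, dashes or colons between fields

# All snapshots of the exact URL, of the whole domain, and of all subdomains:
all_of_url        = f"{BASE}/{target}"
all_of_domain     = f"{BASE}/example.com"
all_of_subdomains = f"{BASE}/*.example.com"

for url in (by_year, by_second, readable, all_of_url, all_of_domain, all_of_subdomains):
    print(url)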
No. But you can keep bookmarks to archived pages in one of the many bookmark managers, like Delicious, Google Bookmarks, …
Because it is not a free-walking crawler; it saves only one page. Services that request only one page do not obey robots.txt (e.g. screenshot- or PDF-making services, webcitation.org, isup.me, …).
Yes.
Yes.