About the same time it takes to load the page in your browser. However, saving pages with heavy scripts or pages full of ads may take up to a few minutes. There is a 5-minute timeout: if a page is not fully loaded within 5 minutes, the save is considered failed. This does not happen often, but it happens.
The stored page, with all its images, must be smaller than 50 MB.
The archive runs Apache Hadoop and Apache Accumulo. All data is stored on HDFS; textual content is replicated 3 times across servers in different datacenters, and images are replicated 2 times. All datacenters are in Europe.
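As a rough illustration of that replication policy (the paths are hypothetical and not taken from the archive's actual setup), HDFS allows a per-path replication factor to be set; placing replicas in different datacenters would additionally rely on Hadoop's rack-awareness configuration.

```python
import subprocess

# Hypothetical paths, shown only to illustrate per-path replication factors in HDFS:
# textual content kept in 3 copies, images in 2 copies.
subprocess.run(["hdfs", "dfs", "-setrep", "-w", "3", "/archive/text"], check=True)
subprocess.run(["hdfs", "dfs", "-setrep", "-w", "2", "/archive/images"], check=True)
```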
Virtually forever. We have plenty of free space, and although the archive grows over time, storage and bandwidth keep getting cheaper.
Pages that violate our hosting provider's rules (cracks, porn, etc.) may be deleted. Completely empty pages (or pages containing nothing but text such as “502 Server Timeout”) may also be deleted.
It is privately funded; there are no complex finances behind it. Whether that looks more or less reliable than startup-style funding or a university project depends on which risks you take into account.
I cannot promise that it will not. At the current growth rate I am able to keep the archive free of ads. Well, I can promise it will have no ads at least until the end of 2014.
Each page has a short URL, http://archive.is/XXXXX, where XXXXX is the unique identifier of the page. The page can also be referred to with URLs that combine the snapshot date and the original URL.
The date can be extended further with hours, minutes and seconds; the year, month, day, hours, minutes and seconds can be separated with dots, dashes or colons to increase readability.
It is also possible to refer to all snapshots of a given URL, to all saved pages from a domain, or to all saved pages from all of its subdomains. The sketch below illustrates these URL forms.
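A minimal sketch of how these URL forms could be composed. The exact patterns (the date prefix, the bare-domain query and the *.domain wildcard) are assumptions inferred from the descriptions above, not an official specification.

```python
BASE = "http://archive.is"

def snapshot_by_id(snapshot_id):
    # Short URL of a single snapshot, e.g. http://archive.is/XXXXX
    return f"{BASE}/{snapshot_id}"

def snapshot_by_date(url, date):
    # Assumed date-prefixed form; the date may be just a year ("2013") or be
    # extended with month, day, hours, minutes and seconds, separated by dots,
    # dashes or colons, e.g. "2013.08.15" or "2013-08-15-12:30:00".
    return f"{BASE}/{date}/{url}"

def all_snapshots(url):
    # Assumed form for all snapshots of a given URL.
    return f"{BASE}/{url}"

def all_from_domain(domain, include_subdomains=False):
    # Assumed forms for all saved pages from a domain / all of its subdomains.
    return f"{BASE}/*.{domain}" if include_subdomains else f"{BASE}/{domain}"

print(snapshot_by_date("http://example.com/", "2013.08.15"))
```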
Yes.
http://archive.is/newest/http://reddit.com/ (there is also http://archive.is/oldest/http://reddit.com/)
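As a quick illustration (not part of the original answer), the /newest/ form can be fetched with any HTTP client; the assumption here is that it resolves, possibly via a redirect, to the most recent snapshot.

```python
import requests

# Assumption: /newest/<url> resolves (possibly via redirect) to the latest snapshot.
resp = requests.get("http://archive.is/newest/http://reddit.com/",
                    allow_redirects=True, timeout=30)
print(resp.status_code)  # 200 if a snapshot exists
print(resp.url)          # final URL of the newest snapshot
```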
There are two options:
add a hash fragment with the scroll position as a number between 0 (top of the page) and 100 (bottom), for example http://archive.is/FWVL#40%
select some text on the page and get a URL with a hash fragment referring to the selection, for example http://archive.is/FWVL#selection-1493.0-1493.53 (see the sketch below)
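A tiny, purely illustrative sketch of composing such fragment URLs; the fragment formats are copied from the examples above.

```python
def scroll_link(snapshot_url, percent):
    # Scroll position between 0 (top) and 100 (bottom), e.g. .../FWVL#40%
    return f"{snapshot_url}#{percent}%"

def selection_link(snapshot_url, selection_id):
    # Selection fragment as produced by the archive's UI, e.g. #selection-1493.0-1493.53
    return f"{snapshot_url}#selection-{selection_id}"

print(scroll_link("http://archive.is/FWVL", 40))
print(selection_link("http://archive.is/FWVL", "1493.0-1493.53"))
```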
archive.is supports the Memento API; more information about the Memento protocol can be found at http://mementoweb.org/
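A minimal sketch of a Memento-style lookup, assuming the service exposes a standard TimeGate under /timegate/ (the endpoint path is an assumption, not confirmed by the answer above).

```python
import requests

# Per the Memento protocol (RFC 7089), Accept-Datetime asks the TimeGate for the
# snapshot closest to the given date; the response carries Link headers pointing
# to the original resource, other mementos and the TimeMap.
resp = requests.get(
    "http://archive.is/timegate/http://reddit.com/",  # assumed TimeGate path
    headers={"Accept-Datetime": "Thu, 15 Aug 2013 12:00:00 GMT"},
    allow_redirects=True,
    timeout=30,
)
print(resp.url)                      # URL of the selected memento
print(resp.headers.get("Link", ""))  # rel="original", rel="timemap", ...
```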
No. But you can keep bookmarks to archived pages in one of the existing bookmark managers, like Delicious, Google Bookmarks, …
Because it is not a free-roaming crawler: it saves only a single page, acting as a direct agent of a human user. Such services do not obey robots.txt (e.g. Google Feedfetcher, screenshot- or PDF-making services, isup.me, …).
Yes.
Yes.
Yes.
But keep in mind that when you archive a page, your IP address is sent to the website you archive, as though you were using a proxy (in the X-Forwarded-For header). This allows websites (e.g. shops or weather-forecast sites) to target your region, not mine.
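For illustration only, this is how a target site could read that forwarded address; the framework, route and variable names are hypothetical and not part of the archive itself.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # When the archiver fetches this page, the visitor's address appears in
    # X-Forwarded-For, while request.remote_addr is the archiver's own server.
    forwarded_for = request.headers.get("X-Forwarded-For", "")
    client_ip = forwarded_for.split(",")[0].strip() or request.remote_addr
    return f"Serving region-specific content for {client_ip}"
```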
The archive is neither a news agency nor an authoritative source of reference information. It merely certifies that, at a given point in time, a page existed on the web. The page might well contain a fairy tale, and although “One day Little Red Riding Hood goes to visit her granny” is a false statement, that is no reason to burn the books. Note that weather forecasts on archived pages are outdated as well.
More questions and answers: http://blog.archive.is/archive