Archive.is blog

Blog of http://archive.is/ project
  • Can the archived pages be downloaded for local use on our computers? Will you be releasing the software that you use for archival?
    Anonymous

    1. In the browser’s menu: File -> Save As -> Complete page.

    Anyway, adding something like “download as .zip” could make sense, for example for mobile users who do not have full-featured browsers. I will add it.

    2. I think not. It is very tricky to run: it depends on an exact version of Chrome, whose binary must also be patched to relax security (to allow saving the content of frames, etc.).

    • 1 day ago
  • I tried to archive some pages yesterday and today (2013-03-16), but I always got: "Error: Network error." It seems to be the same with different target sites. I also tried with addresses targeting my own server here and had the same effect, while I and others could reach my server that way. Looking at my logs, I did not even find an attempt to connect to my server from this site (archive.is). What is wrong?
    Anonymous

    Thank you for reporting; the problem was on my side.

    It should be fixed now.

    • 1 week ago
  • Hey! It's a great resource. Is there any possibility that my bookmark will be removed? It's not porn or anything, but I'm not sure about the copyrights :)
    Anonymous

    To increase reliability and be more confident, you can submit your link to all the archiving sites, not only to http://archive.is/ (there are also http://peeep.us/, http://webcitation.org/, http://hiyo.jp/, and http://megalodon.jp/).

    • 1 week ago
  • Can you recommend the best method/script so I may batch archive about 7000 urls?
    Anonymous

    Something like:

    curl --data url=http://url-to-submit.com/ http://archive.is/submit/

    Please note that it may take up to 1 hour to process 7000 URLs (after you submit them and before they become visible on the site).
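    For 7000 URLs, the one-off command above can be wrapped in a simple loop. A minimal sketch, assuming the URLs sit one per line in a file named urls.txt; the filename, the sample input, and the dry-run `echo` are my assumptions, not part of the original answer:

    ```shell
    # Sample input: one URL per line (replace with your real list).
    printf 'http://example.com/a\nhttp://example.com/b\n' > urls.txt

    # Dry run: print the curl command that would be run for each URL.
    # Remove the leading "echo" to actually submit, and consider adding a
    # short "sleep" between submissions to be gentle with the service.
    while IFS= read -r url; do
      echo curl --data "url=$url" http://archive.is/submit/
    done < urls.txt
    ```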

    • 1 week ago
  • Does archive have plans for an API? Just curious. =)
    newhopegriffin

    What kind of API do you need? 

    • 2 weeks ago
  • If I archive my web pages on your site, I assume the search engines will not treat me as a spammer because my material appears more than once on the internet.
    Anonymous

    They should not. As far as I understand, search-engine spamming is about spreading links to a site, not about copying the content.

    • 2 weeks ago
  • Any plans to provide an API or open source the code? This would be a really helpful addition to many of the bookmarking tools.
    jasonfredin

    The site supports OExchange (http://www.oexchange.org/), so the bookmarking tools can use it.

    • 1 month ago
  • You refer to the wonderful Wayback Machine. Have you had any discussions with the Internet Archive? Would you be interested in integrating your cool service with theirs somehow?
    Anonymous

    No, I haven’t had any discussions with them.

    But I’ve heard some rumors about their plans to use real browsers to execute javascript, etc.

    • 1 month ago
  • You said you like the idea of sharing the data your site has collected. Seed it via torrent ;)
    Anonymous

    Currently all the data is available via HTTP.

    Isn’t that enough?

    • 1 month ago
  • The current homepage shows a box from which you can submit a link to be archived (which works really well, by the way). However, I think it would also be a nice idea to allow looking for previous snapshots of a web page by entering the address at the home page. (For example an extra button that says "find saved copies".)
    Anonymous

    Yes, the new design (which I hope will be online soon) will have this feature.

    For now, you can check whether a page was saved by querying URLs like:
    http://archive.is/http://www.google.com/ - all snapshots of the exact URL
    http://archive.is/www.google.com - all snapshots from the domain
    http://archive.is/*.google.com - all snapshots from all subdomains
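    The query patterns above can also be built in a script. A minimal sketch; the helper name is my own invention, not an archive.is API:

    ```shell
    # Build the "all snapshots of this exact URL" query by prefixing the
    # page URL with http://archive.is/ (the first pattern listed above).
    snapshot_query() {
      printf 'http://archive.is/%s\n' "$1"
    }

    snapshot_query "http://www.google.com/"
    # One could then fetch the result page, e.g.:
    #   curl -s "$(snapshot_query http://www.google.com/)"
    ```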

    • 1 month ago
© 2012–2013 Archive.is blog