I only use an iFrame to crawl and scrape content
(Draft) - Mon 23rd December, 2019 | By @betoayesa

Injecting an iframe into any production page gives you full capabilities to automate navigation through it, avoiding cross-domain blocking issues, for example. The browser's developer tools give you a big part of what you need to complete a small crawling and scraping project. Add a library like jQuery, which lets you access and manipulate the DOM, and an iframe, which supports pagination, and you have the basics.
Some examples using the browser's developer tools
Using the dev console, you can have a scraper in a few lines of code. You can inject jQuery on any page. Of course, with this code you will run into malformed URLs, browser errors that block everything else, and speed problems, and you will probably get banned after a few minutes depending on the target site... but it's just to show the core of the idea.

Common lines of code
Work with jQuery or whatever you want; the point is to access the DOM and manipulate it.

Injecting jQuery:
```javascript
var urls = [];
var result = [];

// Inject jQuery if the page doesn't already have it
// (note: the script loads asynchronously, see the sketch below)
if (typeof jQuery === "undefined") {
  var s = document.createElement("script");
  s.type = "text/javascript";
  s.src = "https://code.jquery.com/jquery-3.4.1.slim.min.js";
  document.body.appendChild(s);
}

// Iframe that will be reused to navigate / paginate the target site
var $iframe = $('<iframe id="iframe" name="iframe" src="">').appendTo('body');
```
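Note that the injected script loads asynchronously, so $ isn't available right after appendChild. A minimal sketch of one way to wait for it; the onJQueryReady helper name is just an illustration, not part of any library:

```javascript
// Hypothetical helper: run a callback once jQuery is actually available
function onJQueryReady(callback) {
  if (typeof jQuery !== "undefined") { callback(jQuery); return; }
  var s = document.createElement("script");
  s.src = "https://code.jquery.com/jquery-3.4.1.slim.min.js";
  s.onload = function () { callback(jQuery); };
  document.body.appendChild(s);
}

onJQueryReady(function ($) {
  var $iframe = $('<iframe id="iframe" name="iframe" src="">').appendTo('body');
  // ... scraping code goes here
});
```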
Saving, outputting results:

```javascript
var result = []; // result will store all the scraped data
JSON.stringify(result); // evaluated in the developer console, this shows the data as a JSON string
```
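If copying the JSON string out of the console gets tedious, one option (a rough sketch, assuming you run it in the page context; Chrome's console also has a copy() helper) is to trigger a file download directly:

```javascript
// Sketch: download the scraped results as a JSON file
function downloadJSON(data, filename) {
  var blob = new Blob([JSON.stringify(data, null, 2)], { type: "application/json" });
  var a = document.createElement("a");
  a.href = URL.createObjectURL(blob);
  a.download = filename || "results.json";
  document.body.appendChild(a);
  a.click();
  a.remove();
}

downloadJSON(result, "scraped.json");
```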
SEO audit (without an iframe)
Simple code to get all the URLs from the current page, then fetch each one and extract its title, h1 and h2 text (you could collect image URLs the same way). For example, a quick SEO audit of any site. Thankfully this is only zero-level-deep scraping, and many of the URLs will just be '#', so it will finish soon.
```javascript
var urls = [];
var result = [];

// Collect every link on the current page
$('body a').each(function () {
  urls.push($(this).attr("href"));
});

// Fetch each URL and scrape its title, h1 and first h2
urls.forEach(function (url) {
  $.get(url, function (body) {
    var scraped = {
      title: $(body).find('title').text(),
      h1: $(body).find('h1').text(),
      h2: $(body).find('h2').first().text()
    };
    result.push(scraped);
  });
});
```
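Since many of the collected hrefs will be '#', anchors or otherwise unusable, in practice you would probably filter and normalize them before the $.get() loop. A rough sketch of one way to do it; the exact rules are assumptions you would adapt to the target site:

```javascript
// Sketch: keep only fetchable, same-origin, deduplicated URLs
var cleanUrls = urls
  .filter(function (href) {
    return href && href !== "#" &&
      href.indexOf("javascript:") !== 0 && href.indexOf("mailto:") !== 0;
  })
  .map(function (href) {
    return new URL(href, window.location.href).href; // resolve relative URLs
  })
  .filter(function (href, i, arr) {
    // drop duplicates and external domains to avoid cross-origin errors
    return arr.indexOf(href) === i && href.indexOf(window.location.origin) === 0;
  });
```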
Scraping all of Craigslist's NYC housing listings (with an iframe)
With an iframe we can use pagination: you can keep state in the parent page while navigating through all the site's pages. Target URL: https://newyork.craigslist.org/d/apts-housing-for-rent/search/apa

```javascript
var data = []; // data will collect the scraped rows
var $iframe = $('<iframe id="iframe" name="iframe" src="">').appendTo('body');

$iframe.on('load', function () {
  // Scrape every listing row of the page currently loaded in the iframe
  $iframe.contents().find('.result-row').each(function () {
    data.push({
      title: $(this).find('.result-title').text(),
      img: $(this).find('img').attr("src"),
      price: $(this).find('.result-price:first').text()
    });
  });
  // Then move the iframe to the next page of results
  setTimeout(function () {
    $iframe.prop("src", $iframe.contents().find('body').find('.next.button').attr("href"));
  }, 500);
});

// Everything starts running when you set the iframe's first target URL
$iframe.prop("src", "https://newyork.craigslist.org/d/apts-housing-for-rent/search/apa");

console.log(JSON.stringify(data)); // gives you a JSON string you can export
```

Attention: this code won't stop on its own; it will keep going through all the paginated results.
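If you want it to stop by itself, you could end the run when the next-page link disappears or after a fixed number of pages. A hedged sketch of that stop condition (the maxPages limit is an arbitrary value, not something Craigslist imposes):

```javascript
// Sketch: same load handler as above, but with a stop condition
var maxPages = 50; // arbitrary safety limit, pick your own
var pageCount = 0;

$iframe.on('load', function () {
  pageCount++;
  // ... scrape .result-row into data exactly as above ...
  var next = $iframe.contents().find('.next.button').attr("href");
  if (!next || pageCount >= maxPages) {
    console.log(JSON.stringify(data)); // dump everything once pagination ends
    return;
  }
  setTimeout(function () { $iframe.prop("src", next); }, 500);
});
```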
Scraping your own tweets on Twitter
With Twitter you don't need an iframe; you just need to keep scrolling down to load more tweets. Apart from that, it's simply not possible to inject jQuery here; you would need the HTTP tunneling component to modify the response headers.

```javascript
var data = [];

function extract() {
  var els = document.querySelectorAll('article');
  els.forEach(function (el) {
    data.push(el.innerText); // store the text of each tweet
  });
  // Scroll down so Twitter loads more tweets
  window.scrollTo(0, document.body.scrollHeight);
}

setInterval(extract, 2000);
```
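This interval never stops by itself either. A small sketch of one way to end it, replacing the plain setInterval call above: clear the timer once scrolling stops producing new content (on a slow connection it may stop early; it's only a sketch):

```javascript
// Sketch: stop once the page height stops growing (no new tweets were loaded)
var lastHeight = 0;
var timer = setInterval(function () {
  extract();
  var height = document.body.scrollHeight;
  if (height === lastHeight) {
    clearInterval(timer);
    console.log(JSON.stringify(data)); // final dump of everything collected
  }
  lastHeight = height;
}, 2000);
```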
Want more examples? Ask me: @betoayesa or betolopezayesa@gmail.com
⚠️ Mind the gap
- Big sites take security seriously, so scraping Twitter or Google is not the same as scraping a WordPress site. Injecting jQuery into Twitter, for example, isn't easy.
- Scraping data once is not the same as setting up a process for regular extraction.
- Every scraping scenario is different.
- And since those "scenarios" are websites, they get updated over time.
- It's never a good idea to build a business that depends on external sources like APIs or scraping.
- There are three big parts to any scraping project: (a) research the target URLs, (b) craft the crawl & scrape scripts and run them successfully the first time, and (c) plan how to maintain the scraped data.
Limitations
- It will fail because of all the edge cases you need to handle (malformed URLs, runtime errors, ...)
- You need a browser, and you have to keep it open
- The iframe is slow
- You cannot bypass the site's protections without using an HTTP tunneling component
- You cannot rotate proxies
- You cannot manipulate HTTP headers
Benefits
It gives me the total flexibility I find missing in other solutions. You can execute UI actions before or after scraping the content, while having access to the exact same version a visitor sees: the window object, the console, all the HTML...

- All requests are legitimate under any OAuth flow or protocol, so there are no issues with sites that require login
- You can control everything in a fully rendered version of the target page: UI actions, clicks... everything inside it, and most importantly, what to do next. You can fill in forms automatically too.
- You can test out scraping scripts fast
- I'm testing the same concept inside an Electron application with a native webview component, and the speed improvement is impressive (see the sketch below).
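Just to illustrate that last point, a minimal sketch of the same idea in an Electron main process; the URL and selector are placeholders, and this is not Airovic's actual code:

```javascript
// Sketch: load a page in a hidden window and run the scraping code inside it
const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow({ show: false }); // no visible UI needed
  win.loadURL('https://example.com');             // placeholder target URL
  win.webContents.on('did-finish-load', async () => {
    // Same DOM-scraping idea as in the browser console examples
    const titles = await win.webContents.executeJavaScript(
      "Array.from(document.querySelectorAll('h1')).map(el => el.innerText)"
    );
    console.log(JSON.stringify(titles));
    app.quit();
  });
});
```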
I made an app from this: Airovic.com
Airovic is:

- An iframe that handles error edge cases and has element-inspector capabilities
- A crawler = an iframe URL queue
- A handleDocument() function that encapsulates the user's scraping function (see the hypothetical sketch after this list)
- An HTTP tunneling component that gives you full control over the response and its headers
- Rotating proxies
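To make the handleDocument() idea concrete, here is a purely hypothetical sketch of what a user-supplied function could look like; the signature and field names are my assumptions, not Airovic's actual API:

```javascript
// Hypothetical sketch only: the ($doc, page) signature is an assumption,
// not Airovic's real interface.
function handleDocument($doc, page) {
  var rows = [];
  $doc.find('.result-row').each(function () {
    rows.push({
      url: page.url,                               // page being processed
      title: $(this).find('.result-title').text(), // text
      img: $(this).find('img').attr('src')         // src attribute
    });
  });
  return rows; // rows would end up in the "Results" tab
}
```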
Help me test the beta, please! I would love to read your thoughts: www.airovic.com, @betoayesa or betolopezayesa@gmail.com
Scraping process with Airovic: how do you use it?
- First, start by visiting the domain or target URL. The URL will be shown inside the iframe "previewer".
- Navigate through the site until you arrive at a page that has the data you need to extract.
- Right-click on each element that you want to scrape. One by one, write down, or use the code editor to store, the HTML selectors and the type of attribute you want to scrape (text? html? the src attribute? the href attribute?...).
- In fact you can just click on the selector, and a new line of code will be added to the code editor.
- Review the code that will process each page; when you are done, click "Test on current page".
- If it says successful, you can proceed; if not, you will need to fix your code.
- If everything was fine, you can click "Start Robot" to open its menu.
- You can add a list of URLs on the right, or just enable the URL discovery option.
- Click the button to start crawling & scraping.
- Results will be added to the "Results" tab, and you will be able to download everything as JSON.
Yep, as I re-read this, I know I need to make it easier.
Please try Airovic, and let me know your thoughts! betolopezayesa@gmail.com @betoayesa
Roadmap
- First thing: validate interest & fix bugs
- More work on iFrame inspector component
- More work on Code editor component
- User-friendly way of adding selectors, attributes and actions, instead of code
- Full scraping recipes for popular platforms
- Electron standalone application
- Release Airovic under an open source license
- What happens if I use multiple iframes?
Author
betolopezayesa@gmail.com

Hi! I'm Beto, ask me anything on Twitter, @betoayesa!
As a developer I've worked on several crawling & scraping projects. The big ones were e-comprice (a price monitoring service) and sporteeze (a service to monitor the App Store & Google Play Store), but I've done a lot of small and medium scraping too. I've used a lot of different languages to scrape content, but mostly Python, PHP and JavaScript with CasperJS. With this crawling & scraping tool, airovic.com, I'm trying to work out another project, natzar.co, and I've been using it recently to visit websites automatically, checking for errors in the console, so I can send an email from phpninja.info.
Thanks for reading this far! Someone should create badges or something so that those of us who read to the end get some recognition :)