I only use an iFrame to crawl and scrape content

(Draft) - Mon 23rd December, 2019 | By @betoayesa

Injecting an iframe into any production page gives you everything you need to automate navigation through it, sidestepping cross-domain blocking issues for example (as long as the framed content ends up on the same domain). The browser's developer tools give you a big part of what you need to complete a small crawling and scraping project.

Add a library like jQuery, which lets you access and manipulate the DOM, plus an iframe to handle pagination for example, and you are set.

🔬 Some examples using the browser's developer tools

Using the dev console, you can have a scraper in a few lines of code, and you can inject jQuery on any page. Of course, with this code you will run into malformed URLs, browser errors that block everything else, and speed issues, and you will probably get banned after a few minutes depending on the target site... but it's just to show the core of the idea.

Common lines of code

Work with jQuery or whatever you prefer; the point is to get access to the DOM and manipulate it.

Injecting jQuery:
var $ = window.jQuery || window.$;

if (typeof $ == "undefined"){
	var s = document.createElement("script");
	s.type = "text/javascript";
	// use the full build rather than slim: the examples below rely on $.get
	s.src = "https://code.jquery.com/jquery-3.4.1.min.js";
	document.body.appendChild(s); // the script loads asynchronously, see the sketch below
}

var $iframe = $('<iframe id="iframe"  name="iframe" src="">').appendTo('body');
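One gotcha: the injected script loads asynchronously, so $ may still be undefined on the very next line. A minimal sketch that waits for jQuery before creating the iframe (the whenJQueryReady helper and its 100 ms polling interval are my own illustrative additions, not part of the original snippet):

// poll until window.jQuery exists, then run the callback with it
function whenJQueryReady(callback){
	if (window.jQuery) { callback(window.jQuery); return; }
	setTimeout(function(){ whenJQueryReady(callback); }, 100);
}

whenJQueryReady(function($){
	var $iframe = $('<iframe id="iframe" name="iframe" src="">').appendTo('body');
	// ...the rest of the scraping code goes here
});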
Saving, outputting results:
var result = []; // I will use result to store all scraped data
JSON.stringify(result); // evaluated in the console, this echoes a JSON string you can copy out
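Copying a huge JSON string out of the console gets tedious; here is a minimal sketch to download the array as a .json file straight from the page (the downloadJSON helper and the filename are illustrative, not part of the original):

// build a Blob from the scraped data and trigger a file download
function downloadJSON(data, filename){
	var blob = new Blob([JSON.stringify(data, null, 2)], { type: "application/json" });
	var a = document.createElement("a");
	a.href = URL.createObjectURL(blob);
	a.download = filename || "scraped.json";
	document.body.appendChild(a);
	a.click();
	a.remove();
}

downloadJSON(result, "result.json");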

SEO audit (without an iframe)

Simple code to get all the URLs from the current page, then fetch each one and pull out its title, h1 and h2 text (you could extend it to download the images too). For example, a quick SEO audit of any site:
Thank god this is only zero-levels-deep scraping, and a lot of the hrefs will just be '#', so it finishes soon.
var urls = [];
var result = [];

$('body a').each(function(){ urls.push($(this).attr("href")); });

urls.forEach(function(url){
	$.get(url, function(body){
		var scraped = {
			title: $(body).find('title').text(),
			h1: $(body).find('h1').text(),
			h2: $(body).find('h2').first().text()
		};
		result.push(scraped);
	});
});
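Since $.get is asynchronous, the forEach loop finishes long before the requests do. A small variant of the loop above (the pending counter is my illustrative addition) that prints the result once every request has come back:

var pending = urls.length; // outstanding requests

urls.forEach(function(url){
	$.get(url, function(body){
		result.push({
			title: $(body).find('title').text(),
			h1: $(body).find('h1').text(),
			h2: $(body).find('h2').first().text()
		});
	}).always(function(){
		pending--;
		if (pending === 0) console.log(JSON.stringify(result)); // every request finished or failed
	});
});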


🚀 Scraping all of Craigslist's NYC housing listings (with an iframe)

With an iframe you can follow pagination: the parent page keeps its state while you navigate through all of the site's pages. Target URL: https://newyork.craigslist.org/d/apts-housing-for-rent/search/apa
var data = [];
var $iframe = $('<iframe id="iframe" name="iframe" src="">').appendTo('body');

// note: you can only read the iframe's contents while it stays on the same origin
$iframe.on('load', function(){
	// scrape every listing row of the page currently loaded in the iframe
	$iframe.contents().find('.result-row').each(function(){
		data.push({
			title: $(this).find('.result-title').text(),
			img: $(this).find('img').attr("src"),
			price: $(this).find('.result-price:first').text()
		});
	});

	// then point the iframe at the next page of results
	setTimeout(function(){
		$iframe.prop("src", $iframe.contents().find('body').find('.next.button').attr("href"));
	}, 500);
});

// Everything starts running when you set the iframe's first target URL
$iframe.prop("src", "https://newyork.craigslist.org/d/apts-housing-for-rent/search/apa");

// the data array keeps collecting scraped rows as pages load
console.log(JSON.stringify(data)); // gives you a JSON string you can export
Careful: this code won't stop on its own, it will keep following the pagination through all the results (a sketch of a stop condition follows below).
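If you want it to finish by itself, a minimal tweak to the setTimeout inside the load handler (assuming, as the original code does, that the last page simply has no .next.button link):

setTimeout(function(){
	var next = $iframe.contents().find('.next.button').attr("href");
	if (next) {
		$iframe.prop("src", next); // keep paginating
	} else {
		console.log(JSON.stringify(data)); // last page reached: dump everything collected
	}
}, 500);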

Scraping your own tweets on Twitter

With Twitter you don't need an iframe, you just need to keep scrolling down to load more tweets. Apart from that, it's simply not possible to inject jQuery here (the page blocks external scripts), so you would need the HTTP tunneling component to modify the response headers.
var data = [];

function extract(){
	var els = document.querySelectorAll('article');
	els.forEach(function(el){
		data.push(el.innerText); // storing the visible text of each tweet
	});

	window.scrollTo(0, document.body.scrollHeight); // scroll down to trigger loading more tweets
}

setInterval(extract,2000);
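Because the same articles are re-read on every pass, data fills up with duplicates. A minimal variant using a Set (the seen set, keyed on the tweet text, is my illustrative addition) that keeps each tweet only once:

var data = [];
var seen = new Set(); // tweets already stored, keyed on their text

function extract(){
	document.querySelectorAll('article').forEach(function(el){
		var text = el.innerText;
		if (!seen.has(text)) { // skip tweets collected on a previous pass
			seen.add(text);
			data.push(text);
		}
	});
	window.scrollTo(0, document.body.scrollHeight); // keep scrolling to load more tweets
}

setInterval(extract, 2000);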


🌟 Want more examples? Please ask me: @betoayesa or betolopezayesa@gmail.com


โš ๏ธ Mind the gap

👎 Limitations

The same-origin policy stops you from reading a cross-domain iframe unless you tunnel the content first; you will hit malformed URLs and browser errors that can block everything else; it is slow; and depending on the target site you can get banned after a few minutes.

👌 Benefits

It gives me the total flexibility I find missing in other solutions. You can execute UI actions before or after scraping the content, while having access to exactly what a visitor sees: the window object, the console, all the HTML... Add HTTP tunneling that also uses rotating proxies and you are unstoppable: since you can manipulate headers and show the content inside an iframe (but on the same domain!), you have the content perfectly ready to be scraped and automated.
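The tunneling component itself isn't shown in this post, but the idea is simple: fetch the target server-side and re-serve it from your own domain with the blocking headers removed. A rough sketch assuming Node.js and a single target host (the TARGET constant and port 8080 are placeholders, and a real setup would also rewrite links and plug in rotating proxies):

// minimal same-origin tunnel: proxy the target and strip the headers that block framing/injection
var http = require('http');
var https = require('https');

var TARGET = 'https://newyork.craigslist.org'; // placeholder target host

http.createServer(function(req, res){
	https.get(TARGET + req.url, function(upstream){
		var headers = Object.assign({}, upstream.headers);
		delete headers['x-frame-options'];         // allow the page to be shown in our iframe
		delete headers['content-security-policy']; // allow injecting jQuery into it
		res.writeHead(upstream.statusCode, headers);
		upstream.pipe(res);
	}).on('error', function(){
		res.writeHead(502);
		res.end('upstream error');
	});
}).listen(8080); // now iframe http://localhost:8080/... instead of the target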

🚴 I made an app from this, Airovic.com

Airovic was born from the last crawling & scraping project I had: the easiest way to get the data was to extract a JavaScript variable that was loaded via AJAX after the page rendered in the browser. Since I had access to that variable from the developer console, I started using an iframe to automate the process, because it was the most straightforward solution compared with the alternatives. Think about how to store thousands of JS objects so you can read them later; CasperJS was not helpful for that.

🌟 Help me test the beta please! I would love to read your thoughts. www.airovic.com, @betoayesa or betolopezayesa@gmail.com

Scraping process with Airovic: how do you use it?

  1. First, start by visiting the domain or target URL. The URL will be shown inside the iframe "previewer".
  2. Navigate through the site until you arrive at a page that has the data you need to extract.
  3. Right-click each element you want to scrape. One by one, write down, or use the code editor to store, the HTML selectors and the type of attribute you want to scrape (text? html? src attribute? href attribute?...).
  4. In fact, you can just click the selector and a new line of code will be added to the code editor.
  5. Review the code that will process each page; when you are done, click "Test on current page".
  6. If it says successful, you can proceed; if not, you will need to fix your code.
  7. If everything was fine, click "Start Robot" to open its menu.
  8. You can add a list of URLs on the right, or just enable the URL discovery option.
  9. Click the button to start crawling & scraping.
  10. Results will be added to the "Results" tab, and you will be able to download everything as JSON.

Yep, as I re-read this, I know I need to make it easier.

Please try Airovic and let me know your thoughts! 🤜🤛 betolopezayesa@gmail.com @betoayesa


Roadmap


Author

betolopezayesa@gmail.com

Hi! I'm Beto, ask me anything on twitter, @betoayesa!

As a developer I have worked on several crawling & scraping projects. The big ones were e-comprice (a price monitoring service) and sporteeze (a service to monitor the App Store & Google Play Store), but I did a lot of small and medium scraping too. I have used a lot of different languages to scrape content, mostly Python, PHP and JavaScript with CasperJS. With this crawling & scraping tool, airovic.com, I'm trying to work out another project, natzar.co, and have been using it recently to visit websites automatically, checking for errors in the console, so I can send an email from phpninja.info.

Thanks for reading this far! Someone should create badges or something, so those of us who read to the end get some recognition :)