Data and information on the web are growing day by day. Nowadays, most people use Google as the first source of knowledge, whether they are searching for reviews about a business or trying to understand a new term.
The sheer amount of data available on the web opens up a lot of opportunities for data scientists. Unfortunately, most of that data is not readily available for analysis: it is embedded in HTML pages rather than offered as downloadable, structured files. Extracting it therefore requires the knowledge and expertise of a data scientist.
Web scraping is the process of converting data embedded in HTML pages into a structured format that can be easily accessed and used. Almost any programming language can be used for web scraping; in this article, we will use the R language.
There are several ways in which data can be scraped from the web. Some of the most popular ones include:
1. Human Copy-Paste
This is a slow but reliable technique for small amounts of data. A person inspects the page, identifies the relevant data, and manually copies it to local storage.
2. Text Pattern Matching
This is another simple but powerful approach to extracting information from a web page. It uses the regular-expression matching facilities of a programming language to pull out text that follows a known pattern.
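As a minimal sketch of this idea in base R, the snippet below extracts dollar amounts from a raw HTML string using a regular expression. The HTML fragment is invented purely for illustration:

```r
# Text pattern matching with base R regular expressions.
# The HTML snippet below is an invented example.
html <- '<li>Widget A - $19.99</li><li>Widget B - $5.49</li>'

# Find every dollar amount in the text
prices <- regmatches(html, gregexpr("\\$[0-9]+\\.[0-9]{2}", html))[[1]]
prices  # "$19.99" "$5.49"

# Strip the "$" and convert to numeric for analysis
as.numeric(sub("\\$", "", prices))  # 19.99 5.49
```

Pattern matching works well when the target text has a rigid format (prices, dates, IDs), but it is brittle against changes in page layout; the DOM-based approaches below are usually more robust.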
3. API Interface
Many websites, such as Twitter, Facebook, and LinkedIn, provide public or private APIs that can be called with standard HTTP requests to retrieve data in a prescribed format, typically JSON or XML.
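A sketch of calling a JSON API from R using the httr and jsonlite packages is shown below. The endpoint URL, query parameters, and token are hypothetical placeholders, not a real service:

```r
# Calling a hypothetical JSON API with httr + jsonlite.
library(httr)
library(jsonlite)

resp <- GET("https://api.example.com/v1/posts",
            query = list(user = "jdoe", limit = 10),
            add_headers(Authorization = "Bearer <your-token>"))

stop_for_status(resp)  # fail loudly on any HTTP error

# Parse the JSON body into an R data frame / list
posts <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
head(posts)
```

When a site offers an API, prefer it over scraping: the response format is documented and stable, and you avoid parsing HTML altogether.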
4. DOM Parsing
Some programs, such as headless browsers, can retrieve dynamic content generated by client-side scripts. These tools parse a page into a DOM (Document Object Model) tree, which you can then query to retrieve specific parts of the page.
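In R, the rvest package implements this DOM-based approach for static pages: it parses the HTML into a tree and lets you query it with CSS selectors. In the sketch below, the URL and selectors are placeholders; a real page needs its own selectors:

```r
# DOM parsing with rvest: parse a page into a tree, then query it.
# URL and CSS selectors are illustrative placeholders.
library(rvest)

page <- read_html("https://example.com/articles")

# Select nodes from the DOM tree by CSS selector
titles <- page %>% html_elements("h2.title") %>% html_text2()
links  <- page %>% html_elements("h2.title a") %>% html_attr("href")

head(data.frame(title = titles, link = links))
```

For pages whose content is injected by JavaScript after load, `read_html()` alone is not enough; a headless browser (e.g. via the chromote or RSelenium packages) must render the page first.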
Before embarking on web scraping in R, you need a basic knowledge of R. If you are a beginner, there are many great resources that can help. Some familiarity with HTML and CSS is also useful. However, since most data scientists are not well versed in HTML and CSS, you can use an open-source tool such as SelectorGadget, which lets you find the CSS selector for any element simply by clicking on it in the browser.
For instance, suppose you are scraping the IMDB website for the 100 most popular films released in a given period. You would need to extract the following fields from the site: description, runtime, genre, rating, votes, gross earnings, director, and cast. Once you have scraped the data, you can analyze it in different ways; for instance, you can create a number of interesting visualizations. Now that you have a general idea of what web scraping is, you can make your way around it!
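The IMDB example above might be sketched with rvest as follows. The URL and CSS selectors here are assumptions based on the kind of output SelectorGadget produces, and will need updating whenever IMDB changes its markup:

```r
# Illustrative sketch of scraping an IMDB search-results page with rvest.
# The URL and all CSS selectors are assumptions and may be out of date.
library(rvest)

url  <- "https://www.imdb.com/search/title/?year=2019&sort=num_votes,desc"
page <- read_html(url)

titles  <- page %>% html_elements(".lister-item-header a") %>% html_text2()
runtime <- page %>% html_elements(".runtime") %>% html_text2()
genre   <- page %>% html_elements(".genre") %>% html_text2()
rating  <- page %>% html_elements(".ratings-imdb-rating strong") %>%
           html_text2() %>% as.numeric()

# Caution: if some films lack a field (e.g. runtime), these vectors
# will have unequal lengths and must be aligned before combining.
movies <- data.frame(title = titles, runtime = runtime,
                     genre = genre, rating = rating)
head(movies)
```

With the data in a data frame, the usual R toolkit (dplyr for summaries, ggplot2 for visualization) applies directly.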