ScrapeGoat - 3 months ago
Question

Scraping a website that includes JS/jQuery code with R

I am currently attending a course in machine learning and web scraping. As part of the final assignment, we have to scrape a website of our own choice and apply statistical learning to the data. We use R for the analysis. Disclaimer: I am a total newb in this field.

The Problem: I want to extract the hyperlinks from this website for different searches (don't be scared that it is in Danish). The hyperlinks can be found to the right of each result (v15, v14, v13, etc.) [example].
The website I am trying to scrape seems to load its search results through some kind of jQuery/JavaScript. This is based on my very limited knowledge of HTML and might be wrong.

I think this is why the following code does not work (I use the "rvest" package):

library(rvest)

sdslink <- "http://karakterstatistik.stads.ku.dk/#searchText=&term=&block=&institute=null&faculty=&searchingCourses=true&page=1"
s_link <- sdslink %>%
  read_html(encoding = "UTF-8") %>%
  html_nodes("#searchResults a") %>%
  html_attr("href")


I have found a method that works, but it requires me to download the pages manually with "right click" + "save as" for each page. This is infeasible, however, as I want to scrape a total of 100 pages for hyperlinks.
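
For reference, this is roughly what the working approach looks like on a manually saved page; the file name below is just a placeholder for one of the pages I saved by hand:

library(rvest)

# "saved_search_page.html" is a placeholder for a results page saved via "save as"
page <- read_html("saved_search_page.html", encoding = "UTF-8")
links <- page %>%
  html_nodes("#searchResults a") %>%
  html_attr("href")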

I have tried to use the jsonlite package combined with httr, but I cannot seem to find the right .json file.
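
For completeness, this is the kind of request I was attempting with httr and jsonlite; the endpoint path and query parameters below are made up, since I never managed to find the real .json URL:

library(httr)
library(jsonlite)

# Hypothetical endpoint and query parameters -- not the site's real API
resp <- GET("http://karakterstatistik.stads.ku.dk/some/search/endpoint",
            query = list(searchText = "", page = 1))
results <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))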

I hope you might have a solution: either to get the jsonlite approach to work, to automate the "save as" workaround, or a third, more clever path.

Answer

One approach is to use RSelenium. Here's some simple code to get you started. I assume you already have RSelenium and a webdriver installed. Navigate to your site of interest:

library(RSelenium)

# Start a local Selenium server and open a Chrome session
startServer()
remDr <- remoteDriver(remoteServerAddr = "localhost", port = 4444,
                      browserName = "chrome")
remDr$open(silent = TRUE)

# Go to the search page
remDr$navigate("http://karakterstatistik.stads.ku.dk/")

Find the submit button by inspecting the source:

webElem <- remDr$findElement("name", "submit")
webElem$clickElement()

Save the first 5 pages:

# Grab the source of the current page, then click "next" and wait for it to load
html_source <- vector("list", 5)
for (i in 1:5) {
  html_source[[i]] <- remDr$getPageSource()
  webElem <- remDr$findElement("id", "next")
  webElem$clickElement()
  Sys.sleep(2)
}
remDr$close()
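
From there you can parse the stored sources with rvest and pull out the hrefs you are after. Note that getPageSource() returns a one-element list, so the HTML string is its first element. A minimal sketch, reusing the "#searchResults a" selector from your own code:

library(rvest)

# Each element of html_source is a one-element list holding the page HTML
all_links <- unlist(lapply(html_source, function(src) {
  read_html(src[[1]], encoding = "UTF-8") %>%
    html_nodes("#searchResults a") %>%
    html_attr("href")
}))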