I am new to web scraping in R and am trying to run a Google search from R and extract the result links automatically. I have been partially successful in obtaining the links of the Google search results using the RCurl and XML packages. However, the href links I extract include unwanted information and are not in the format of a plain URL.
The code I use is:
library(RCurl)
library(XML)
html <- getURL(u)
doc <- htmlParse(html)  # parse the raw HTML before running XPath queries
links <- xpathApply(doc, "//h3//a[@href]", xmlGetAttr, 'href')
links <- grep("http://", links, fixed = TRUE, value = TRUE)
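For reference, a runnable sketch of this RCurl/XML approach. The search URL and the `htmlParse` step are assumptions on my part: `getURL()` returns the page source as a character string, which has to be parsed into a document before XPath queries will work against it.

```r
library(RCurl)
library(XML)

# Fetch the search results page (URL is illustrative)
u <- 'https://www.google.co.in/search?q=guitar+repair+workshop'
html <- getURL(u)

# getURL() returns raw HTML text; parse it into a document
# so that XPath queries can be run against it
doc <- htmlParse(html)

# Extract the href attribute of every link under an <h3> heading
links <- xpathSApply(doc, "//h3//a[@href]", xmlGetAttr, 'href')

# Keep only the entries that actually contain an http:// URL
grep("http://", links, fixed = TRUE, value = TRUE)
```

Note that `xpathSApply` returns a character vector (rather than the list `xpathApply` returns), which is more convenient to filter with `grep`.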
You can use the rvest package (which also uses the
XML package but has a lot of handy features related to scraping):
library(rvest)
ht <- read_html('https://www.google.co.in/search?q=guitar+repair+workshop')
links <- ht %>% html_nodes(xpath = '//h3/a') %>% html_attr('href')
gsub('/url\\?q=', '', sapply(strsplit(links[as.vector(grep('url', links))], split = '&'), '[', 1))
"http://theguitarrepairworkshop.com/"
"http://www.justdial.com/Delhi-NCR/Guitar-Repair-Services/ct-134788"
"http://www.guitarrepairshop.com/"
"http://www.guitarworkshoponline.com/"
"http://www.guitarrepairbench.com/guitar-building-projects/guitar-workshop/guitar-workshop-project.html"
"http://www.guitarservices.com/"
"http://guitarworkshopglasgow.com/pages/repairs-1"
"http://brightonguitarworkshop.co.uk/"
"http://www.luth.org/resources/schools.html"
The fourth line in the code cleans the extracted text. It first splits each raw href (which comes with extra query-string garbage) on '&', then takes the first element of each split and replaces the '/url?q=' prefix with an empty string.
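To make the cleaning step concrete, here is the same strsplit/gsub logic applied to a single made-up raw href (the example value is illustrative, not actual Google output):

```r
# A raw href as it appears in a Google results page (illustrative value)
raw <- "/url?q=http://theguitarrepairworkshop.com/&sa=U&ved=0ahUKEwi"

# Split on '&' and keep the first piece:
# "/url?q=http://theguitarrepairworkshop.com/"
first_piece <- sapply(strsplit(raw, split = '&'), '[', 1)

# Strip the '/url?q=' prefix, leaving the plain URL
gsub('/url\\?q=', '', first_piece)
# "http://theguitarrepairworkshop.com/"
```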
Hope it helps!