
Trying to loop through HTML tables and create a data frame

I am trying to create a dynamic loop that runs through multiple URLs, scrapes the data from each table, and concatenates everything into a single data frame. I have tried a few ideas, as illustrated below, but nothing has worked so far. This kind of thing is not really in my wheelhouse, but I'm trying to learn how it works. If someone can help me get this done, I would really appreciate it.

Thank you.

Static URL:
http://www.nfl.com/draft/2015/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:qb

library(rvest)
library(stringr)

#create a master dataframe to store all of the results
complete <- data.frame()

yearsVector <- c("2010", "2011", "2012", "2013", "2014", "2015")
positionVector <- c("qb", "rb", "wr", "te", "ol", "dl", "lb", "cb", "s")
for (i in 1:length(yearsVector))
{
  for (j in 1:length(positionVector))
  {
    #create a url template
    URL.base <- "http://www.nfl.com/draft/"
    URL.intermediate <- "/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:"
    #build the URL from the dynamic values
    URL <- paste0(URL.base, yearsVector, URL.intermediate, positionVector)
    #print(URL)

    #read the page - store the page to make debugging easier
    page <- read_html(URL)

    #this needs work since the page is dynamically generated
    DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill = TRUE)
    #about 530 names returned; may need to search for and extract the requested info

    #to find the players' last names
    lastnames <- str_locate_all(page, "lastName")[[1]]
    names <- str_sub(page, lastnames[, 2] + 4, lastnames[, 2] + 20)
    names <- str_extract(names, "[A-Z][a-zA-Z]*")

    length(names[-c(1:16)])
    #still need to delete the first 16 names (don't know if this is consistent across all years)

    #to find the players' positions
    positions <- str_locate_all(page, "pos")[[1]]
    ppositions <- str_sub(page, positions[, 2] + 4, positions[, 2] + 10)
    pos <- str_extract(ppositions, "[A-Z]*")

    pos <- pos[pos != ""]
    #still need to delete the first 16 positions (don't know if this is consistent across all years)

    #store the temp values into the master dataframe
    complete <- rbind(complete, DF)
  }
}


I edited my OP to incorporate your code, Dave. I think I am almost there, but something is not quite right. I'm getting this error:

Error in eval(substitute(expr), envir, enclos) : expecting a single value

I know the URL is right!

http://postimg.org/image/ccmvmnijr/

I think the problem is with this line:

page <- read_html(URL)


Or, maybe this line:

DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill = TRUE)


Can you help me get over the finish line here? Thanks!

Answer

Try this answer! I fixed the creation of the URL and set up a master data frame to store the requested information. The page is dynamically generated, so the standard table-scraping tools from rvest are not going to work. All of the player (about 16 fields), college, and pick information is stored in the page's source, though; it is just a matter of searching for it and extracting it.
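
For what it's worth, the "expecting a single value" error in your edit comes from the URL construction: paste0() vectorizes over its arguments, so paste0(URL.base, yearsVector, URL.intermediate, positionVector) builds the whole set of URLs at once inside the loop, and read_html() refuses anything but a single string. Indexing with yearsVector[i] and positionVector[j], as in the code below, fixes it. A minimal illustration (with a shortened, hypothetical URL):

library(rvest)

years <- c("2010", "2011")
#paste0() recycles over the vector, producing one URL per year
urls <- paste0("http://www.nfl.com/draft/", years, "/tracker")
length(urls)                  # 2 -- a character vector, not a single string
#read_html(urls)              # Error ... : expecting a single value
page <- read_html(urls[1])    # indexing hands read_html a single string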

library(rvest)
library(stringr)
library(dplyr)

#create a master dataframe to store all of the results
complete<-data.frame()

yearsVector <- c("2011", "2012", "2013", "2014", "2015")
#all position information is stored on each page, so there is no need to create separate queries
#positionVector <- c("qb", "rb", "wr", "te", "ol", "dl", "lb", "cb", "s")
positionVector <- c("qb")
for (i in 1:length(yearsVector)) 
{
  for (j in 1:length(positionVector)) 
  {
    # create a url template 
    URL.base<-"http://www.nfl.com/draft/"
    URL.intermediate <- "/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:"
    #create the dataframe with the dynamic values
    URL <- paste0(URL.base, yearsVector[i], URL.intermediate, positionVector[j])
    print(yearsVector[i])
    print(URL)

    #read the page - store the page to make debugging easier
    page<- read_html(URL)

    #the standard rvest table tools don't work here since the page is dynamically generated
    #DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill=TRUE)
    #about 539 names returned; search the page source and extract the requested info instead
    #find the records for each player
    playersloc<-str_locate_all(page, "\\{\"personId.*?\\}")[[1]]
    players<-str_sub(page, playersloc[,1]+1, playersloc[,2]-1)
    #replace ", " so suffixes like "Jr." don't break the comma split below
    players<-gsub(", ", "_", players  )

    #split and reshape the data in a data frame
    play2<-strsplit(gsub("\"", "", players), ',')
    data<-sapply(strsplit(unlist(play2), ":"), FUN=function(x){x[2]})
    df<-data.frame(matrix(data, ncol=16, byrow=TRUE))
    #name the column names
    names(df)<-sapply(strsplit(unlist(play2[1]), ":"), FUN=function(x){x[1]})

    #sort out the pick information
    picks<-str_locate_all(page, "\\{\"id.*?player.*?\\}")[[1]]
    picks<-str_sub(page, picks[,1]+1, picks[,2]-1)
    #fix the cases where there are commas in the notes section.
    picks<-gsub(", ", "_", picks  )
    picks<-strsplit(gsub("\"", "", picks), ',')
    data<-sapply(strsplit(unlist(picks), ":"), FUN=function(x){x[2]})
    picksdf<-data.frame(matrix(data, ncol=6, byrow=TRUE))
    names(picksdf)<-sapply(strsplit(unlist(picks[1]), ":"), FUN=function(x){x[1]})

    #sort out the college information
    schools<-str_locate_all(page, "\\{\"id.*?name.*?\\}")[[1]]
    schools<-str_sub(page, schools[,1]+1, schools[,2]-1)
    schools<-strsplit(gsub("\"", "", schools), ',')
    data<-sapply(strsplit(unlist(schools), ":"), FUN=function(x){x[2]})
    schoolsdf<-data.frame(matrix(data, ncol=3, byrow=TRUE))
    names(schoolsdf)<-sapply(strsplit(unlist(schools[1]), ":"), FUN=function(x){x[1]})

    #merge the 3 tables together
    df<-left_join(df, schoolsdf, by=c("college" =  "id"))
    df<-left_join(df, picksdf, by=c("pick" =  "id"))

    #store the temp values into the master dataframe
    complete<-rbind(complete, df)
  }
}
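
As an aside: each matched record is essentially a small JSON object, so instead of the manual gsub()/strsplit() reshaping you could hand the matches straight to a JSON parser. A sketch of that idea for the player records, which assumes jsonlite is installed, that every record carries the same fields, and that the blobs are valid standalone JSON (the comma-in-the-notes issue above suggests verifying that first); it would replace the strsplit block inside the loop:

library(jsonlite)
library(stringr)

#work on the raw HTML source rather than the parsed document
src <- as.character(page)
#grab each player record, braces intact this time
raw <- str_extract_all(src, "\\{\"personId.*?\\}")[[1]]
#parse every record into a one-row data frame and stack them
df <- do.call(rbind, lapply(raw, function(x) {
  as.data.frame(fromJSON(x), stringsAsFactors = FALSE)
}))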

Figuring out the correct regular expressions to find and locate the required information was tricky. It looks like the data from 2010 uses a different format for the college information, so I left that year out.
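
If you ever do want 2010 back in, one option (just a sketch, not tested against that year's format) is to wrap the body of the loop in tryCatch() so a year whose page uses a different layout gets skipped instead of aborting the whole run:

for (i in 1:length(yearsVector))
{
  tryCatch({
    #...the scraping and parsing code from above...
  }, error = function(e) {
    #skip years whose pages don't match the expected format
    message("skipping ", yearsVector[i], ": ", conditionMessage(e))
  })
}

Also, please make sure you are not violating the terms of service on this site. Good luck!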