I'm trying to use the `rvest` package to scrape the forecast tables from FiveThirtyEight's 2016 election page for Washington:
```r
library(rvest)

html_content <- read_html("https://projects.fivethirtyeight.com/2016-election-forecast/washington/#now")
tables <- html_nodes(html_content, xpath = '//table')
```
but `tables` only picks up these four static tables, not the forecast data itself:

```
<table class="tippingpointroi unexpanded">\n <tbody>\n <tr data-state="FL" class=" "> ...
<table class="tippingpointroi unexpanded">\n <tbody>\n <tr data-state="NV" class=" "> ...
<table class="scenarios">\n <tbody/>\n <tr data-id="1">\n <td class="description">El ...
<table class="t-desktop t-polls">\n <thead>\n <tr class="th-row">\n <th class="t ...
```
You can use RSelenium to grab the text of the page after it's rendered and pass that into rvest, or grab a treasure trove of all the data by evaluating the page's embedded JavaScript with V8:
```r
library(rvest)
library(V8)

URL <- "http://projects.fivethirtyeight.com/2016-election-forecast/washington/#now"
pg <- read_html(URL)

# Find the <script> tag that defines the "race.model" data object
js <- html_nodes(pg, xpath = ".//script[contains(., 'race.model')]") %>%
  html_text()

# Evaluate that JavaScript in a V8 context, then pull the object into R
ctx <- v8()
ctx$eval(JS(js))
race <- ctx$get("race", simplifyVector = FALSE)

str(race)  # output too large to paste here
```
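To see the script-scraping pattern in isolation, here is a minimal, self-contained sketch of the same idea; the inline HTML and the tiny `race` object in it are made up for illustration, standing in for FiveThirtyEight's much larger `race.model` payload:

```r
library(rvest)
library(V8)

# Hypothetical page: a <script> tag defining a "race" object
html <- '<html><body><script>var race = {"state": "WA", "weight": 0.5};</script></body></html>'
pg <- read_html(html)

# Pull out the JavaScript source as plain text
js <- html_text(html_nodes(pg, xpath = ".//script[contains(., 'race')]"))

# Run it in a V8 context and read the resulting object back as an R list
ctx <- v8()
ctx$eval(JS(js))
race <- ctx$get("race", simplifyVector = FALSE)

race$state   # "WA"
```

Everything here runs entirely in R, with no browser involved, which is what makes this route attractive when the data you want is shipped inline with the page.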
The V8 approach will keep working as long as they don't change the name or shape of the `race` object (unlikely mid-race, but you never know), while the RSelenium approach will be better provided they don't change the format of the table structure (again, unlikely, but you never know).
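For completeness, the RSelenium route looks roughly like this. This is a sketch only: it assumes a browser driver (e.g. geckodriver) is installed locally for `rsDriver()` to use, and the fixed sleep is a crude placeholder for a proper wait:

```r
library(RSelenium)
library(rvest)

# Start a local Selenium server plus browser (assumes a driver is installed)
rd <- rsDriver(browser = "firefox", verbose = FALSE)
remDr <- rd$client

remDr$navigate("https://projects.fivethirtyeight.com/2016-election-forecast/washington/#now")
Sys.sleep(5)  # crude wait for the JavaScript to finish rendering the tables

# Hand the fully rendered DOM back to rvest
rendered <- read_html(remDr$getPageSource()[[1]])
tables <- html_nodes(rendered, xpath = "//table")

remDr$close()
rd$server$stop()
```

At this point `tables` should contain the rendered forecast tables as well as the static ones, and `html_table()` can take it from there.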