
How do I scrape html tables using the XML package?

Take, for example, this Wikipedia page on the Brazilian soccer team. I would like to read it into R and get the "list of all matches Brazil have played against FIFA recognised teams" table as a data.frame. How can I do this?

 Answers

97

…or a shorter try:

library(XML)
library(RCurl)
library(rlist)
# fetch the raw HTML; ssl.verifypeer = FALSE sidesteps certificate verification
theurl <- getURL("https://en.wikipedia.org/wiki/Brazil_national_football_team", .opts = list(ssl.verifypeer = FALSE))
# parse every <table> on the page into a list of data.frames
tables <- readHTMLTable(theurl)
tables <- list.clean(tables, fun = is.null, recursive = FALSE)
# count the rows of each parsed table
n.rows <- unlist(lapply(tables, function(t) dim(t)[1]))

The table we want happens to be the longest one on the page:

tables[[which.max(n.rows)]]
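
If the XML/RCurl stack is hard to install, here is a roughly equivalent sketch using rvest; it assumes the same page layout and simply re-implements the pick-the-longest-table idea, so treat it as a sketch rather than tested output:

library(rvest)
# read and parse the page, then turn every <table> into a data frame
page <- read_html("https://en.wikipedia.org/wiki/Brazil_national_football_team")
tables <- html_table(html_nodes(page, "table"))
# as above, pick the longest table on the page
tables[[which.max(sapply(tables, nrow))]]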
Tak · answered Tuesday, June 1, 2021
32

Maybe like this:

library(rvest)
library(xml2)
html <- read_html("http://en.wikipedia.org/wiki/List_of_Justices_of_the_Supreme_Court_of_the_United_States")
judges <- html_table(html_nodes(html, "table")[[2]])
head(judges[[2]])
# [1] "Wilson, JamesJames Wilson"       "Jay, JohnJohn Jay†"              "Cushing, WilliamWilliam Cushing" "Blair, JohnJohn Blair, Jr."
# [5] "Rutledge, JohnJohn Rutledge"     "Iredell, JamesJames Iredell"

The doubled names come from hidden sort-key spans inside the cells; remove those spans and re-read the table to get clean names:

# drop the hidden <span> in the second column of each row, then re-parse
xml_remove(xml_find_all(html, "//table//tr/td[2]/span"))
judges <- html_table(html_nodes(html, "table")[[2]])
head(judges[[2]])
# [1] "James Wilson"    "John Jay†"       "William Cushing" "John Blair, Jr." "John Rutledge"   "James Iredell"
Noob_Programmer · answered Wednesday, August 4, 2021
11

I'd approach the issue in this manner:

First, there's an error message. What does it say?

Join results in 121229 rows; more than 100000 = max(nrow(x),nrow(i)). Check for duplicate key values in i, each of which join to the same group in x over and over again. If that's ok, try including j and dropping by (by-without-by) so that j runs for each group to avoid the large allocation. If you are sure you wish to proceed, rerun with allow.cartesian=TRUE. Otherwise, please search for this error message in the FAQ, Wiki, Stack Overflow and datatable-help for advice.

Great! But I'm working with so many datasets, so many packages, and so many functions. I need to narrow down which data set produces this error.

Testing one by one:

ans1 = merge(as.data.table(dataSets[[1]]), as.data.table(dataSets[[2]]), 
                all.x=TRUE, all.y=FALSE, by="Project ID")
## works fine.

ans2 = merge(as.data.table(dataSets[[1]]), as.data.table(dataSets[[3]]), 
                all.x=TRUE, all.y=FALSE, by="Project ID")
## same error

Aha, got the same error.

Reading the second line of the error message:

So, something seems to be going on with dataSets[[3]]. The message says to check for duplicate key values in i. Let's do that:

dim(dataSets[[3]])
# [1] 81487     3
dim(unique(as.data.table(dataSets[[3]]), by="Project ID"))
# [1] 49999     3

So, dataSets[[3]] has duplicated 'Project ID' values, and for each duplicated value, all the matching rows from dataSets[[1]] are returned - which is what the second part of the second line explains: each of which join to the same group in x over and over again.
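
To see that row inflation concretely, here's a minimal sketch on made-up toy tables (mine, not from the question); they're too small to trip the error threshold itself, but the multiplication mechanism is the same:

library(data.table)
# toy tables, made up for illustration
x <- data.table(`Project ID` = c(1, 2), val.x = c("a", "b"))
i <- data.table(`Project ID` = c(1, 1, 1), val.i = c("p", "q", "r"))  # duplicated key
# each duplicate of key 1 in i re-joins the single matching row of x
merge(x, i, by = "Project ID", all.x = TRUE, allow.cartesian = TRUE)
#    Project ID val.x val.i
# 1:          1     a     p
# 2:          1     a     q
# 3:          1     a     r
# 4:          2     b  <NA>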

Trying out allow.cartesian=TRUE:

I know that there are duplicate keys and still wish to proceed. The error message tells us how: add allow.cartesian=TRUE.

ans2 = merge(as.data.table(dataSets[[1]]), as.data.table(dataSets[[3]]), 
                all.x=TRUE, all.y=FALSE, by="Project ID", allow.cartesian=TRUE)

Aha, now it works fine! So what does allow.cartesian = TRUE do? Why was it added? The error message says to search for the message on Stack Overflow (among other places).

Searching for allow.cartesian=TRUE on SO:

And the search lands me on the question Why is allow.cartesian required at times when joining data.tables with duplicate keys?, which explains the purpose, and which also contains, in a comment, another link from @Roland: Merging data.tables uses more than 10 GB RAM, which points to the initial issue that started it all. Let me read those posts now.


Is base::merge giving a different result?

Now, does base::merge return a different result (say, one within the 100,000-row limit)?

dim(merge(dataSets[[1]], dataSets[[3]], all.x=TRUE, all.y=FALSE, by="Project ID"))
# [1] 121229      4

Not really. It gives the same dimensions as data.table; it just doesn't care whether there are duplicate keys, whereas data.table warns you of a potential explosion in the merged result and lets you make an informed decision.
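
On the toy tables from the sketch above (again made-up data, not the question's), base::merge behaves the same way, just silently:

df.x <- data.frame(`Project ID` = c(1, 2), val.x = c("a", "b"), check.names = FALSE)
df.i <- data.frame(`Project ID` = c(1, 1, 1), val.i = c("p", "q", "r"), check.names = FALSE)
# same four rows as the data.table join, but no warning about the duplicate keys
merge(df.x, df.i, by = "Project ID", all.x = TRUE)
#   Project ID val.x val.i
# 1          1     a     p
# 2          1     a     q
# 3          1     a     r
# 4          2     b  <NA>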

ajay · answered Tuesday, August 24, 2021
12

The problem is that there are irrelevant li tags that don't contain the data you need.

Be more specific. For example, if you want to get the list of events from the "20th century" section, first find the header, then get the list of events from its parent's following ul sibling. Also, not every item in the list has its date in the %B %d, %Y format - you need to handle that via a try/except block:

from datetime import datetime
from urllib.request import urlopen

from bs4 import BeautifulSoup


page1 = urlopen("http://en.wikipedia.org/wiki/List_of_human_stampedes")
soup = BeautifulSoup(page1, "html.parser")

# the "20th century" heading's parent is followed by the <ul> of events
events = soup.find('span', id='20th_century').parent.find_next_sibling('ul')
for event in events.find_all('li'):
    try:
        # items look like "September 19, 1902: ..."; reformat the date part
        date_string, rest = event.text.split(':', 1)
        print(datetime.strptime(date_string, '%B %d, %Y').strftime('%d/%m/%Y'))
    except ValueError:
        # no leading date in the expected format - print the item as-is
        print(event.text)

Prints:

19/09/1902
30/12/1903
11/01/1908
24/12/1913
23/10/1942
09/03/1946
1954 500-800 killed at Kumbha Mela, Allahabad.
01/01/1956
02/01/1971
03/12/1979
20/10/1982
29/05/1985
13/03/1988
20/08/1988

Updated version (getting all ul groups under a century):

# walk every sibling after the "20th century" heading until the next <h2>
events = soup.find('span', id='20th_century').parent.find_next_siblings()
for tag in events:
    if tag.name == 'h2':
        break
    for event in tag.find_all('li'):
        try:
            date_string, rest = event.text.split(':', 1)
            print(datetime.strptime(date_string, '%B %d, %Y').strftime('%d/%m/%Y'))
        except ValueError:
            print(event.text)
Remy Lebeau · answered Thursday, September 2, 2021
37

I thought about using XSLT, which would let me build a small XML document out of the HTML and then read it easily.

Unlikely. Most HTML is not valid XML.

What other ways could I parse HTML without processing the content directly as a plain string?

To parse HTML, you use an HTML parser. There are several open source ones, and it is reasonably likely that one or more will work on Android with few modifications, if any.

Michael · answered Wednesday, October 20, 2021