
I have hundreds of medium-sized Excel files (between 5,000 and 50,000 rows with about 100 columns) to load into R. They have a well-defined naming pattern, like x_1.xlsx, x_2.xlsx, etc.

How can I load these files into R in the fastest, most straightforward way?

 Answers


With list.files you can create a character vector of all the filenames in your working directory. Next you can use lapply to loop over that vector and read each file with the read_excel function from the readxl package:

library(readxl)
file.list <- list.files(pattern = '\\.xlsx$')  # note: pattern is a regular expression, not a glob
df.list <- lapply(file.list, read_excel)

This method can of course also be used with other file-reading functions like read.csv or read.table. Just replace read_excel with the appropriate function and make sure you use the correct pattern in list.files.
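For example, a minimal sketch of the same pattern for CSV files:

file.list <- list.files(pattern = '\\.csv$')
df.list <- lapply(file.list, read.csv)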

If you also want to include the files in subdirectories, use:

file.list <- list.files(pattern = '\\.xlsx$', recursive = TRUE)

Other possible packages for reading Excel files: openxlsx & xlsx.
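For instance, a minimal sketch with openxlsx (its read.xlsx reads the first sheet by default):

library(openxlsx)
df.list <- lapply(file.list, read.xlsx)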


Supposing the columns are the same for each file, you can bind them together into one data frame with bind_rows from dplyr:

library(dplyr)
df <- bind_rows(df.list, .id = "id")

or with rbindlist from data.table:

library(data.table)
df <- rbindlist(df.list, idcol = "id")

Both have an option to add an id column for identifying the separate datasets.
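If the columns do not match exactly across files, rbindlist can still stack them when you set fill = TRUE, padding missing columns with NA (bind_rows does this automatically):

df <- rbindlist(df.list, idcol = "id", fill = TRUE)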


Update: If you don't want a numeric identifier, just use sapply with simplify = FALSE to read the files in file.list:

df.list <- sapply(file.list, read.csv, simplify = FALSE)

Because sapply with simplify = FALSE returns a list named after the elements of file.list, the id column will now contain the filenames when you bind the list with bind_rows from dplyr or rbindlist from data.table.
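For the Excel files from the question, a sketch of that whole pipeline might look like this:

library(readxl)
library(dplyr)

file.list <- list.files(pattern = '\\.xlsx$')
df.list <- sapply(file.list, read_excel, simplify = FALSE)  # list named by filename
df <- bind_rows(df.list, .id = "id")                        # id column = filename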

Yet another approach is to use the purrr package:

library(purrr)
file.list <- list.files(pattern = '\\.csv$')
file.list <- setNames(file.list, file.list) # only needed when you want an id column with the filenames

df <- map_df(file.list, read.csv, .id = "id")
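As a side note, in purrr 1.0.0 and later map_df is superseded; the same result can be had with map followed by list_rbind (the native pipe requires R 4.1+):

df <- map(file.list, read.csv) |> list_rbind(names_to = "id")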

Other approaches to getting a named list: if you don't want just a numeric identifier, you can assign the filenames as names to the data frames in the list before binding them together. There are several ways to do this:

# with the 'attr' function from base R
attr(df.list, "names") <- file.list
# with the 'names' function from base R
names(df.list) <- file.list
# with the 'setattr' function from the 'data.table' package
setattr(df.list, "names", file.list)

Now you can bind the list of data frames together into one data frame with rbindlist from data.table or bind_rows from dplyr. The id column will then contain the filenames instead of a numeric identifier.

medhybrid, answered June 1, 2021

Thanks for all the answers!

In the meantime, I also hacked together a method of my own. Let me know if it is useful:

# scan() is in base R, so no extra packages are needed
setwd("/path/to/directory")

files <- list.files()

data <- character(0)

for (f in files) {
  tempData <- scan(f, what = "character")
  data <- c(data, tempData)
}
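A side note on this pattern: growing a vector with c() inside a loop copies the data on every iteration, which gets slow with hundreds of files, and scan only works on plain-text files (for .xlsx you would still need read_excel or similar). A sketch of the same idea that collects the results in a list and combines them once at the end:

files <- list.files()
data.list <- lapply(files, scan, what = "character")
data <- unlist(data.list, use.names = FALSE)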
Asher, answered June 1, 2021

This should be a one-liner:

library('haven')
sas <- read_sas('nats2012.sas7bdat', 'formats.sas7bcat')

with(sas, table(SMOKSTATUS_R, RACEETHNIC))
#             RACEETHNIC
# SMOKSTATUS_R     1     2     3     4     5     6     7     8     9
#            1  4045   455    55     7    63     0   675   393   373
#            2  1183   222    38     2    26     0   217   255   154
#            3 14480   957   238    14    95     3  1112   950   369
#            4 23923  2532  1157    23   147     1  1755  3223   909
#            5    81    18     4     0     1     0    11    17     9

table(names(attr(sas[, 'SMOKSTATUS_R'], 'labels')[sas[, 'SMOKSTATUS_R']]),
      names(attr(sas[, 'RACEETHNIC'], 'labels')[sas[, 'RACEETHNIC']]))

#                          Amer. Indian, AK Nat. Only, Non-Hispanic
# Current everyday smoker                                        63
# Current some days smoker                                       26
# Former smoker                                                  95
# Never smoker                                                  147
# Unknown                                                         1

haven not only reads in the data but also gives you some useful attributes, namely the variable labels:

attributes(sas$SMOKSTATUS_R)
# $label
# [1] "SMOKER STATUS (4-level)"
# 
# $class
# [1] "labelled"
# 
# $labels
# Current everyday smoker Current some days smoker            Former smoker 
#                       1                        2                        3 
#             Never smoker                  Unknown 
#                        4                        5 
# 
# $is_na
# [1] FALSE FALSE FALSE FALSE FALSE

You can easily wrap this in a function for more general use:

do_fmt <- function(x, fmt) {
  ## use the supplied format if given, otherwise the 'labels' attribute
  lbl <- if (!missing(fmt))
    unlist(unname(fmt)) else attr(x, 'labels')

  if (!is.null(lbl))
    tryCatch(names(lbl[match(unlist(x), lbl)]),
             error = function(e) {
               message(sprintf('formatting failed for %s', attr(x, 'label')),
                       domain = NA)
               x
             }) else x
}

table(do_fmt(sas[, 'SMOKSTATUS_R']),
      do_fmt(sas[, 'RACEETHNIC']))

#                          Amer. Indian, AK Nat. Only, Non-Hispanic
# Current everyday smoker                                        63
# Current some days smoker                                       26
# Former smoker                                                  95
# Never smoker                                                  147
# Unknown                                                         1

And apply it to the entire data set:

sas[] <- lapply(sas, do_fmt)
sas$SMOKSTATUS_R[1:4]
# [1] "Never smoker"  "Former smoker" "Former smoker" "Never smoker" 

Although sometimes this fails, as below; this looks like something wrong with the haven package (note the nonsensical label values like -0.625):

attr(sas$SMOKTYPE, 'labels')
# INAPPLICABLE            REFUSED                 DK    NOT ASCERTAINED 
#     -4.00000           -0.62500           -0.50000           -0.46875 
# PREMADE CIGARETTES      ROLL-YOUR-OWN               BOTH 
#            1.00000            2.00000            3.00000 

So instead of relying on those, you can parse the formats .sas file with some simple regexes:

# locf: carry the last non-empty value forward to fill the blanks
locf <- function(x) {
  x <- data.frame(x, stringsAsFactors = FALSE)
  x[x == ''] <- NA
  indx <- !is.na(x)

  x[] <- lapply(seq_along(x), function(ii) {
    idx <- cumsum(indx[, ii])
    idx[idx == 0] <- NA
    x[, ii][indx[, ii]][idx]
  })
  x[, 1]
}

fmt <- readLines('~/desktop/2012-2013-NATS-Format/2012-2013-NATS-Format.sas')
## not sure if comments are allowed in the value definitions, but
## this will check for those in case
fmt <- gsub('\\*.*;|/\\*.*\\*/', '', fmt)

vars <- gsub('(?i)value\\W+(\\w*)|.', '\\1', fmt, perl = TRUE)
vars <- locf(vars)

regex <- '[\'"].*[\'"]|[\\w\\d-]+'
vals <- gsub(sprintf('(?i)\\s*(%s)\\s*(=)\\s*(%s)|.', regex, regex),
               '\\1\\2\\3', fmt, perl = TRUE)

View(dd <- na.omit(data.frame(values = vars, formats = vals,
                              stringsAsFactors = FALSE)))

sp <- split(dd$formats, dd$values)
sp <- lapply(sp, function(x) {
  x <- Filter(nzchar, x)
  x <- strsplit(x, '=')
  tw <- function(x) gsub('^\\s+|\\s+$', '', x)
  sapply(x, function(y)
    setNames(tw(y[1]), tw(y[2])))
})

So the smoke type format (one of the ones that failed above), for example, gets parsed like this:

sp['A5_']
# $A5_
# 'INAPPLICABLE'            'REFUSED'                 'DK' 
#           "-1"                 "-7"                 "-8" 
# 'NOT ASCERTAINED' 'PREMADE CIGARETTES'      'ROLL-YOUR-OWN'  'BOTH' 
#              "-9"                  "1"                  "2"     "3" 

And then you can use the function again to apply the parsed formats to the data:

table(do_fmt(sas['SMOKTYPE'], sp['A5_']))

#               'BOTH'                 'DK'       'INAPPLICABLE' 
#                  736                   17                51857 
# 'PREMADE CIGARETTES'            'REFUSED'      'ROLL-YOUR-OWN' 
#                 7184                    2                  396 
Anders Andersen, answered August 3, 2021

Try this:
Sys.setlocale(category = "LC_ALL", locale = "Greek")

user505210, answered August 4, 2021

I posted this question after I had already found the answer, just because this was such a disaster for me and had plagued me for months. Hopefully this post will keep someone else from pulling their hair out like I did for so long.

The fix is pretty simple. What happens is that every time Excel imports an XML file, it stores an XmlMap in that workbook. So if you keep using and saving the same workbook, these maps build up over time. The best solution I have found is to delete these XML maps, or to reuse the same one (the latter is often not a good option).

Here is some code to delete all of them (I just run this before the code I included in my question):

Dim xm As XmlMap
    For Each xm In ActiveWorkbook.XmlMaps
        xm.Delete
    Next xm
mfro, answered November 15, 2021