How to order factors by level frequency and level name

Intro

Quite frequently, factor variables are ordered by level frequency. However, factor levels with only a few observations are often collapsed into a single level, usually named “Others”. Since this level is usually not of particular interest, it may be a good idea to place it in the last position of the plot rather than ordering it by level frequency. In this blog post, I’m going to show how to order a factor variable by level frequency and level name.

To replicate the R code I’m going to use in this post, four R packages must be loaded:

library(dplyr)       # for data manipulation
library(stringr)     # for string manipulation (str_extract, str_replace)
library(ggplot2)     # for plotting data
library(gghighlight) # ggplot2 extension for highlighting values

The dataset I’m going to use in this post (mtcars) is part of the datasets package.

head(mtcars)
##                    mpg cyl disp  hp drat    wt  qsec vs am gear carb
## Mazda RX4         21.0   6  160 110 3.90 2.620 16.46  0  1    4    4
## Mazda RX4 Wag     21.0   6  160 110 3.90 2.875 17.02  0  1    4    4
## Datsun 710        22.8   4  108  93 3.85 2.320 18.61  1  1    4    1
## Hornet 4 Drive    21.4   6  258 110 3.08 3.215 19.44  1  0    3    1
## Hornet Sportabout 18.7   8  360 175 3.15 3.440 17.02  0  0    3    2
## Valiant           18.1   6  225 105 2.76 3.460 20.22  1  0    3    1

In the first code chunk, we:

  • extract the first word of each car name and write it into a new variable called “name”,
  • recode all car brands starting with “M” (Mazda, Merc, Maserati) to “Others” and store the result in a new variable called “brand”, and
  • calculate the median miles per gallon (mpg) for each car brand.
df.mtcars <- mtcars %>%
  mutate(name = str_extract(rownames(.), "^\\w+\\b"),
         brand = str_replace(name, "^M\\w+", 'Others')) %>%
  group_by(brand) %>%
  summarize(mpg = median(mpg))
df.mtcars$brand
##  [1] "AMC"      "Cadillac" "Camaro"   "Chrysler" "Datsun"   "Dodge"   
##  [7] "Duster"   "Ferrari"  "Fiat"     "Ford"     "Honda"    "Hornet"  
## [13] "Lincoln"  "Lotus"    "Others"   "Pontiac"  "Porsche"  "Toyota"  
## [19] "Valiant"  "Volvo"

In the following code chunk, we reorder the brand variable with the reorder() function, which sorts the factor levels by the values of its second argument (here the median mpg per brand).

df.mtcars <- df.mtcars %>%
  mutate(brand = as.factor(brand),
         brand = reorder(brand, mpg))
levels(df.mtcars$brand)
##  [1] "Cadillac" "Lincoln"  "Camaro"   "Duster"   "Chrysler" "AMC"     
##  [7] "Dodge"    "Ford"     "Valiant"  "Others"   "Pontiac"  "Ferrari" 
## [13] "Hornet"   "Volvo"    "Datsun"   "Porsche"  "Toyota"   "Fiat"    
## [19] "Honda"    "Lotus"

As we can see, the bar representing the “Others” level is roughly in the middle of the plot.

ggplot(df.mtcars, aes(brand, mpg, fill = brand)) +
  coord_flip() +
  geom_col(width = 0.5) +
  gghighlight(brand == 'Others', unhighlighted_colour = "cornflowerblue") +
  scale_fill_manual(values = c("grey")) +
  theme_bw() +
  theme(legend.position = 'none') +
  labs(x = NULL, 
       y = 'Miles per Gallon',
       title = "Factor variable ordered by level frequency")

[Plot: Factor variable ordered by level frequency]

To put the bar representing the “Others” level at the bottom of the plot, we set “Others” as the reference (i.e. first) level using the relevel() function; since coord_flip() is used, the first factor level appears at the bottom of the plot.

df.mtcars <- df.mtcars %>%
  mutate(brand = relevel(brand, ref = "Others"))
levels(df.mtcars$brand)
##  [1] "Others"   "Cadillac" "Lincoln"  "Camaro"   "Duster"   "Chrysler"
##  [7] "AMC"      "Dodge"    "Ford"     "Valiant"  "Pontiac"  "Ferrari" 
## [13] "Hornet"   "Volvo"    "Datsun"   "Porsche"  "Toyota"   "Fiat"    
## [19] "Honda"    "Lotus"
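
An equivalent approach — assuming the forcats package is installed — is fct_relevel(), which also moves the named level to the front of the level order:

library(forcats)

df.mtcars <- df.mtcars %>%
  mutate(brand = fct_relevel(brand, "Others"))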

Finally, the bar representing the “Others” level appears at the desired position.

ggplot(df.mtcars, aes(brand, mpg, fill = brand)) +
  coord_flip() +
  geom_col(width = 0.5) +
  gghighlight(brand == 'Others', unhighlighted_colour = "cornflowerblue") +
  scale_fill_manual(values = c("grey")) +
  theme_bw() +
  theme(legend.position = 'none') +
  labs(x = NULL, 
       y = 'Miles per Gallon',
       title = "Factor variable ordered by level frequency and level name")

[Plot: Factor variable ordered by level frequency and level name]

PS: In both plots, the gghighlight() function of the gghighlight package was used to highlight the desired factor level.


How to Correctly Format Postal Codes with a Leading Zero

Intro

When postal codes are imported into R from other data formats (e.g. Excel, Access), it is not uncommon for them to actually be stored internally as numbers. As a look at a dataset freely available on the internet shows, this can lead to the following problem:

library(dplyr)
library(readxl)
mydata <- readxl::read_xlsx("Liste-der-PLZ-in-Excel-Karte-Deutschland-Postleitzahlen.xlsx")
head(mydata)
## # A tibble: 6 x 4
##     PLZ Bundesland Kreis   Typ  
##   <dbl> <chr>      <chr>   <chr>
## 1  1067 Sachsen    Dresden Stadt
## 2  1069 Sachsen    Dresden Stadt
## 3  1097 Sachsen    Dresden Stadt
## 4  1099 Sachsen    Dresden Stadt
## 5  1108 Sachsen    Dresden Stadt
## 6  1109 Sachsen    Dresden Stadt

The postal codes of the cities and municipalities in the federal states of Sachsen, Sachsen-Anhalt and Thüringen are stripped of their leading zero and displayed as four-digit numbers.

To give the affected postal codes their leading zero back, I wrote the function plz_repair().

plz_repair <- function(x){
  # prepend a leading zero to four-digit postal codes; return a character vector
  ifelse(nchar(x) == 4, paste0('0', x), as.character(x))
}

The function first checks whether the postal code has four digits. If this condition is met, a leading zero is prepended, resulting in a five-digit postal code. Postal codes that already have five digits remain unchanged. The repaired PLZ variable is stored as a character string.

mydata <- mydata %>%
  mutate(PLZ = plz_repair(PLZ))
head(mydata)
## # A tibble: 6 x 4
##   PLZ   Bundesland Kreis   Typ  
##   <chr> <chr>      <chr>   <chr>
## 1 01067 Sachsen    Dresden Stadt
## 2 01069 Sachsen    Dresden Stadt
## 3 01097 Sachsen    Dresden Stadt
## 4 01099 Sachsen    Dresden Stadt
## 5 01108 Sachsen    Dresden Stadt
## 6 01109 Sachsen    Dresden Stadt
tail(mydata)
## # A tibble: 6 x 4
##   PLZ   Bundesland Kreis                 Typ  
##   <chr> <chr>      <chr>                 <chr>
## 1 99986 Thüringen  Unstrut-Hainich-Kreis Kreis
## 2 99988 Thüringen  Unstrut-Hainich-Kreis Kreis
## 3 99991 Thüringen  Unstrut-Hainich-Kreis Kreis
## 4 99994 Thüringen  Unstrut-Hainich-Kreis Kreis
## 5 99996 Thüringen  Unstrut-Hainich-Kreis Kreis
## 6 99998 Thüringen  Unstrut-Hainich-Kreis Kreis
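
As an aside, the same repair can be done in one step with base R's sprintf(); the sketch below assumes it replaces the mutate(PLZ = plz_repair(PLZ)) call above:

# pad postal codes to five digits with a leading zero
mydata <- mydata %>%
  mutate(PLZ = sprintf("%05d", as.integer(PLZ)))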

Scoring the PHQ-9 Questionnaire Using R

Intro

The PHQ-9 is the nine-item depression module of the Patient Health Questionnaire. Each of the items is scored on a 4-point Likert scale ranging from 0 (not at all) to 3 (nearly every day). The items are summed to obtain a total score ranging from 0 to 27 with higher scores indicating greater severity of depression. Based on the total score, different levels of severity can be evaluated with 0–4, 5–9, 10–14, 15–19 and 20–27 points indicating “minimal”, “mild”, “moderate”, “moderately severe” and “severe” depression.
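
As a small illustration — the scoring function below uses case_when() for this step — the severity bands can also be expressed with base R's cut():

# hypothetical total scores, one per severity band
scores <- c(3, 7, 12, 17, 23)
cut(scores,
    breaks = c(-Inf, 4, 9, 14, 19, 27),
    labels = c('minimal', 'mild', 'moderate', 'moderately severe', 'severe'))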

The PHQ-9 questionnaire may be found under the following link.

In this blog post, I show how to calculate the PHQ-9 score and the PHQ-9 severity levels.

Packages and data

The dataset we are going to use was published in PLOS ONE. The file has a Digital Object Identifier (DOI) and may be imported into R using the read_delim() function of the readr package.

library(readr)
library(dplyr)
library(ggplot2)

df.phq9 <- readr::read_delim("https://doi.org/10.1371/journal.pone.0156167.s001", 
                             delim = ";", 
                             escape_double = FALSE, 
                             trim_ws = TRUE) %>%
            select(starts_with('phq9'))

glimpse(df.phq9)
## Observations: 1,337
## Variables: 9
## $ phq9_1 <int> 1, 3, 2, 0, 0, 0, 1, 0, 0, 2, 1, 1, 0, 3, 0, 0, 0, 2, 0...
## $ phq9_2 <int> 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0...
## $ phq9_3 <int> 3, 2, 2, 2, 1, 0, 1, 3, 1, 0, 1, 1, 0, 3, 1, 0, 0, 0, 0...
## $ phq9_4 <int> 1, 1, 1, 1, 1, 1, 1, 1, 0, 2, 1, 3, 0, 1, 0, 0, 0, 1, 0...
## $ phq9_5 <int> 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 2, 0, 1, 0, 0, 0, 0, 0...
## $ phq9_6 <int> 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0...
## $ phq9_7 <int> 0, 1, 1, 1, 0, 1, 0, 0, 0, 3, 1, 1, 0, 1, 0, 0, 0, 0, 0...
## $ phq9_8 <int> 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0...
## $ phq9_9 <int> 0, 0, 0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...

The Scoring Function

The scoring_phq9() function requires a data frame containing the PHQ-9 items (data) and a character vector containing the items' names (items.phq9) as input parameters. Internally, it counts the number of valid (non-missing) items per respondent, imputes missing items with the rounded person mean, and returns a total score only for respondents with at least seven valid items.

scoring_phq9 <- function(data, items.phq9) {
  data %>%
    mutate(nvalid.phq9 = rowSums(!is.na(select(., items.phq9))),
           nvalid.phq9 = as.integer(nvalid.phq9),
           mean.temp = rowSums(select(., items.phq9), na.rm = TRUE)/nvalid.phq9,
           phq.01.temp = as.integer(unlist(data[items.phq9[1]])),
           phq.02.temp = as.integer(unlist(data[items.phq9[2]])),
           phq.03.temp = as.integer(unlist(data[items.phq9[3]])),
           phq.04.temp = as.integer(unlist(data[items.phq9[4]])),
           phq.05.temp = as.integer(unlist(data[items.phq9[5]])),
           phq.06.temp = as.integer(unlist(data[items.phq9[6]])),
           phq.07.temp = as.integer(unlist(data[items.phq9[7]])),
           phq.08.temp = as.integer(unlist(data[items.phq9[8]])),
           phq.09.temp = as.integer(unlist(data[items.phq9[9]]))) %>%
    mutate_at(vars(phq.01.temp:phq.09.temp),
              funs(ifelse(is.na(.), round(mean.temp), .))) %>%
    mutate(score.temp = rowSums(select(., phq.01.temp:phq.09.temp), na.rm = TRUE),
           score.phq9 = ifelse(nvalid.phq9 >= 7, as.integer(round(score.temp)), NA),
           cutoff.phq9 = case_when(
             score.phq9 >= 20 ~ 'severe',
             score.phq9 >= 15 ~ 'moderately severe',
             score.phq9 >= 10 ~ 'moderate',
             score.phq9 >= 5 ~ 'mild',
             score.phq9 < 5 ~ 'minimal'),
             cutoff.phq9 = factor(cutoff.phq9, levels = c('minimal', 'mild',
                                                          'moderate', 'moderately severe',
                                                          'severe'))) %>%
    select(-ends_with("temp"))

}

Example

The function adds three variables to the original data frame:

  • nvalid.phq9: number of items with valid (non-missing) values,
  • score.phq9: PHQ-9 score (0–27),
  • cutoff.phq9: PHQ-9 severity level (minimal, mild, moderate, moderately severe, severe)
items.phq9 <- paste0('phq9_', seq(1, 9, 1))
df.phq9 <- df.phq9 %>%
  scoring_phq9(., items.phq9)
glimpse(df.phq9)
## Observations: 1,337
## Variables: 12
## $ phq9_1      <int> 1, 3, 2, 0, 0, 0, 1, 0, 0, 2, 1, 1, 0, 3, 0, 0, 0,...
## $ phq9_2      <int> 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0,...
## $ phq9_3      <int> 3, 2, 2, 2, 1, 0, 1, 3, 1, 0, 1, 1, 0, 3, 1, 0, 0,...
## $ phq9_4      <int> 1, 1, 1, 1, 1, 1, 1, 1, 0, 2, 1, 3, 0, 1, 0, 0, 0,...
## $ phq9_5      <int> 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 2, 0, 1, 0, 0, 0,...
## $ phq9_6      <int> 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,...
## $ phq9_7      <int> 0, 1, 1, 1, 0, 1, 0, 0, 0, 3, 1, 1, 0, 1, 0, 0, 0,...
## $ phq9_8      <int> 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0,...
## $ phq9_9      <int> 0, 0, 0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0,...
## $ nvalid.phq9 <int> 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,...
## $ score.phq9  <int> 7, 10, 7, 9, 3, 2, 3, 4, 5, 7, 7, 8, 0, 11, 1, 0, ...
## $ cutoff.phq9 <fct> mild, moderate, mild, mild, minimal, minimal, mini...
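
Before visualizing the results, a quick frequency table of the severity levels can be obtained with dplyr's count() function (output not shown):

df.phq9 %>%
  count(cutoff.phq9)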

Visualization

PHQ-9 Score

ggplot(df.phq9, aes(score.phq9)) +
  geom_density(fill = 'blue', alpha = 0.2) +
  scale_x_continuous(limits = c(0, 27), breaks = c(0,5,10,15,20,27)) +
  labs(x = 'PHQ-9 Score', y = 'Density') +
  theme_bw()

[Plot: Density of the PHQ-9 score]

PHQ-9 Severity Levels

ggplot(df.phq9, aes(x = cutoff.phq9, fill = cutoff.phq9)) +
  geom_bar(colour = 'black') +
  scale_fill_brewer(type = 'seq') +
  labs(x = NULL, y = NULL, fill = NULL) +
  theme_bw()

[Plot: Bar chart of the PHQ-9 severity levels]


Sample Size Calculation Using R

Preamble

“Sample Size Calculation Using R” – the title of this blog post – sounds very comprehensive. However, this post only deals with the sample size calculation for one particular test (the two-sample t-test). A collection of functions for basic sample size calculation can be found in the pwr package.
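
For comparison — assuming the pwr package is installed — the two-sample case discussed below (delta = 4, sigma = 8, i.e. an effect size of d = 4/8 = 0.5) could be handled with its pwr.t.test() function:

library(pwr)

# effect size d = delta / sigma = 4 / 8 = 0.5
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.9,
           type = "two.sample", alternative = "two.sided")
# n is the sample size per group (about 85, i.e. 86 patients after rounding up)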

Introduction

A couple of months ago, I attended a statistics course about sample size calculation and sample size adjustment. Among other things, we discussed a randomized placebo-controlled trial with patients suffering from depression. The trial's primary endpoint was the between-group difference (placebo vs. treatment) in the change of the HAM-D total score (a depression score) from baseline to the end of therapy. During the course, the professor asked us to write a “two-liner” returning the sample size per group for a two-sided, two-sample t-test with the following parameters:

  • alpha = 0.05,
  • sigma = 8,
  • delta = 4 and
  • power ≤ 0.90

Here is my solution:

As a first step, I wrote a loop that returns the power, rising from 0.1 to 0.9, and the corresponding sample size for each step. The loop may be written either as a repeat or as a while loop.

While Loop

A while loop runs and repeats while a specified condition returns TRUE. The loop terminates only when the condition evaluates to FALSE. In our case, the loop stops evaluating as soon as p > 0.9.

p <- 0.05
while(p <= 0.9){
  p <- p + 0.05
  t <- power.t.test(sd = 8, delta = 4, sig.level=0.05, power = p, type="two.sample", alternative = "two.sided")
  print(paste(t$power, ceiling(t$n), sep = ' - '))
}

Repeat Loop

In a repeat loop, we have to include a break statement to stop the repetition. Just like the while loop, our repeat loop stops evaluating as soon as p > 0.9.

Both the while loop and the repeat loop return the same result.

p <- 0.05
repeat{
  p <- p + 0.05
  t <- power.t.test(sd = 8, delta = 4, sig.level=0.05, power = p, type="two.sample", alternative = "two.sided")
  print(paste(t$power, ceiling(t$n), sep = ' - '))
  if(p > 0.90){
    break
  }
}
## [1] "0.1 - 5"
## [1] "0.15 - 8"
## [1] "0.2 - 12"
## [1] "0.25 - 15"
## [1] "0.3 - 18"
## [1] "0.35 - 21"
## [1] "0.4 - 25"
## [1] "0.45 - 28"
## [1] "0.5 - 32"
## [1] "0.55 - 36"
## [1] "0.6 - 41"
## [1] "0.65 - 45"
## [1] "0.7 - 51"
## [1] "0.75 - 57"
## [1] "0.8 - 64"
## [1] "0.85 - 73"
## [1] "0.9 - 86"

For a very good introduction to loops, see Davies (2016): The Book of R. No Starch Press: San Francisco.

Storing the Results

If we want to use the results later on, we need to store them. The following example heavily borrows from StackOverflow. The loop fills two vectors, pwr (power) and num (sample size), and combines them into a data frame df.

p <- 0.05
pwr <- c()
num <- c()
while(p <= 0.9){
  p <- p + 0.05
  t <- power.t.test(sd = 8, delta = 4, sig.level=0.05, power = p, type="two.sample", alternative = "two.sided")
  pwr <- c(pwr, t$power)
  num <- c(num, ceiling(t$n))
  df <- data.frame(pwr = pwr, n = num)
}
rm(list = c(setdiff(ls(), c("df"))))
dplyr::glimpse(df)
## Observations: 17
## Variables: 2
## $ pwr <dbl> 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55...
## $ n   <dbl> 5, 8, 12, 15, 18, 21, 25, 28, 32, 36, 41, 45, 51, 57, 64, ...
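
As an aside, the same data frame can be built without growing vectors inside a loop — a minimal sketch using sapply() over a predefined power sequence:

powers <- seq(0.1, 0.9, by = 0.05)
n.per.group <- sapply(powers, function(p) {
  ceiling(power.t.test(sd = 8, delta = 4, sig.level = 0.05, power = p,
                       type = "two.sample", alternative = "two.sided")$n)
})
df <- data.frame(pwr = powers, n = n.per.group)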

Plotting Power and Sample Size

Finally, we plot the relation between power and sample size using the ggplot2 package. The theme I use stems from the ggpubr package.

library(ggplot2)

ggplot(df, aes(pwr, n)) +
  geom_point() +
  scale_x_continuous(limits = c(0, 1), breaks = seq(0, 1, 0.1)) +
  scale_y_continuous(breaks = seq(0, max(df$n)+10, 10)) +
  geom_text(aes(label = n), nudge_y = 4) +
  ggpubr::theme_pubr() +
  labs(x = 'Power', y = 'Sample Size', 
       title = 'Power and Sample Size', 
        subtitle = expression('Two-sided, two sample t-test with' ~ sigma ~ '= 8,' ~ delta ~ '= 4,' ~ alpha ~ '= 0.05'))

[Plot: Power and sample size]

As we see, a power of 0.90 requires a sample size of 86 patients per treatment group for a two-sided, two sample t-test with sd = 8, delta = 4, and alpha = 0.05.
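
The same number can also be obtained directly from a single call to power.t.test():

power.t.test(delta = 4, sd = 8, sig.level = 0.05, power = 0.9,
             type = "two.sample", alternative = "two.sided")
# n comes out at roughly 85 per group, i.e. 86 patients after rounding up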


How to Plot Venn Diagrams Using R, ggplot2 and ggforce

Intro

Venn diagrams – named after the English logician and philosopher John Venn – “illustrate the logical relationships between two or more sets of items” with overlapping circles.

In this tutorial, I'll show how to plot a three-set Venn diagram using R and the ggplot2 package.

Packages and Data

For the R code to run, we need to install and load three R packages. Unlike tidyverse and ggforce, the limma package must be installed from Bioconductor rather than from CRAN.

Moreover, we create a random data frame using the rbinom() function.

source("http://www.bioconductor.org/biocLite.R")
biocLite("limma")
library(limma)
library(tidyverse)
library(ggforce)
set.seed(123)
mydata <- data.frame(A = rbinom(100, 1, 0.8),
                     B = rbinom(100, 1, 0.7),
                     C = rbinom(100, 1, 0.6)) %>%
                       mutate_all(., as.logical)

Drawing the Circles

Next, we create a data frame defining the x and y coordinates of the three circles we want to draw and a variable defining the labels. To actually draw the circles – the basic structure of our Venn diagram – we use the geom_circle() function of the ggforce package. With the parameter r (= 1.5), we define the radius of the circles.
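
The centre coordinates used below are not arbitrary: they are points on a circle of radius 1 at angles of 90°, 330° and 210°, so the three circles sit at the corners of an equilateral triangle. A quick sanity check (not needed for the plot itself):

angles <- c(90, 330, 210) * pi / 180   # positions of circles A, B and C
round(cos(angles), 3)                  # x coordinates:  0.000  0.866 -0.866
round(sin(angles), 3)                  # y coordinates:  1.000 -0.500 -0.500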

df.venn <- data.frame(x = c(0, 0.866, -0.866),
                      y = c(1, -0.5, -0.5),
                      labels = c('A', 'B', 'C'))
ggplot(df.venn, aes(x0 = x, y0 = y, r = 1.5, fill = labels)) +
    geom_circle(alpha = .3, size = 1, colour = 'grey') +
      coord_fixed() +
        theme_void()

[Plot: The three overlapping circles]

Furthermore, we need a data frame with the values we want to plot and the coordinates for plotting these values. The values can be obtained using the vennCounts() function of the limma package. Since ggplot2 requires data frames, we first transform the vdc object (class VennCounts) into a matrix and then into a data frame. In addition, we add the x and y coordinates for plotting the values.

vdc <- vennCounts(mydata)
class(vdc) <- 'matrix'
df.vdc <- as.data.frame(vdc)[-1,] %>%
  mutate(x = c(0, 1.2, 0.8, -1.2, -0.8, 0, 0),
         y = c(1.2, -0.6, 0.5, -0.6, 0.5, -1, 0))

The Final Plot

Finally, we'll customize the look of our Venn diagram and plot the values.

ggplot(df.venn) +
  geom_circle(aes(x0 = x, y0 = y, r = 1.5, fill = labels), alpha = .3, size = 1, colour = 'grey') +
  coord_fixed() +
  theme_void() +
  theme(legend.position = 'bottom') +
  scale_fill_manual(values = c('cornflowerblue', 'firebrick',  'gold')) +
  scale_colour_manual(values = c('cornflowerblue', 'firebrick', 'gold'), guide = FALSE) +
  labs(fill = NULL) +
  annotate("text", x = df.vdc$x, y = df.vdc$y, label = df.vdc$Counts, size = 5)

[Plot: The final Venn diagram with counts]


How to plot GPS data using R, ggplot2 and ggmap

Intro

A couple of months ago, I bought a mobile GPS navigation device. The device makes it possible to record the routes we travel with our car. Since the routes can be exported as .gpx files, it is rather easy to plot them using R.

Packages

To replicate this blog post, three R packages are required: plotKML to import the data into R and ggplot2 and ggmap to plot the data.

library(plotKML)
library(ggplot2)
library(ggmap)

As a prerequisite for the installation of the plotKML package on my Linux Mint 18.3 OS (based on Ubuntu 16.04), I needed to add the ubuntugis PPA.

sudo add-apt-repository ppa:ubuntugis/ppa
sudo apt-get update
sudo apt-get dist-upgrade

Import

All that's required for importing the .gpx file into R is the readGPX() function of the plotKML package. After importing the file, we receive a list (lst.rd) containing some meta information and one data frame. To be able to plot the GPS data using ggplot2, we subset this list and save the data frame into a new object (df).

lst.rd <- readGPX('29 Dez. 1413.gpx')
df <- lst.rd$tracks[[1]][[1]] 

Using the head() function, we take a short look at the data:

head(df)
##        lon      lat                 time
## 1 12.98376 51.05915 2017-12-29T12:16:28Z
## 2 12.98377 51.05912 2017-12-29T12:16:29Z
## 3 12.98374 51.05912 2017-12-29T12:16:30Z
## 4 12.98372 51.05914 2017-12-29T12:16:31Z
## 5 12.98372 51.05914 2017-12-29T12:16:32Z
## 6 12.98372 51.05914 2017-12-29T12:16:33Z

As we can see, our data frame consists of three variables: longitude, latitude and time.
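
If the timestamps are needed later on (for example to compute the duration of the trip), the time column can be converted from character to POSIXct — a minimal sketch, assuming the ISO 8601 format shown above:

# parse timestamps such as "2017-12-29T12:16:28Z" as UTC date-times
df$time <- as.POSIXct(df$time, format = "%Y-%m-%dT%H:%M:%SZ", tz = "UTC")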

Plotting

Without much effort, we can plot our GPS data using ggplot2. Please note that the GPS coordinates are plotted as points.

ggplot(df, aes(x = lon, y = lat)) +
  coord_quickmap() +
  geom_point()

[Plot: GPS coordinates plotted as points]

In addition, the ggmap package offers some functionality to plot the data on maps.

The first example shows how to plot the data on a map provided by Google Maps.

mapImageData <- get_googlemap(center = c(lon = mean(df$lon), lat = mean(df$lat)),
                              zoom = 10,
                              color = 'bw',
                              scale = 1,
                              maptype = "terrain")
ggmap(mapImageData, extent = "device") + # removes axes, etc.
  geom_point(aes(x = lon,
                 y = lat),
             data = df,
             colour = "red3",
             alpha = .1,
             size = .1)

[Plot: GPS track on a Google map]

The second example shows how to plot the data on a Stamen map.

mapImage <- get_map(location = c(lon = mean(df$lon) - 0.05, lat = mean(df$lat)),
                    source = "stamen",
                    maptype = "toner",
                    zoom = 10)

ggmap(mapImage, extent = "device") + 
  geom_point(aes(x = lon,
                 y = lat),
             data = df,
             colour = "red3",
             size = .2) 

[Plot: GPS track on a Stamen map]

References

This blog post heavily borrows from the tutorial Mapping GPS Tracks in R. Thanks very much!


R Markdown: How to place two tables side by side using ‘knitr’ and ‘kableExtra’

Intro

When I was recently writing some report using R Markdown, I wanted to place two rather small tables side by side. Since I usually use the kable()-function of the knitr package and the kableExtra package to print tables, I tried to find a solution for my problem using both packages.

Since my Google search (“two tables side by side with kableExtra” or something similar) did not return a helpful result, I experimented with some of the table formatting options provided by the kableExtra package. Here is my solution.

Packages and data

For printing the tables, we need to install and load two packages: knitr and kableExtra. The dplyr package is required for some data manipulation. The data we want to put into the tables stem from the bundesligR package, which contains the final tables of Germany's highest football (soccer) league. We want to place the final tables of two seasons (1985/86 and 2015/16) side by side.

library(knitr)      # kable()
library(kableExtra) # kable_styling()
library(dplyr)      # data manipulation

df <- bundesligR::bundesligR
table.1985 <- df %>%
  filter(Season == 1985) %>%
    select(Position, Team, Points)
table.2015 <- df %>%
  filter(Season == 2015) %>%
    select(Position, Team, Points)

Now, we place both tables side by side using some functionality of the kableExtra package:

table.1985 %>%
  kable("html", align = 'clc', caption = 'Bundesliga, Season 1985/86') %>%
    kable_styling(full_width = F, position = "float_left")

table.2015 %>%
  kable("html", align = 'clc', caption = 'Bundesliga, Season 2015/16') %>%
    kable_styling(full_width = F, position = "right")

Bundesliga, Season 1985/86

Position  Team                       Points
       1  FC Bayern Muenchen             70
       2  Werder Bremen                  69
       3  FC Bayer 05 Uerdingen          64
       4  Borussia Moenchengladbach      57
       5  VfB Stuttgart                  58
       6  TSV Bayer 04 Leverkusen        55
       7  Hamburger SV                   56
       8  SV Waldhof Mannheim            44
       9  VfL Bochum                     46
      10  FC Schalke 04                  41
      11  1. FC Kaiserslautern           40
      12  1. FC Nuernberg                41
      13  1. FC Koeln                    38
      14  Fortuna Duesseldorf            40
      15  Eintracht Frankfurt            35
      16  Borussia Dortmund              38
      17  1. FC Saarbruecken             27
      18  Hannover 96                    23

Bundesliga, Season 2015/16

Position  Team                       Points
       1  FC Bayern Muenchen             88
       2  Borussia Dortmund              78
       3  Bayer 04 Leverkusen            60
       4  Borussia Moenchengladbach      55
       5  FC Schalke 04                  52
       6  1. FSV Mainz 05                50
       7  Hertha BSC                     50
       8  VfL Wolfsburg                  45
       9  1. FC Koeln                    43
      10  Hamburger SV                   41
      11  FC Ingolstadt 04               40
      12  FC Augsburg                    38
      13  Werder Bremen                  38
      14  SV Darmstadt 98                38
      15  TSG 1899 Hoffenheim            37
      16  Eintracht Frankfurt            36
      17  VfB Stuttgart                  33
      18  Hannover 96                    25

The trick is to set the position argument to float_left (left table) and right (right table). Furthermore, the argument full_width must be set to FALSE in both tables.

To Do

Unfortunately, the given example only works for rendering HTML documents. Does anyone know how to place two tables side by side when the output format is PDF/LaTeX?
