AE 02: Wrangling college education metrics

Application exercise
Modified: September 13, 2024

Important

Go to the course GitHub organization and locate the repo titled ae-02-YOUR_GITHUB_USERNAME to get started.

This AE is due September 10 at 11:59pm.

To demonstrate data wrangling, we will use data from College Scorecard.1 The subset we will analyze contains a small number of metrics for all four-year colleges and universities in the United States for the 2022-23 academic year.

1 College Scorecard is a product of the U.S. Department of Education and compiles detailed information about student completion, debt and repayment, earnings, and more for all degree-granting institutions across the country.

The data is stored in scorecard.csv. We read it in below; the variables are listed in the glimpse() output that follows.

scorecard <- read_csv("data/scorecard.csv")

The data frame has 1,721 observations (rows), so we will not view it in its entirety. Instead we’ll use the commands below to help us explore the data.

glimpse(scorecard)
Rows: 1,721
Columns: 14
$ unit_id     <dbl> 100654, 100663, 100706, 100724, 100751, 100830, 100858, 10…
$ name        <chr> "Alabama A & M University", "University of Alabama at Birm…
$ state       <chr> "AL", "AL", "AL", "AL", "AL", "AL", "AL", "AL", "AL", "AL"…
$ type        <chr> "Public", "Public", "Public", "Public", "Public", "Public"…
$ adm_rate    <dbl> 0.6840, 0.8668, 0.7810, 0.9660, 0.8006, 0.9223, 0.4374, 0.…
$ sat_avg     <dbl> 920, 1291, 1259, 963, 1304, 1051, 1292, 1218, 1021, NA, 10…
$ cost        <dbl> 23167, 26257, 25777, 21900, 31024, 19771, 33650, 35495, 36…
$ net_cost    <dbl> 14982, 16755, 18240, 13527, 20888, 12630, 24297, 19723, 19…
$ avg_fac_sal <dbl> 77859, 106533, 92403, 72639, 96993, 75294, 104472, 63261, …
$ pct_pell    <dbl> 0.6536, 0.3308, 0.2173, 0.6976, 0.1788, 0.4589, 0.1254, 0.…
$ comp_rate   <dbl> 0.2678, 0.6442, 0.6295, 0.2773, 0.7276, 0.3584, 0.8075, 0.…
$ first_gen   <dbl> 0.3658281, 0.3412237, 0.3101322, 0.3434343, 0.2257127, 0.3…
$ debt        <dbl> 16600, 15832, 13905, 17500, 17986, 13119, 17750, 16000, 15…
$ locale      <chr> "City", "City", "City", "City", "City", "City", "City", "C…
names(scorecard)
 [1] "unit_id"     "name"        "state"       "type"        "adm_rate"   
 [6] "sat_avg"     "cost"        "net_cost"    "avg_fac_sal" "pct_pell"   
[11] "comp_rate"   "first_gen"   "debt"        "locale"     
head(scorecard)
# A tibble: 6 × 14
  unit_id name  state type  adm_rate sat_avg  cost net_cost avg_fac_sal pct_pell
    <dbl> <chr> <chr> <chr>    <dbl>   <dbl> <dbl>    <dbl>       <dbl>    <dbl>
1  100654 Alab… AL    Publ…    0.684     920 23167    14982       77859    0.654
2  100663 Univ… AL    Publ…    0.867    1291 26257    16755      106533    0.331
3  100706 Univ… AL    Publ…    0.781    1259 25777    18240       92403    0.217
4  100724 Alab… AL    Publ…    0.966     963 21900    13527       72639    0.698
5  100751 The … AL    Publ…    0.801    1304 31024    20888       96993    0.179
6  100830 Aubu… AL    Publ…    0.922    1051 19771    12630       75294    0.459
# ℹ 4 more variables: comp_rate <dbl>, first_gen <dbl>, debt <dbl>,
#   locale <chr>

The head() function returns the first six rows of the scorecard data; the header of the printed output, “A tibble: 6 × 14”, reports the dimensions of that preview.

Tibble vs. data frame

A tibble is an opinionated version of the R data frame. In other words, all tibbles are data frames, but not all data frames are tibbles!

There are two main differences between a tibble and a data frame:

  1. When you print a tibble, the first ten rows and all of the columns that fit on the screen will display, along with the type of each column.

    Let’s look at the differences in the output when we type scorecard (a tibble) versus cars (a data frame) in the console; a comparison is sketched after the subsetting example below.

  2. Tibbles are more strict than data frames when it comes to subsetting data. If you try to access a variable that doesn’t exist, a tibble returns NULL with a warning message, while a data frame returns NULL silently.

scorecard$apple
Warning: Unknown or uninitialised column: `apple`.
NULL
cars$apple
NULL
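
To see the first difference for yourself, print both objects and compare the output: the tibble shows only the first ten rows and the column types, while the data frame prints every row with no type information.

scorecard  # a tibble: prints 10 rows, the columns that fit on screen, and column types
cars       # a data frame: prints every row, no column types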

Data wrangling with dplyr

dplyr is the primary package in the tidyverse for data wrangling.

Helpful data wrangling resources

Quick summary of key dplyr functions

Rows:

  • filter(): chooses rows based on column values.
  • slice(): chooses rows based on location.
  • arrange(): changes the order of the rows.
  • sample_n(): takes a random subset of the rows.

Columns:

  • select(): changes whether or not a column is included.
  • rename(): changes the name of columns.
  • mutate(): changes the values of columns and creates new columns.

Groups of rows:

  • summarize(): collapses a group into a single row.
  • count(): counts unique values of one or more variables.
  • group_by(): performs subsequent calculations separately for each value of a variable.
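
As a quick illustration (a sketch; each function takes a data frame as its first argument and returns a new data frame, and the column names come from the glimpse output above):

filter(scorecard, state == "NY")   # rows: keep only schools in New York
select(scorecard, name, adm_rate)  # columns: keep only the name and admission rate
arrange(scorecard, desc(cost))     # rows: sort from most to least expensive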

Operators

In order to make comparisons, we will use logical operators. These should be familiar from other programming languages. See below for a reference table for how to use these operators in R.

operator        definition
<               is less than?
<=              is less than or equal to?
>               is greater than?
>=              is greater than or equal to?
==              is exactly equal to?
!=              is not equal to?
x & y           is x AND y?
x | y           is x OR y?
is.na(x)        is x NA?
!is.na(x)       is x not NA?
x %in% y        is x in y?
!(x %in% y)     is x not in y?
!x              is not x?

The final operator only makes sense if x is logical (TRUE / FALSE).
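
For example, each comparison evaluates to a single logical value when run in the console:

3 < 5                    # TRUE
"AL" == "NY"             # FALSE
is.na(NA)                # TRUE
"AL" %in% c("AL", "NY")  # TRUE
!TRUE                    # FALSE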

The pipe

Before working with data wrangling functions, let’s formally introduce the pipe. The pipe, |>, is an operator (a tool) for passing information from one process to another. We will use |> mainly in data pipelines, where it passes the output of the previous line of code as the first argument of the function on the next line.

When reading code “in English”, say “and then” whenever you see a pipe.

  • Your turn (3 minutes): Run the following chunk and observe its output. Then, come up with a different way of obtaining the same output.
scorecard |>
  select(name, type) |>
  head()
# A tibble: 6 × 2
  name                                type  
  <chr>                               <chr> 
1 Alabama A & M University            Public
2 University of Alabama at Birmingham Public
3 University of Alabama in Huntsville Public
4 Alabama State University            Public
5 The University of Alabama           Public
6 Auburn University at Montgomery     Public
# add code here
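
One alternative that produces the same output is to nest the function calls instead of piping them (a sketch):

head(select(scorecard, name, type))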

Exercises

Single function transformations

Demo: Select the name column.

# add code here
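
One way to do this (a sketch):

scorecard |>
  select(name)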

Demo: Select all columns except unit_id.

# add code here
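
A possible approach, using - to drop the column (a sketch):

scorecard |>
  select(-unit_id)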

Demo: Filter the data frame to keep only schools with a greater than 40% share of first-generation students.

# add code here
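
A sketch, assuming first_gen is stored as a proportion (as the glimpse output suggests):

scorecard |>
  filter(first_gen > 0.4)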

Your turn: Filter the data frame to keep only public schools with a net cost of attendance below $12,000.

# add code here
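
One possible answer (a sketch; the label "Public" comes from the glimpse output above):

scorecard |>
  filter(type == "Public", net_cost < 12000)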

Multiple function transformations

Your turn: How many public colleges and universities in each state have a net cost of attendance below $12,000?

# add code here
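
A sketch that builds on the previous filter and then counts by state:

scorecard |>
  filter(type == "Public", net_cost < 12000) |>
  count(state)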

Your turn: Generate a data frame with the 10 most expensive colleges in 2022-23 based on net cost of attendance.

# add code here
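
One approach (a sketch), sorting by net cost and keeping the first ten rows:

scorecard |>
  arrange(desc(net_cost)) |>
  slice(1:10)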

Your turn: Generate a data frame with the average SAT score for each type of college.

Note that since the sat_avg column contains NAs (missing values), we need to explicitly exclude them from our mean calculation. Otherwise the resulting data frame contains NAs.

# add code here
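
A sketch using group_by() and summarize(), with na.rm = TRUE to drop the missing SAT scores:

scorecard |>
  group_by(type) |>
  summarize(mean_sat = mean(sat_avg, na.rm = TRUE))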

Your turn: Calculate for each school how many students it takes to pay the average faculty member’s salary and generate a data frame with the school’s name, net cost of attendance, average faculty salary, and the calculated value. How many Cornell and Ithaca College students does it take to pay their average faculty member’s salary?

Note

You should use the net cost of attendance measure, not the sticker price.

# add code here
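
One possible sketch; note that the exact strings "Cornell University" and "Ithaca College" are assumptions about how these schools appear in the name column:

scorecard |>
  mutate(n_students = avg_fac_sal / net_cost) |>   # students needed to cover one average salary
  select(name, net_cost, avg_fac_sal, n_students) |>
  filter(name %in% c("Cornell University", "Ithaca College"))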

Your turn: Calculate how many private, nonprofit schools have a smaller net cost than Cornell University.

You will need to create a new column that ranks the schools by net cost of attendance. Look at the back of the dplyr cheatsheet for functions that can be used to calculate rankings.

# add code here
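
A possible sketch using min_rank() from dplyr; the type label "Private, nonprofit" and the name "Cornell University" are assumptions about how these values appear in the data:

scorecard |>
  filter(type == "Private, nonprofit") |>
  mutate(net_cost_rank = min_rank(net_cost)) |>   # rank 1 = smallest net cost
  filter(name == "Cornell University")
# net_cost_rank - 1 gives the number of private, nonprofit schools with a strictly smaller net cost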