This Coding Notebook is the first in a series.
An interactive version can be found here.
This colab and more can be found on our webpage.
Content covered in previous tutorials will be used in later tutorials.
New code and information will have explanations or descriptions attached.
Concepts or code covered in previous tutorials will be used without being explained in their entirety.
The Dataplay Handbook is developed using the techniques covered in the Datalabs Guidebook.
If content cannot be found in the current tutorial and is not covered in a previous tutorial, please let me know.
This notebook has been optimized for Google Colab run in a Chrome browser.
Statements found on the index page regarding views expressed, responsibility, errors and omissions, use at your own risk, and licensing extend throughout this tutorial.
In this notebook, the basics of Colab are introduced.
By the end of this tutorial, users should have an understanding of how to execute code in Colab, what Geographic Reference Codes are, and how to search for, download, and clean ACS data.
Instructions: Read all text and execute all code in order.
How to execute code:
If you would like to see the code you are executing, double-click the label 'Run:'. Code is accompanied by brief inline descriptions.
Try it! Go ahead and run the cell below. The result is a flow chart of how this tutorial may be used.
Census data comes in two flavors: Detail tables and Subject tables (described under 'Tutorial Notes' below).
Census data can also come at a variety of levels.
These levels define the specificity of the data.
I.e., whether a dataset reports on individual communities or entire cities is contingent on its granularity.
The data we will be downloading in this tutorial, ACS Data, can be found at the Tract level and no closer.
Aggregating Tracts is the way BNIA calculates some of their yearly community indicators!
Each of the bolded words in the list below is a level identifiable through a 'Geographic Reference Code'.
For more information on Geographic Reference Codes, refer to the section listed in the table of contents.
Run the following code to see how these different levels nest into each other!
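Here is a minimal sketch of that idea ('24' and '510' are Maryland's and Baltimore City's actual codes; the six tract digits are hypothetical):
# A full tract GEOID concatenates its nested reference codes:
# 2-digit State + 3-digit County + 6-digit Tract.
geoid = '24510280500'   # '24' = Maryland, '510' = Baltimore City, '280500' = a hypothetical tract
state = geoid[0:2]      # State code
county = geoid[2:5]     # County code
tract = geoid[5:11]     # Tract code
print(state, county, tract)  # -> 24 510 280500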
State, County, and Tract IDs are called Geographic Reference Codes.
This information is crucial to know when accessing data.
In order to successfully pull data, Census State and County Codes must be provided.
The code herein is configured by default to pull data on Baltimore City, MD and its constituent Tracts.
In order to find your State and County code:
Either
A) Click here: upon entering a unique address, you can locate state and county codes under the associated values 'Counties' and 'State';
OR
B) Alternatively, click here.
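Once found, these codes slot directly into Census API queries. Below is an illustrative request (the geography syntax is the Census API's own; the codes shown are Baltimore City, MD's):
# State and county codes plug into the Census API's geography clause:
# 'for' names the level you want returned; 'in' scopes it to a state and county.
url = ('https://api.census.gov/data/2017/acs/acs5'
       '?get=NAME&for=tract:*&in=state:24%20county:510')
print(url)  # every tract in Baltimore City, MD (state 24, county 510)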
Searching for a dataset is the first step in the data processing pipeline.
In this tutorial we plan on processing ACS data in a programmatic fashion.
This tutorial will not just allow you to search and explore ACS tables and inspect their contents (attributes), but also to download, format, and clean them!
Although a table-explorer section is provided below, it is suggested that you instead explore available data tables and retrieve their IDs using the dedicated websites linked below:
American FactFinder may assist you in locating and downloading data:
FactFinder provides a nice interface to explore available datasets. From FactFinder you can grab a Table's ID and continue the tutorial. Alternatively, from FactFinder you can download the data for your community directly via an interface. From there, you may continue the tutorial by loading the downloaded dataset as an external resource; instructions on how to do this are provided further below in this tutorial.
Update: 12/18/2019 "American FactFinder (AFF) will remain as an 'archive' system for accessing historical data until spring 2020." - American FactFinder website
This new website is provided by the Census Bureau. Its 'Advanced Search' feature has all the filtering abilities of the older, deprecated (soon to be discontinued) American FactFinder website. It is still a bit buggy to date and may not apply all filters. Filters include years (you can only pick one year at a time), geography (state, county, tract), topic, surveys, and Table ID. The filters you apply are shown at the bottom of the query, and submitting the search will yield data tables ready for download as well as table IDs that you may grab for use in this tutorial.
Tutorial Notes:
Detail and Subject tables are derived from the five-year ACS data.
These tables are created by the Census and are pre-compiled views of the data.
The Detail Tables contain all possible ACS data.
The Subject Tables contain ACS data in convenient groups.
BNIA creates its data mostly using Detail Tables, but sometimes pulling the data from a Subject Table is more convenient (the data would otherwise be spread across multiple Detail Tables).
ACS Website Notes:
Detailed Tables contain the most detailed cross-tabulations, many of which are published down to block groups. The data are population counts. There are over 20,000 variables in this dataset.
Subject Tables provide an overview of the estimates available in a particular topic. The data are presented as population counts and percentages. There are over 18,000 variables in this dataset.
For more information (via the API), please visit this link.
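To make the Detail/Subject split concrete, here is a minimal sketch (the helper name acs_endpoint is ours; the endpoint paths are the Census API's) of how a table's prefix letter determines which endpoint serves it:
# Detail tables ('B'/'C' prefixes) live at the base ACS 5-year endpoint;
# Subject tables ('S' prefix) live under its 'subject' sub-path.
def acs_endpoint(tableId, year='2017'):
    base = f'https://api.census.gov/data/{year}/acs/acs5'
    return base + '/subject' if tableId.startswith('S') else base

print(acs_endpoint('B19049'))  # -> https://api.census.gov/data/2017/acs/acs5
print(acs_endpoint('S1701'))   # -> https://api.census.gov/data/2017/acs/acs5/subject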
Install these libraries onto the virtual environment.
! pip install -U -q ipywidgets
! pip install geopandas
# hide
# @title Run: Install Modules
# Install the Widgets Module.
# Colabs does not locally provide this Python Library
# The '!' is a special prefix used in colabs when talking to the terminal
! pip install -U -q ipywidgets
! pip install geopandas
# Show entire column widths
pd.set_option('display.max_colwidth', None)
pd.set_option('max_colwidth', 20)
pd.set_option('display.expand_frame_repr', False)
pd.set_option('display.precision', 2)
# export
# @title Run: Import Modules
# Once installed we need to..
# import and configure the Widgets
import ipywidgets as widgets
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
# About importing data
import urllib.request as urllib
from urllib.parse import urlencode
# This Prevents Timeouts when Importing
import socket
socket.setdefaulttimeout(10.0)
# Pandas Data Manipulation Libraries
import pandas as pd
# Working with Json Data
import json
# Data Processing
import numpy as np
# Reading Json Data into Pandas
from pandas.io.json import json_normalize
# Export data as CSV
import csv
# Geo-Formatting
# Postgres-Conversion
import geopandas as gpd
from geopandas import GeoDataFrame
import psycopg2, pandas, numpy
from shapely import wkb
from shapely.wkt import loads
import os
import sys
# In case file is KML
# enable KML support; disabled by default
import fiona
fiona.drvsupport.supported_drivers['kml'] = 'rw'
fiona.drvsupport.supported_drivers['KML'] = 'rw'
# load libraries
# from shapely.wkt import loads
# from pandas import ExcelWriter
# from pandas import ExcelFile
import matplotlib.pyplot as plt
import glob
import imageio
# hide
%matplotlib inline
!jupyter nbextension enable --py widgetsnbextension
Please note: The following section details a programmatic way to access and explore the census data catalogs. Rather than use this portion of the tutorial, it is advised that you read the section 'Searching For Data' --> 'Search Advice' above, which provides links to dedicated websites hosted by the Census Bureau explicitly for your data-exploration needs!
Retrieve and search available ACS datasets through the ACS's table directory.
The table directory contains TableIds and descriptions for each data table the ACS provides.
Running the next cell produces an interactive search box that filters the directory for keywords within the description.
Be sure to grab the TableId once you find a table with a description of interest.
response = urllib.urlopen('https://api.census.gov/data/2017/acs/acs5/groups/')
metaDataTable = json_normalize( json.loads(response.read())['groups'] )
metaDataTable.set_index('name', drop=True, inplace=True)
description = input("Search ACS Table Directory by Keyword: ")
metaDataTable[ metaDataTable['description'].str.contains(description.upper()) ]
#hide
#@title Run: Import Dataset Directory
pd.set_option('display.max_columns', None)
url = 'https://api.census.gov/data/2017/acs/acs5/groups/'
response = urllib.urlopen(url)
data = json.loads(response.read())
data = data['groups']
metaDataTable = json_normalize(data)
metaDataTable.set_index('name', drop=True, inplace=True)
#--------------------
# SEARCH BOX 1: This reliably produces a search box.
# The cell must be rerun for every query.
#--------------------
description = input("Search ACS Table Directory by Keyword: ")
metaDataTable[ metaDataTable['description'].str.contains(description.upper()) ]
#--------------------
# SEARCH BOX 2: FOR CHROME USERS:
# Commenting out the code above and running the code
# below will update the searchbox in real time.
#--------------------
# @interact
# def tableExplorer(description='family'):
#   return metaDataTable[ metaDataTable['description'].str.contains(description.upper()) ]
Once a table has been picked from the explorer, you can inspect its column names in the next part.
This will help ensure it has the data you need!
tableId = input("Please enter a Table ID to inspect: ")
url = f'https://api.census.gov/data/2017/acs/acs5/groups/{tableId}.json'
metaDataTable = pd.read_json(url)
metaDataTable.reset_index(inplace=True, drop=False)
metaDataTable = pd.merge(
    json_normalize(data=metaDataTable['variables']),
    metaDataTable['index'], left_index=True, right_index=True )
metaDataTable = metaDataTable[['index', 'concept']].dropna(subset=['concept'])
#hide
#@title Run: Interactive Table Lookup
import json
import pandas as pd
from pandas.io.json import json_normalize
pd.set_option('display.max_columns', None)
#--------------------
# SEARCH BOX 1: This reliably produces a searchbox.
# The cell must be rerun for every query.
#--------------------
tableId = input("Please enter a Table ID to inspect: ")
url = f'https://api.census.gov/data/2017/acs/acs5/groups/{tableId}.json'
metaDataTable = pd.read_json(url)
metaDataTable.reset_index(inplace = True, drop=False)
metaDataTable = pd.merge(json_normalize(data=metaDataTable['variables']), metaDataTable['index'] , left_index=True, right_index=True)
metaDataTable = metaDataTable[['index', 'concept']]
metaDataTable = metaDataTable.dropna(subset=['concept'])
metaDataTable.head()
The data structure we receive is different than the prior table's.
Intake and processing differ as a result.
Now let's explore what we got, just like before.
The only difference is that the column names are automatically included in this query.
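For reference, here is a sketch of the two response shapes (abridged; the keys shown are the ones this tutorial actually uses, and the values are illustrative):
# groups/ endpoint: a LIST of table summaries under the key 'groups'
#   { "groups": [ { "name": "B19049", "description": "...", ... }, ... ] }
# variables.json endpoint: a DICT keyed by variable name under 'variables'
#   { "variables": { "S1701_C01_001E": { "label": "...", "concept": "...", ... }, ... } }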
url = 'https://api.census.gov/data/2017/acs/acs5/subject/variables.json'
data = json.loads(urllib.urlopen(url).read())['variables']
objArr = []
for key, value in data.items():
value['name'] = key
objArr.append(value)
metaDataTable = json_normalize(objArr)
metaDataTable.set_index('name', drop=True, inplace=True)
metaDataTable = metaDataTable[ ['attributes', 'concept', 'group', 'label', 'limit', 'predicateType' ] ]
concept = input("Search ACS Subject Table Directory by Keyword")
metaDataTable[ metaDataTable['concept'].str.contains(concept.upper(), na=False) ]
#hide
#@title Run: Interactive Dataset Directory
# Note the json representation
url = 'https://api.census.gov/data/2017/acs/acs5/subject/variables.json'
response = urllib.urlopen(url)
# Decode the url response as json
# https://docs.python.org/3/library/json.html
data = json.loads(response.read())
# the json object contains all its information within attribute 'variables'
data = data['variables']
# Process by flattening the raw json data
objArr = []
for key, value in data.items():
value['name'] = key
objArr.append(value)
# Normalize semi-structured JSON data into a flat table.
metaDataTable = json_normalize(objArr)
# Set the column 'name' as an index.
metaDataTable.set_index('name', drop=True, inplace=True)
# Reduce the directory to only contain these attributes
metaDataTable = metaDataTable[ ['attributes', 'concept', 'group', 'label', 'limit', 'predicateType' ] ]
#--------------------
# SEARCH BOX 1: This reliably produces a search box.
# The cell must be rerun for every query.
#--------------------
concept = input("Search ACS Subject Table Directory by Keyword")
metaDataTable[ metaDataTable['concept'].str.contains(concept.upper(), na=False) ]
#--------------------
# SEARCH BOX 2: FOR CHROME USERS:
# Commenting out the code above and running the code
# below will update the searchbox in real time.
#--------------------
#@interact
#def subjectExplorer(concept='transport'):
# return metaDataTable[ metaDataTable['concept'].str.contains(concept.upper(), na=False) ]Intro
Hopefully, by now you know which datatable you would like to download!
The following Python function will do that for you.
Description: This function returns ACS data given the appropriate parameters.
Purpose: Retrieves ACS data from the web.
Input: state, county, tract, tableId, year, and a save flag (see the cell below).
Output: a Pandas DataFrame of the requested ACS data.
How it works
Before our program retrieves the actual data, it first fetches the table's metadata.
The function changes the URL it requests data from depending on whether the user has requested an S- or B-type table.
Multiple calls for data must be made, as a single table may have several hundred columns.
Our program pulls not just tract-level data but also the aggregate for the county.
Finally, we can download the data in two different formats if desired.
If we choose to save the data, we save it once with the Table IDs as column names and once without them; a sketch of this logic follows below.
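The real retrieve_acs_data is defined in a hidden cell; what follows is only a minimal sketch of the logic just described, under stated assumptions (the 50-variable cap is the Census API's documented per-request limit; the county-aggregate pull and error handling are elided, and the output file names are illustrative):
# Sketch of retrieve_acs_data: fetch metadata, pull columns in chunks, merge, save.
def retrieve_acs_data(state, county, tract, tableId, year, save):
    # Subject ('S') tables live under a different endpoint than Detail ('B') tables.
    base = f'https://api.census.gov/data/20{year}/acs/acs5'
    if tableId.startswith('S'): base += '/subject'
    # Fetch the table's metadata to learn which variables (columns) it contains.
    meta = json.loads(urllib.urlopen(f'{base}/groups/{tableId}.json').read())
    variables = sorted(meta['variables'].keys())
    # The API caps each request at 50 variables ('NAME' + 49 table columns here),
    # so wide tables need multiple calls, merged on their shared geography keys.
    df = None
    for i in range(0, len(variables), 49):
        chunk = ','.join(['NAME'] + variables[i:i+49])
        url = (f'{base}?get={chunk}'
               f'&for=tract:{tract}&in=state:{state}%20county:{county}')
        table = json.loads(urllib.urlopen(url).read())
        frame = pd.DataFrame(table[1:], columns=table[0])
        df = frame if df is None else df.merge(frame, on=['NAME', 'state', 'county', 'tract'])
    # Save twice if requested: once with raw variable IDs, once with readable labels.
    if save:
        df.to_csv(f'{tableId}_5y{year}_ids.csv', index=False)
        labels = {v: meta['variables'][v]['label'] for v in variables}
        df.rename(columns=labels).to_csv(f'{tableId}_5y{year}_labeled.csv', index=False)
    return df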
Now use this function to Download the Data!
# Our download function will use Baltimore City's tract, county and state as internal parameters
# Changing these values using different geographic reference codes will change those parameters
tract = '*'
county = '510'  # Baltimore City; alternates: '153', '059'
state = '24'    # Maryland; alternate: '51'
# Specify the download parameters the function will receive here
tableId = 'B19049'  # 'B19001'
year = '17'
saveAcs = True
# retrieve_acs_data(state, county, tract, tableId, year, save)
df = retrieve_acs_data(state, county, tract, tableId, year, saveAcs)
df.head()