
Traffic Count Data Publishing

John Clary edited this page Jun 30, 2017 · 13 revisions

The traffic count data publishing tools load traffic count data collected from road tubes into a series of Esri ArcSDE feature classes as well as the City of Austin's Open Data Portal. The core components of the tool are described below.


Workflow

The general workflow of the processing is as follows:

  1. Traffic engineer technicians export traffic count reports to a network fileshare.
  2. A series of Python scripts merges the traffic count reports into a database-friendly schema and publishes the data to the Open Data Portal.
  3. An FME Server workspace loads the processed count reports into an Esri ArcSDE geodatabase.
  4. The CTM GEODATAPUSHER application synchronizes the file geodatabase with an enterprise ArcSDE feature class.

Source Data

Traffic engineer technicians process raw traffic count data collected from road tubes using the TimeMark VIAS software. Using the VIAS software, the engineers export three traffic count reports in CSV format, each of which corresponds to a dataset available on Austin's open data portal and enterprise GIS servers. The three traffic count report types are:

  • Classification
  • Speed
  • Volume

In addition to the three report types, a fourth "locations" dataset is maintained by traffic engineer technicians on ArcGIS Online. Each traffic count report shares common 'Data File' and 'Site Code' identifiers, which uniquely identify the traffic study. Engineer technicians export each of the three traffic count reports and store them in a network fileshare at [path to unprocessed source files]\[study_year]

The filename of each CSV report follows the pattern "data file name" + "report type". For example, a traffic count with data file identifier "RambleLn802BlkBD" has three corresponding reports:

  • RambleLn802BlkBDCls.csv <= Traffic Classification Report
  • RambleLn802BlkBDSpd.csv <= Traffic Speed Report
  • RambleLn802BlkBDVol.csv <= Traffic Volume Report
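The naming convention above can be sketched in Python. The helper below is illustrative only (it is not part of the actual tooling) and uses the suffix-to-report mapping from the examples:

```python
# Suffix-to-report-type mapping, as described in the examples above.
REPORT_SUFFIXES = {
    "Cls": "Classification",
    "Spd": "Speed",
    "Vol": "Volume",
}

def report_filenames(data_file):
    """Return a dict mapping report type to the expected CSV filename
    for a given 'Data File' identifier."""
    return {
        report_type: f"{data_file}{suffix}.csv"
        for suffix, report_type in REPORT_SUFFIXES.items()
    }

# For the study "RambleLn802BlkBD", this yields the three filenames
# listed above, e.g. RambleLn802BlkBDSpd.csv for the speed report.
print(report_filenames("RambleLn802BlkBD"))
```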

Python Data Translation

A series of Python scripts translates the TimeMark traffic count reports into a database-ready schema. These scripts are:

  1. traffic_count_cls.py <= translates classification reports
  2. traffic_count_spd.py <= translates speed reports
  3. traffic_count_vol.py <= translates volume reports
  4. traffic_count_loader.py <= merges processed reports of any type into a master table
  5. traffic_count_pub.py <= publishes master data to data.austintexas.gov (in-progress)

The source code for the translation scripts is available here: https://github.com/cityofaustin/transportation-data-publishing/

The translation scripts are run on a nightly basis from the Austin Transportation Arterial Management Division scripting VM (ATDATMSSCRIPT).

Each traffic count report processed by the translation scripts is moved to: [path to unprocessed source files]\[study_year]\processed\

The translated traffic count reports are stored at: [path to data-friendly traffic count root directory]\SOURCE_FILES\[report_type]

Where report_type is one of CLASSIFICATION, VOLUME, or SPEED.
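As a rough sketch of the directory layout described above (the root path stands in for the bracketed placeholder, and the helper name is invented for illustration):

```python
import os

# Valid report_type values, per the text above.
REPORT_TYPES = {"CLASSIFICATION", "VOLUME", "SPEED"}

def source_file_dir(root, report_type):
    """Return the SOURCE_FILES subdirectory where translated reports
    of the given type are stored. `root` is a placeholder for the
    data-friendly traffic count root directory."""
    if report_type not in REPORT_TYPES:
        raise ValueError(f"unknown report type: {report_type}")
    return os.path.join(root, "SOURCE_FILES", report_type)
```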

The translated traffic counts are merged into master tables located here: [path to data-friendly traffic count root directory]\MASTER_DATA\
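The merge step might look something like this minimal sketch, assuming the translated reports of a given type share a common header row; file discovery and the real master-table format are omitted:

```python
import csv
import io

def merge_to_master(report_streams):
    """Merge rows from several translated CSV reports (open text
    streams) into one master list of row dicts. Assumes each report
    shares the same header."""
    master = []
    for stream in report_streams:
        master.extend(csv.DictReader(stream))
    return master

# Illustrative input only; real reports have many more columns.
a = io.StringIO("Data File,Site Code,Count\nRambleLn802BlkBD,802,10\n")
b = io.StringIO("Data File,Site Code,Count\nLamar100BlkBD,100,20\n")
print(merge_to_master([a, b]))
```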

Public Datasets

Traffic count data are available in four public datasets, each of which shares the common 'Data File' and 'Site Code' fields. These fields can be used to join related records across the datasets.
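A minimal sketch of joining records from two of the datasets on the shared fields (field names follow the text; the actual column labels on the portal may differ, and the sample values are invented):

```python
def join_on_study(left_rows, right_rows):
    """Inner-join two lists of record dicts on the shared
    ('Data File', 'Site Code') key pair."""
    index = {(r["Data File"], r["Site Code"]): r for r in right_rows}
    joined = []
    for left in left_rows:
        key = (left["Data File"], left["Site Code"])
        if key in index:
            # Merge the matching records; shared key fields are identical.
            joined.append({**left, **index[key]})
    return joined
```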