Divvy-Bikes-Scraper is a command-line tool that allows Divvy bike members to scrape their ride data from the Divvy website and export it to a CSV file. Divvy, which is operated by Lyft, does not have a public API for accessing ride data, so this tool utilizes reverse engineering techniques to make the necessary GraphQL requests.
To use the tool, users will need to have a Divvy account and provide their login credentials. The tool will then scrape the ride data from the Divvy website and export it to a CSV file, which can be opened with a spreadsheet program like Microsoft Excel or Google Sheets. The CSV file will include information such as the start and end dates of each ride, the start and end stations, the duration of the ride, as well as ride distance.
The tool is built in Python and uses the Requests library to make the authenticated HTTP calls. The GraphQL requests were reverse engineered using the Network tab in Chrome DevTools.
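The request flow can be sketched roughly as below. Note that the endpoint path, query text, and field names here are illustrative assumptions, not the actual Divvy GraphQL schema — the real values come from inspecting the Network tab:

```python
import json

# Hypothetical GraphQL endpoint -- the real URL was recovered from the
# Network tab in Chrome DevTools and may differ.
GRAPHQL_URL = "https://account.divvybikes.com/bikesharefe/graphql"

def build_ride_history_request(token: str, page: int = 0) -> dict:
    """Assemble the headers and JSON body for one paginated history query."""
    # Illustrative query shape; the actual operation and fields are assumptions.
    query = """
    query RideHistory($page: Int!) {
      rideHistory(page: $page) {
        startTime endTime startStation endStation duration distance
      }
    }"""
    return {
        "url": GRAPHQL_URL,
        "headers": {
            # The lyftAccessToken cookie value is sent as a bearer token.
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"query": query, "variables": {"page": page}}),
    }

req = build_ride_history_request("example-token", page=1)
print(req["headers"]["Authorization"])
```

A request built this way can then be sent with `requests.post(req["url"], headers=req["headers"], data=req["body"])` and the ride pages walked until the response comes back empty.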
- Create a virtual environment by navigating into the project directory and running the following command:

  ```
  python3 -m venv my_env
  ```

- Install the necessary Python libraries from `requirements.txt` by executing the following:

  ```
  pip3 install -r requirements.txt
  ```
- Navigate to https://account.divvybikes.com/ride-history and log into your Divvy Bikes account
- Open your browser's dev tools and navigate to the Network tab
- Copy the value of the `lyftAccessToken` stored in the `Cookie` request header and save it as the `AUTHORIZATION` secret in the project's `.env` file.

  Note: you should be able to find this `PUT` request by its domain address (account.divvybikes.com)
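With the token copied, the `.env` file would look something like this (the token value below is a placeholder, not a real credential):

```
AUTHORIZATION=eyJhbGciOi...paste-your-lyftAccessToken-value-here
```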
- Run the script by executing the following from your terminal:

  ```
  python3 script.py
  ```

- Verify that the script executed correctly by checking the print statements in your terminal. You should also have a file called `my_divvy_data.csv` in the project directory now.
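Beyond opening the export in a spreadsheet, the CSV can be inspected with a few lines of Python. The column names below are assumptions based on the fields described above, with an inline sample standing in for the real file:

```python
import csv
import io

# Inline sample mimicking my_divvy_data.csv; real column names may differ.
sample = io.StringIO(
    "start_time,end_time,start_station,end_station,duration,distance\n"
    "2023-01-05 08:10,2023-01-05 08:25,Clark St,State St,15,2.1\n"
)

# To read the actual export, swap the sample for open("my_divvy_data.csv").
rides = list(csv.DictReader(sample))
total_minutes = sum(float(r["duration"]) for r in rides)
print(f"{len(rides)} rides, {total_minutes:.0f} minutes total")
```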
TODO

- Create `.env` file for authorization bearer token
- Create `requirements.txt` file
- Clean up `script.py`
- Write up instructions to run with screenshots in `README.md`