The SlugTools API provides a standards-compliant interface that developers can use to access UC Santa Cruz data. It serves scraped and organized data from university sites across several categories. Report any bugs, errors, or issues here.
We scrape data from different sites, combine it, and serve it in a standardized format. Scraping is done with BeautifulSoup and a few other libraries, and the API is served with Flask. Check out LEARN.md for details.
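As an illustrative sketch of "combine and serve in a standardized format" (the field names and helpers below are hypothetical, not the project's actual schema):

```python
# Hypothetical sketch: normalize records scraped from two different
# campus pages into one shared schema. Field and function names are
# illustrative only, not SlugTools' actual code.

def normalize_dining(record: dict) -> dict:
    # One source might label the venue "hall" and the item "entree".
    return {"location": record["hall"], "name": record["entree"]}

def normalize_cafe(record: dict) -> dict:
    # Another source might use "cafe" and "item" instead.
    return {"location": record["cafe"], "name": record["item"]}

def combine(dining: list[dict], cafes: list[dict]) -> list[dict]:
    # Serve every record in the same standardized shape.
    return [normalize_dining(r) for r in dining] + [normalize_cafe(r) for r in cafes]
```

Each scraper maps its source's quirks into one shared shape, so API consumers only ever see a single format.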
Some data is scraped and stored on startup, then simply returned on request. Other data is scraped on request to guarantee live, up-to-date results (e.g. weather, Waitz, and the catalog). The rest, such as menus and food items, is scraped periodically.
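The refresh strategies above could be sketched with a simple time-based cache (a hypothetical helper for illustration, not the project's actual code):

```python
import time

class TTLCache:
    """Hypothetical sketch: data older than ttl seconds is re-scraped,
    mirroring the live-data strategy. ttl=0 forces a scrape on every
    request; a very large ttl behaves like scrape-once-on-startup."""

    def __init__(self, scrape, ttl: float):
        self.scrape = scrape        # callable that fetches fresh data
        self.ttl = ttl
        self.value = None
        self.fetched_at = None      # monotonic timestamp of last scrape

    def get(self):
        now = time.monotonic()
        if self.fetched_at is None or now - self.fetched_at > self.ttl:
            self.value = self.scrape()
            self.fetched_at = now
        return self.value
```

A weather endpoint would use a short (or zero) TTL, while menu data could use a TTL of hours.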
Clone and set up a virtual env (or use PDM):

```shell
git clone https://github.slug.tools/api
cd api && python -m venv venv
source venv/bin/activate
python -m pip install -r requirements.txt
```
Set up accounts and save the keys in `.env`:

- Deta: `DETA_KEY` (database)
- OpenWeather: `OPENWEATHER_KEY`
- Sentry: `SENTRY_DSN` (analytics; optional)
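At startup the app reads those keys from the environment; a minimal stdlib sketch of loading a `.env` file (the project may well use `python-dotenv` instead, which is an assumption):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: copies KEY=VALUE lines into os.environ.
    Sketch only; python-dotenv's load_dotenv() is the usual choice."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and comments; keep only KEY=VALUE lines.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```

After loading, keys are available via `os.environ["DETA_KEY"]` and friends.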
Args:

- `--debug` (hot-reload)
- `--noscrape` (no scrape on startup)

Run:

```shell
python main.py
```
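The two flags could be parsed with `argparse`; a sketch of roughly how `main.py` might define them (the actual implementation is an assumption):

```python
import argparse

def parse_args(argv=None):
    # Sketch of the two documented flags; how main.py actually
    # parses them is an assumption.
    parser = argparse.ArgumentParser(description="SlugTools API server")
    parser.add_argument("--debug", action="store_true",
                        help="run with hot-reload enabled")
    parser.add_argument("--noscrape", action="store_true",
                        help="skip scraping data on startup")
    return parser.parse_args(argv)
```

`--debug` would typically be forwarded to Flask's reloader, and `--noscrape` would skip the startup scrape pass.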