This repository contains the materials for the D-Lab Python Text Analysis workshop. We recommend attending Python Fundamentals, Python Data Wrangling, and Python Machine Learning Fundamentals prior to this workshop.
Check D-Lab's Learning Pathways to figure out which of our workshops to take!
This 3-part workshop will prepare participants to move forward with research that uses text analysis, with a special focus on social science applications. We explore fundamental approaches to applying computational methods to text in Python. We cover some of the major packages used in natural language processing, including scikit-learn, NLTK, spaCy, and Gensim.
- Part 1: Preprocessing. How do we standardize and clean text documents? Text data is noisy, and we often need to develop a pipeline in order to standardize the data to better facilitate computational modeling. You will learn common and task-specific operations of preprocessing, becoming familiar with commonly used NLP packages and what they are capable of. You will also learn about tokenizers, and how they have changed since the advent of Large Language Models.
- Part 2: Bag-of-words. In order to do any computational analysis on the text data, we need to devise approaches to convert text into a numeric representation. You will learn how to convert text data to a frequency matrix, and how TF-IDF complements the Bag-of-Words representation. You will also learn about parameter settings of a vectorizer and apply sentiment classification to vectorized text data.
- Part 3: Word Embeddings. Word Embeddings underpin nearly all modern language models. In this workshop, you will learn the differences between a bag-of-words representation and word embeddings. You will be introduced to calculating cosine similarity between words, and learn how word embeddings can suffer from biases.
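As a small taste of Parts 1 and 2, here is a minimal sketch of preprocessing and a bag-of-words count matrix using only the Python standard library. It is illustrative only: the workshop itself uses NLTK, spaCy, and scikit-learn's `CountVectorizer`, and this toy corpus is made up for the example.

```python
import re
from collections import Counter

def preprocess(text):
    """Lowercase the text and split it into word tokens, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

# A toy corpus of two short documents
corpus = [
    "The cat sat on the mat.",
    "The dog sat on the log.",
]

# Tokenize each document, then build a shared vocabulary across the corpus
tokenized = [preprocess(doc) for doc in corpus]
vocab = sorted({token for doc in tokenized for token in doc})

# Bag-of-words: each document becomes a vector of word counts over the vocabulary
vectors = [[Counter(doc)[word] for word in vocab] for doc in tokenized]

print(vocab)    # ['cat', 'dog', 'log', 'mat', 'on', 'sat', 'the']
print(vectors)  # [[1, 0, 0, 1, 1, 1, 2], [0, 1, 1, 0, 1, 1, 2]]
```

Note how word order is discarded: each document is reduced to how often each vocabulary word appears, which is exactly what makes the representation a "bag" of words.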
The materials for this workshop series are designed to build on each other. Part 2 assumes familiarity with the content from Part 1, and Part 3 similarly requires understanding of both preceding parts.
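The cosine similarity measure introduced in Part 3 can likewise be sketched in a few lines of standard-library Python. The three-dimensional "embeddings" below are invented for illustration; real word embeddings (e.g., those loaded via Gensim) typically have hundreds of dimensions.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings: "cat" and "dog" point in similar directions, "car" does not
cat = [0.9, 0.8, 0.1]
dog = [0.8, 0.9, 0.2]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, dog))  # close to 1: semantically similar words
print(cosine_similarity(cat, car))  # much smaller: dissimilar words
```

Because cosine similarity depends only on the angle between vectors, two words are "close" when their embeddings point in similar directions, regardless of vector length.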
Anaconda is a useful package management software that allows you to run Python and Jupyter notebooks easily. Installing Anaconda is the easiest way to make sure you have all the necessary software to run the materials for this workshop. If you would like to run Python on your own computer, complete the following steps prior to the workshop:
- Download and install Anaconda (Python 3.9 distribution). Click the "Download" button.
- Download the Python Text Analysis workshop materials:
    - Click the green "Code" button in the top right of the repository information.
    - Click "Download Zip".
    - Extract this file to a folder on your computer where you can easily access it (we recommend Desktop).
- Optional: if you're familiar with `git`, you can instead clone this repository by opening a terminal and entering the command `git clone git@github.com:dlab-berkeley/Python-Text-Analysis.git`.
If you do not have Anaconda installed and the materials downloaded on your computer by the time the workshop starts, we strongly recommend using the D-Lab DataHub to run the materials for these lessons. You can access the DataHub by clicking the following button:
The DataHub downloads this repository, along with any necessary packages, and
allows you to run the materials in a Jupyter notebook that is stored on UC
Berkeley's servers. No installation is necessary from your end - you only need
an internet browser and a CalNet ID to log in. By using the DataHub, you can
save your work and come back to it at any time. When you want to return to your
saved work, just go straight to the DataHub, sign in, and click on the
Python-Text-Analysis folder.
If you don't have a Berkeley CalNet ID, you can still run these lessons in the cloud, by clicking this button:
Binder operates similarly to the D-Lab DataHub, but on a different set of servers. By using Binder, however, you cannot save your work.
Now that you have all the required software and materials, you need to run the code.
- Open the Anaconda Navigator application. You should see the green snake logo appear on your screen. Note that this can take a few minutes to load up the first time.
- Click the "Launch" button under "JupyterLab" and navigate through your file system on the left-hand pane to the `Python-Text-Analysis` folder you downloaded above. Note that, if you download the materials from GitHub, the folder name may instead be `Python-Text-Analysis-main`.
- Go to the `lessons` folder and find the notebook corresponding to the workshop you are attending.
- Press Shift + Enter (or Ctrl + Enter) to run a cell.
- You will need to install additional packages depending on which workshop you are attending. The install commands are performed in the notebooks as you proceed through each part of the workshop.
Note that all of the above steps can be run from the terminal, if you're familiar with how to interact with Anaconda in that fashion. However, using Anaconda Navigator is the easiest way to get started if this is your first time working with Anaconda.
- Computational Text Analysis Working Group (CTAWG)
- Info 256: Applied Natural Language Processing
- Speech and Language Processing by Jurafsky and Martin.
- Modern Deep Learning Techniques Applied to Natural Language Processing (online textbook)
D-Lab works with Berkeley faculty, research staff, and students to advance data-intensive social science and humanities research. Our goal at D-Lab is to provide practical training, staff support, resources, and space to enable you to use Python for your own research applications. Our services cater to all skill levels, and no programming, statistical, or computer science backgrounds are necessary. We offer these services in the form of workshops, one-to-one consulting, and working groups that cover a variety of research topics, digital tools, and programming languages.
Visit the D-Lab homepage to learn more about us. You can view our calendar for upcoming events, learn about how to utilize our consulting and data services, and check out upcoming workshops. Subscribe to our newsletter to stay up to date on D-Lab events, services, and opportunities.
D-Lab offers a variety of Python workshops, catered toward different levels of expertise.
- Python Geospatial Fundamentals
- Python Web Scraping and APIs
- Python Machine Learning
- Python Text Analysis
- Python Deep Learning
- Mingyu Yuan
- Pratik Sachdeva
- Ben Gebre-Medhin
- Laura Nelson
- Teddy Roland
- Geoff Bacon
- Caroline Le Pennec-Caldichoury
These materials have evolved over a number of years. They were first developed by Laura Nelson and Teddy Roland, with contributions and revisions made by Ben Gebre-Medhin, Geoff Bacon, Caroline Le Pennec-Caldichoury, and Pratik Sachdeva. They were revamped by Mingyu Yuan in the summer of 2024.