chart-recognizer is an image classification model trained specifically on financial charts from social media. It determines whether an image posted on social media, such as Twitter, is a financial chart or something else.
Social media users post a lot of useful financial information, including predictions about financial assets. However, it is often hard to tell whether the images they post carry useful information as well. This model was developed to fill that gap by recognizing whether an image is a financial chart.
I use this model in combination with my two other projects, FinTwit-bot and FinTwitBERT, to track market sentiment across Twitter.
chart-recognizer has been trained on three of my datasets; so far I have not found any other image dataset of financial charts. The following datasets were used to train the model (see the loading sketch after this list):
- StephanAkkerman/crypto-charts: 4,880 images.
- StephanAkkerman/stock-charts: 5,203 images.
- StephanAkkerman/fintwit-images: 4,579 images.
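If you want to experiment with the same data, the datasets can presumably be pulled from the Hugging Face Hub under the names listed above using the `datasets` library. This is only a minimal sketch: it assumes the datasets are public, expose a `train` split, and share a compatible schema, which may differ from the actual layout.

```python
from datasets import load_dataset, concatenate_datasets

# Dataset names as listed above; assumed to be hosted on the Hugging Face Hub
DATASET_NAMES = [
    "StephanAkkerman/crypto-charts",
    "StephanAkkerman/stock-charts",
    "StephanAkkerman/fintwit-images",
]

# Load the (assumed) train split of each dataset and merge them into one
splits = [load_dataset(name, split="train") for name in DATASET_NAMES]
combined = concatenate_datasets(splits)

print(combined)  # inspect the number of rows and the column names
```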
I have implemented two approaches to train the model on these datasets. The first loads all images into memory, which does not scale beyond roughly 10k images on 48 GB of RAM. The second unpacks the downloaded images to disk, which puts far less strain on RAM but requires some extra storage.
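As an illustration of the second, on-disk approach, the sketch below streams images from a folder with a standard PyTorch `ImageFolder` dataset and `DataLoader` instead of holding them all in memory. The folder layout, image size, and batch size are hypothetical and not taken from this repository's training code.

```python
import torch
from torchvision import datasets, transforms

# Hypothetical folder layout: data/unpacked/<class_name>/<image>.png
DATA_DIR = "data/unpacked"

# Basic preprocessing; the real training pipeline may use timm's transforms instead
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder reads images lazily from disk, so RAM usage stays low
dataset = datasets.ImageFolder(DATA_DIR, transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

for images, labels in loader:
    # images are loaded batch by batch from disk, e.g. shape (32, 3, 224, 224)
    break
```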
The model is fine-tuned from timm's EfficientNet and reaches 97.8% accuracy on the test set.
These are the latest results on the 10% test set:
- Accuracy: 97.8%
- F1-score: 96.9%
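For reference, metrics like these can be computed on a held-out split with scikit-learn. The sketch below is purely illustrative; the binary label encoding (chart vs. not a chart) and the default averaging are assumptions, not details taken from this repository's evaluation code.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical ground-truth and predicted labels for the 10% test split
y_true = [1, 0, 1, 1, 0, 1]  # 1 = chart, 0 = not a chart (assumed encoding)
y_pred = [1, 0, 1, 0, 0, 1]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.1%}")
print(f"F1-score: {f1_score(y_true, y_pred):.1%}")
```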
# Clone this repository
git clone https://github.com/StephanAkkerman/chart-recognizer
# Install required packages
pip install -r requirements.txt
The model can be found on Hugging Face and can be loaded directly with the timm library, as shown in the example below.
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform
# Load and set model to eval mode
model = timm.create_model("hf_hub:StephanAkkerman/chart-recognizer", pretrained=True)
model.eval()
# Create transform and get labels
transform = create_transform(**resolve_data_config(model.pretrained_cfg, model=model))
labels = model.pretrained_cfg["label_names"]
# Load and preprocess image
image = Image.open("img/examples/tweet_example.png").convert("RGB")
x = transform(image).unsqueeze(0)
# Get model output and apply softmax (no gradients needed for inference)
with torch.no_grad():
    probabilities = torch.nn.functional.softmax(model(x)[0], dim=0)
# Map probabilities to labels
output = {label: prob.item() for label, prob in zip(labels, probabilities)}
# Print the predicted probabilities
print(output)
If you use chart-recognizer in your research, please cite as follows:
@misc{chart-recognizer,
author = {Stephan Akkerman},
title = {chart-recognizer: A Specialized Image Model for Financial Charts},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/StephanAkkerman/chart-recognizer}}
}
Contributions are welcome! If you have a feature request, bug report, or proposal for code refactoring, please feel free to open an issue on GitHub. We appreciate your help in improving this project.
This project is licensed under the GPL-3.0 License. See the LICENSE file for details.