This repository has been archived by the owner on Jun 23, 2023. It is now read-only.

Cleanup for v2.0.0
Use different pipeline, ditch genetic algorithms

Signed-off-by: Lester James V. Miranda <lj@thinkingmachin.es>
ljvmiranda921 committed Dec 3, 2018
1 parent 7c4836d commit ade154c
Showing 31 changed files with 52 additions and 2,998 deletions.
155 changes: 17 additions & 138 deletions README.md
@@ -1,151 +1,30 @@
# christmAIs

**christmAIs** ("krees-ma-ees") is text-to-abstract art generation for the holidays!

This project takes inspiration from Tom White's [perception
engines](https://medium.com/artists-and-machine-intelligence/perception-engines-8a46bc598d57)
and his [drawing system](https://github.com/dribnet/dopes) to generate abstract
art. This work converts any input string into abstract art by:
- finding the most similar [Quick, Draw!](https://quickdraw.withgoogle.com/data) class using [GloVe](https://nlp.stanford.edu/projects/glove/);
- drawing the nearest class using a Variational Autoencoder (VAE) called [Sketch-RNN](https://arxiv.org/abs/1704.03477); and
- applying [neural style transfer](https://arxiv.org/abs/1508.06576) to the resulting image.
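The first step, picking the nearest Quick, Draw! class, boils down to cosine similarity between word vectors. A minimal sketch with toy vectors standing in for real GloVe embeddings (the vectors, class list, and helper names below are illustrative, not the project's actual API):

```python
import numpy as np

# Toy 3-d embeddings standing in for real GloVe vectors (illustrative only).
EMBEDDINGS = {
    "tree":    np.array([0.9, 0.1, 0.0]),
    "forest":  np.array([0.8, 0.2, 0.1]),
    "bicycle": np.array([0.0, 0.9, 0.3]),
    "car":     np.array([0.1, 0.8, 0.4]),
}

QUICKDRAW_CLASSES = ["tree", "bicycle"]  # tiny subset for illustration

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest_class(word):
    """Return the Quick, Draw! class whose embedding is most similar to `word`."""
    query = EMBEDDINGS[word]
    return max(QUICKDRAW_CLASSES, key=lambda c: cosine(query, EMBEDDINGS[c]))

print(nearest_class("forest"))  # "forest" sits closer to "tree" than to "bicycle"
```

The same idea scales to the full GloVe vocabulary and the ~345 Quick, Draw! classes.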

Given a text input, a FastText model converts it into an 8-bit embedding, which
is then used as a random seed for the drawing system. The generated images are
fed to an ImageNet-trained classifier for prediction. The idea is to keep
perturbing the images until the classifier recognizes the target class
(tree, shopping cart, etc.). This results in images that look like this:

![alt text](https://raw.githubusercontent.com/username/projectname/branch/path/to/img.png)
![alt text](https://raw.githubusercontent.com/username/projectname/branch/path/to/img.png)
![alt text](https://raw.githubusercontent.com/username/projectname/branch/path/to/img.png)

## Requirements

Please see `requirements.txt` for all dependencies and `requirements-dev.txt` for dev dependencies.

## Setup

In addition, see `build.sh` for the setup steps needed to build your
environment.

First, clone this repository to your local machine:

```shell
$ git clone https://github.com/thinkingmachines/christmAIs.git
```
You'll be prompted for your username and password. If you have 2FA enabled (as you should), you will need to [use a personal access token](https://github.com/settings/tokens) instead of your password.

It is highly recommended to use a virtual environment to set this project up
and install the dependencies:

```shell
$ cd christmAIs
$ virtualenv venv
$ source venv/bin/activate
$ pip install -r requirements.txt # or requirements-dev.txt
```

## Usage

Three components are needed for this perception engine to work:
- `christmais.FastTextWrapper`: maps a string into an 8-bit vector
- `christmais.Artist`: maps an 8-bit vector into an image
- `christmais.Predictor`: classifies an image into a particular object

In addition, there is also a `christmais.Trainer` class that takes all three
components, then performs a random walk in order to find the abstract art that
best resembles a target object.

### Map a string into an 8-bit vector

This module contains a wrapper for `gensim.FastText` to create word embeddings
for a given text.

```python
from christmais import FastTextWrapper
from nltk.corpus import brown  # or any other corpus; run nltk.download('brown') first if needed

# Train the model
model = FastTextWrapper(sentences=brown.sents())
# Embed a text
my_text = "Thinking Machines Data Science"
model.transform(my_text)
```

Or, you can simply use a FastText model pre-trained on the Brown corpus (note:
if no `.model` file is found in your `/tmp/` directory, it trains as usual):

```python
from christmais import get_fasttext_pretrained
# Assuming that /tmp/brown_fasttext.model exists
model = get_fasttext_pretrained(load=True)
# Embed a text
my_text = 'Thinking Machines Data Science'
model.transform(my_text)
```
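The load-or-train behavior described above is a simple caching pattern. A minimal sketch, assuming a pickled model and a hypothetical `train_model` function (neither reflects the package's actual internals):

```python
import os
import pickle

CACHE_PATH = "/tmp/brown_fasttext.model"  # hypothetical cache location

def train_model():
    """Stand-in for the expensive FastText training step."""
    return {"trained": True}  # a real implementation returns a model object

def get_pretrained(load=True, path=CACHE_PATH):
    """Load a cached model if one exists; otherwise train and cache it."""
    if load and os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    model = train_model()
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return model
```

The first call pays the training cost and writes the cache; later calls just hit the file on disk.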

### Map an 8-bit vector into an image

Once you have generated a word embedding, you can use it as the seed for the
drawing system, a.k.a. the `Artist` class:

```python
from christmais import (get_fasttext_pretrained, Artist)

model = get_fasttext_pretrained(load=True)
seed = model.transform('Thinking Machines Data Science')
artist = Artist(seed, dims=(224, 224))
artist.draw()
```

![](assets/artist1.png)
![](assets/artist2.png)
![](assets/artist3.png)
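The role of the seed can be illustrated with a toy drawing system: the embedding fixes the random state, so the same text always produces the same image. Everything below (the canvas, the rectangles, the seeding scheme) is a made-up stand-in for `Artist`'s real rendering:

```python
import numpy as np

def draw_from_seed(seed_vector, dims=(32, 32), n_shapes=5):
    """Render a deterministic toy 'drawing': random squares on a canvas,
    with all randomness derived from the seed vector."""
    # Derive an integer RNG seed from the embedding.
    rng = np.random.default_rng(abs(hash(tuple(seed_vector))) % (2**32))
    canvas = np.zeros(dims)
    h, w = dims
    for _ in range(n_shapes):
        y, x = rng.integers(0, h - 8), rng.integers(0, w - 8)
        canvas[y:y + 8, x:x + 8] = rng.random()  # paint an 8x8 square
    return canvas

seed = (0.1, 0.5, 0.9)  # stands in for an 8-bit embedding
img_a = draw_from_seed(seed)
img_b = draw_from_seed(seed)
# Same seed, same image: the input text fully determines the drawing.
```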

### Map an image into an ImageNet class

The generated image can then be used as input to a classifier trained on ImageNet.
You can supply an ImageNet class and the predictor will return the probability
(or confidence) that the image resembles that class.

```python
from christmais import Artist, Predictor, get_fasttext_pretrained

# Map text to seed
model = get_fasttext_pretrained(load=True)
seed = model.transform('Thinking Machines Data Science')

# Map seed to image
artist = Artist(seed, dims=(224, 224))
img = artist.draw()

# Map image to ImageNet class
# We want to check how well the classifier recognizes the image as an "iron"
predictor = Predictor()
score, results = predictor.predict(X=img, target="iron")
```

We can then print the results. Since no optimization has happened yet,
don't expect the score to be high at the very start!

```python
>>> print(score)
0.0003216064
```
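Under the hood, a confidence like this is just the softmax probability a classifier assigns to the target class. A toy illustration with made-up logits and a three-class list (the real `Predictor` wraps an ImageNet-trained network with 1,000 classes):

```python
import numpy as np

def target_probability(logits, class_names, target):
    """Softmax the raw logits and return the probability of `target`."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()  # subtract the max to stabilize the exponentials
    probs = np.exp(z) / np.exp(z).sum()
    return float(probs[class_names.index(target)])

classes = ["iron", "tree", "shopping cart"]
score = target_probability([2.0, 1.0, 0.5], classes, target="iron")
```

The probabilities across all classes sum to one, so a low target score simply means the network currently sees something else in the image.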

### The Trainer class

The three components above are integrated inside a `Trainer` class that enables
you to "grow" your images to look like ImageNet classes. In order to use this,
simply create an instance of `Trainer` by feeding it the input string, then
call the `train()` method with your own parameters:

```python
from christmais import Trainer

target = 'iron' # We want our abstract art to look like an iron
t = Trainer('Thinking Machines Data Science')
best_individual = t.train(target=target, steps=100)
```

The whole optimization scheme involves a genetic algorithm that improves the
population over time. At the end of training, the best individual after `N`
generations is returned.
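The accept-if-better loop at the heart of such a scheme can be sketched as a plain hill climb over a scalar seed; the toy `fitness` below stands in for the predictor's confidence score:

```python
import random

def fitness(seed):
    """Toy stand-in for the predictor's confidence: peaks at seed == 0.7."""
    return 1.0 - abs(seed - 0.7)

def hill_climb(seed, steps=200, step_size=0.05, rng=None):
    """Repeatedly perturb the seed, keeping a mutation only if it improves."""
    rng = rng or random.Random(42)
    best, best_score = seed, fitness(seed)
    for _ in range(steps):
        candidate = best + rng.uniform(-step_size, step_size)
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best_seed, best_score = hill_climb(0.1)
```

A genetic algorithm generalizes this by keeping a whole population of candidates and recombining the fittest ones each generation.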

```python
# after training
best_individual.artist.draw()
```

Alternatively, you can skip the manual setup steps above and use the provided
`Makefile`:

```shell
$ git clone git@github.com:thinkingmachines/christmAIs.git
$ cd christmAIs
$ make venv
$ make build # or make dev
```

## It's christmAIs time!
Binary file removed assets/artist1.png
Binary file removed assets/artist2.png
Binary file removed assets/artist3.png
Binary file added assets/book1.png
Binary file added assets/book2.png
Binary file added assets/sf1.png
3 changes: 0 additions & 3 deletions christmais/README.md

This file was deleted.

13 changes: 0 additions & 13 deletions christmais/__init__.py
Original file line number Diff line number Diff line change
@@ -1,16 +1,3 @@
# -*- coding: utf-8 -*-

"""Text-to-abstract art generation for the holidays!"""

from .embedder import get_fasttext_pretrained, FastTextWrapper
from .drawsys import Artist
from .predictor import Predictor
from .trainer import Trainer

__all__ = [
'get_fasttext_pretrained',
'FastTextWrapper',
'Artist',
'Predictor',
'Trainer',
]
