We design a module that generates photo-realistic, high-resolution images from user-defined prompts. The module builds on the pretrained FLUX.1-schnell model published on Hugging Face by Black Forest Labs.
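Under the hood, the module relies on the text-to-image pipeline that the `diffusers` library exposes for this model. As a minimal sketch of how the pretrained weights can be loaded (assuming the `diffusers` and `torch` packages; the module's actual internals may differ):

```python
import torch
from diffusers import FluxPipeline

# Download (or load from the local cache) the pretrained FLUX.1-schnell
# weights from the black-forest-labs repository on Hugging Face.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,  # reduced precision keeps memory usage manageable
)
```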
Using Conda:

- Install Conda, if it is not already installed.
- Clone the repository:

  ```bash
  git clone https://github.com/byrkbrk/generating-by-prompt-flux-schnell.git
  ```

- Change into the repository directory:

  ```bash
  cd generating-by-prompt-flux-schnell
  ```

- Create the environment:

  ```bash
  conda env create -f environment.yaml
  ```

- Activate the environment:

  ```bash
  conda activate generating-by-prompt-flux-schnell
  ```
Alternatively, using pip:

- Download and install Python 3.11.
- Clone the repository:

  ```bash
  git clone https://github.com/byrkbrk/generating-by-prompt-flux-schnell.git
  ```

- Change into the repository directory:

  ```bash
  cd generating-by-prompt-flux-schnell
  ```

- Install the required packages using `pip`:

  ```bash
  pip install -r requirements.txt
  ```
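Whichever setup path you used, a quick sanity check (a minimal snippet, assuming `torch` and `diffusers` are among the installed dependencies) confirms the core packages import and shows which inference device is available:

```python
import torch
import diffusers

print("torch:", torch.__version__)
print("diffusers:", diffusers.__version__)

# Pick the best available device, mirroring the --device choices of generate.py.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"
print("device:", device)
```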
Check out how to use it:

```bash
python3 generate.py --help
```
Output:

```
Generate images by prompt using Flux-schnell

positional arguments:
  prompt                Prompt that be used during inference

options:
  -h, --help            show this help message and exit
  --num_inference_steps NUM_INFERENCE_STEPS
                        Number of inference steps used during generating
  --device {cuda,mps,cpu}
                        The device used during inference. Default: `None`
  --enable_sequential_cpu_offload
                        Enables sequential cpu offload during inference
```
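About the two main options: `--device` selects the accelerator, while `--enable_sequential_cpu_offload` presumably maps to the diffusers method of the same name, which keeps submodules on the CPU and moves them to the accelerator one at a time, trading inference speed for a much smaller memory footprint. A hypothetical helper illustrating how these options could be wired up (`load_pipeline` is for illustration, not a function from the repository):

```python
import torch
from diffusers import FluxPipeline

def load_pipeline(device: str | None = None,
                  enable_sequential_cpu_offload: bool = False) -> FluxPipeline:
    """Hypothetical helper mirroring the CLI options above."""
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    if enable_sequential_cpu_offload:
        # Offloading manages device placement itself, so no .to(device) here.
        pipe.enable_sequential_cpu_offload()
    elif device is not None:
        pipe = pipe.to(device)  # "cuda", "mps", or "cpu"
    return pipe
```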
Execute the following code blocks to generate the corresponding images displayed below. The results will be saved into the folder `./generated-images`.
```bash
python3 generate.py \
"an image of a turtle in Picasso style" \
--num_inference_steps 4 \
--enable_sequential_cpu_offload
```

```bash
python3 generate.py \
"an image of a turtle in Camille Pissarro style" \
--num_inference_steps 4 \
--enable_sequential_cpu_offload
```

```bash
python3 generate.py \
"an image of a turtle in Claude Monet style" \
--num_inference_steps 4 \
--enable_sequential_cpu_offload
```
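Each of the commands above presumably boils down to a single pipeline call plus a save; a rough plain-`diffusers` equivalent of the first command (the output file name is illustrative, not the script's actual naming scheme):

```python
import os
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # fit within limited GPU memory

prompt = "an image of a turtle in Picasso style"
# FLUX.1-schnell is distilled to produce good results in very few steps.
image = pipe(prompt, num_inference_steps=4).images[0]

os.makedirs("generated-images", exist_ok=True)
image.save(os.path.join("generated-images", "turtle-picasso.png"))  # illustrative name
```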
Check out how to use it:

```bash
python3 app.py --help
```
Output:

```
Generate image using Flux-schnell via Gradio

options:
  -h, --help            show this help message and exit
  --enable_sequential_cpu_offload
                        Enables sequential cpu offload
  --share               Allows Gradio to produce public link
```
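For orientation, a stripped-down Gradio app of this shape (a hedged sketch, not the repository's actual `app.py`) shows how `--share` maps to Gradio's `share` argument:

```python
import gradio as gr
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()

def generate(prompt: str):
    # Return a PIL image for Gradio to display.
    return pipe(prompt, num_inference_steps=4).images[0]

demo = gr.Interface(fn=generate, inputs="text", outputs="image",
                    title="Generate image using Flux-schnell")
# share=True would ask Gradio to create a temporary public link,
# which is what the --share flag exposes.
demo.launch(share=False)
```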
To run the app on your local device, execute the following:

```bash
python3 app.py \
--enable_sequential_cpu_offload
```
Then, visit the URL http://127.0.0.1:7860 in your browser to open the interface displayed below: