Machine learning on CPUs without AVX: howto #300
Replies: 19 comments 47 replies
-
One line in the ML container's log is intriguing me, though:
More to read here: https://www.intel.com/content/www/us/en/developer/articles/guide/optimization-for-tensorflow-installation-guide.html I don't have a spare 6 hours at the moment to rebuild the whole thing, so... to be continued.
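For anyone unsure whether their CPU is even affected, here is a quick sketch to check for the AVX flag (Linux-only; the helper function name is my own, not from any tool mentioned above):

```shell
#!/bin/sh
# Sketch: check whether the CPU advertises AVX, the instruction set
# the stock TensorFlow binaries are compiled against. Linux-only.
has_flag() {
    # $1 = flag name, $2 = space-separated CPU flags string
    echo "$2" | grep -qw "$1"
}

CPU_FLAGS=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null)
if has_flag avx "$CPU_FLAGS"; then
    echo "AVX supported: stock builds should work"
else
    echo "no AVX: a custom build is needed"
fi
```

Note that `grep -qw` matches whole words only, so a CPU reporting `avx2` but not `avx` would not false-positive.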
-
Hey, thank you, this implementation will make a lot of people happy.
-
Edited: made an easier version.
-
Guard the file
-
To avoid the warnings (and improve performance), I should add a few options to my build for my J4105 CPU:
So
I don't know why
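The actual flags didn't survive in this export, so I can only guess at what was meant. As a sketch (the exact set is my assumption, not from the original post): the J4105 is a Goldmont Plus part, which has SSE4.2 but no AVX, so the build invocation would look something like:

```shell
# Sketch only: flag choice is an assumption for a Goldmont Plus (J4105).
# -march=goldmont-plus enables SSE4.2 without emitting AVX instructions.
bazel build -c opt \
  --copt=-march=goldmont-plus \
  --config=monolithic \
  //tensorflow/tools/lib_package:libtensorflow
```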
-
If there is interest, I might pursue cross-compiling for Raspberry Pi.
-
Sorry, probably the wrong place for this, but I couldn't find an answer searching online.

```
podman build -t my-immich-machine-learning .
STEP 1/4: FROM altran1502/immich-machine-learning:release
```

Looks good, I think, but when I try to start my docker compose, I get this:

```
Error response from daemon: Get "http://localhost/v2/": dial tcp 127.0.0.1:80: connect: connection refused
```

Any thoughts on what's happening, or how to fix it?
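A guess at the cause (an assumption based on the error text, not a confirmed diagnosis): the image was built with podman, but docker compose talks to the Docker daemon, which cannot see podman's image store and falls back to trying to pull from a registry on localhost. If the image is in the store of the engine compose actually uses, a service entry along these lines keeps compose from pulling (service name and tag are placeholders):

```yaml
# Hypothetical compose fragment, names are placeholders
machine-learning:
  image: my-immich-machine-learning:latest
  pull_policy: never   # use the locally built image, never pull
```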
-
Does anyone know what might be wrong? I tried to compile it (on the same CPU: J4105), but it keeps erroring:
-
For some reason my ML container fails to start and throws the following error:
EDIT:
-
Updated the instructions to match the tfjs version to Immich, and added the libomp-dev dependency (thanks @sbkg0002).
-
It doesn't work for me; I got this error when running the container:

My Dockerfile:

```dockerfile
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y curl git python3 python3-dev python3-pip apt-transport-https curl gnupg
RUN curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor >bazel-archive-keyring.gpg && \
    mv bazel-archive-keyring.gpg /usr/share/keyrings && \
    echo "deb [arch=amd64 signed-by=/usr/share/keyrings/bazel-archive-keyring.gpg] https://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list && \
    apt-get update && apt-get install -y bazel bazel-5.0.0
RUN pip3 install --upgrade pip && \
    pip install -U --user pip numpy wheel packaging requests opt_einsum && \
    pip install -U --user keras_preprocessing --no-deps
WORKDIR /tensorflow
VOLUME /tensorflow/output
COPY entrypoint.sh /tensorflow/entrypoint.sh
RUN chmod +x /tensorflow/entrypoint.sh
ENTRYPOINT ["/tensorflow/entrypoint.sh"]
CMD ["sh"]
```

entrypoint.sh:

```sh
#!/bin/sh
cd output
# clone on the first run, update on later runs
if cd tensorflow
then git pull
else
    git clone https://github.com/tensorflow/tensorflow.git
    cd tensorflow
fi
git checkout v3.19.0
./configure
bazel build --config=mkl -c opt --copt=-march=native --config=monolithic //tensorflow/tools/lib_package:libtensorflow
cp bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz /tensorflow/output/libtensorflow.tar.gz
```

Docker build command:
-
So I got another error after letting the container run:
Run command:
-
@alextran1502, are you willing to officially provide this image, to make it easier for everyone to use?
-
@bertmelis thank you so much for this guide, it worked perfectly for me! However, I'd suggest changing the second Dockerfile to this:

```dockerfile
FROM altran1502/immich-machine-learning:release
COPY libtensorflow.tar.gz node_modules/@tensorflow/tfjs-node/deps/libtensorflow.tar.gz
RUN apt-get update && \
    apt-get -y --no-install-recommends install libomp-dev && \
    apt-get clean && \
    cd node_modules/@tensorflow/tfjs-node/deps && \
    tar -xf libtensorflow.tar.gz && \
    rm libtensorflow.tar.gz
```

This way, libtensorflow.tar.gz is not contained in the final image :)
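For completeness, the corresponding build step might look like this (the image tag is a placeholder of mine); deleting the tarball in the same RUN instruction that extracts it is what keeps that layer small:

```shell
# Run from the directory containing the Dockerfile and libtensorflow.tar.gz
docker build -t my-immich-machine-learning-noavx .
```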
-
Just a link to the prebuilt image. You can also find up-to-date files there to do it yourself:
-
I was impatient, so I took @bertmelis's tfjs build and updated the docker image. You can find it here: https://hub.docker.com/r/snuupy/immich-machine-learning-noavx
-
Just a reminder: as an alternative to using ready-made images, you can also refer to a Containerfile/Dockerfile in your compose file. Of course, you'll still need the non-AVX tfjs.
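A sketch of what that could look like (paths and service name are my assumptions, not from the post):

```yaml
# Hypothetical compose fragment: build the ML image from a local Containerfile
machine-learning:
  build:
    context: ./my-immich-ml
    dockerfile: Containerfile
```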
-
The current release works without requiring AVX.
-
Hi all,
-
Prepare
Create the working directories
Create the container for building Tensorflow
Containerfile in 'build-tf' directory:
entrypoint.sh in 'build-tf' directory:
Containerfile in 'my-immich-ml' directory:
This is what my directory structure looks like:
Start building!
Build Tensorflow
Use Docker or Podman, inside the 'build-tf' directory
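The exact commands weren't preserved in this export; a sketch of what they might be (container and image names are assumptions, matching the `tfbuild` name used below):

```shell
# Sketch: build the builder image, then run it with the output
# directory mounted so libtensorflow.tar.gz lands in my-immich-ml
podman build -t tfbuild .
podman run -d --name tfbuild -v ../my-immich-ml:/tensorflow/output tfbuild
```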
Running the container takes a long time. You can check the progress from time to time with
podman logs tfbuild
When done, you will find a file called libtensorflow.tar.gz in the 'my-immich-ml' directory.
Build the machine learning container
Use Docker or Podman, inside the 'my-immich-ml' directory
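Again a sketch only (the image tag is my placeholder):

```shell
# Sketch: assumes libtensorflow.tar.gz is already in this directory
podman build -t my-immich-ml -f Containerfile .
```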
Launch Immich with your custom ML container
Adjust the docker-compose file:
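The adjustment is presumably pointing the ML service at the custom image; a sketch (service name and tag are my assumptions):

```yaml
# Hypothetical: swap the ML service's image for the custom build
machine-learning:
  image: localhost/my-immich-ml:latest
```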
Launch as usual.