
I found a way to execute both AWSIM and Autoware within Docker containers #81

st9540808 opened this issue Jan 29, 2023 · 8 comments

st9540808 commented Jan 29, 2023

I was trying to use AWSIM and Autoware within Docker containers for my research. It seems that some other people also want to use AWSIM within Docker, so I think it would be helpful to write down the steps I used.

The environment I used is shown in the table below

environment
OS               Ubuntu 22.04
autoware branch  awsim-stable
AWSIM version    v1.1.0

First, I built a new Docker image from a Dockerfile. This is because AWSIM seems to require an environment without ROS 2 present, but the ghcr images that Autoware ships already source the ROS 2 setup.bash for you, so you always end up with an environment with ROS 2 activated.
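As a quick sanity check (a sketch; the image tag is only an example), you can inspect what an image writes into /etc/bash.bashrc, which is where the Dockerfile injects the source line:

$ docker run --rm ghcr.io/autowarefoundation/autoware-universe:humble-latest-cuda \
    cat /etc/bash.bashrc
# if this prints "source /opt/ros/humble/setup.bash", every interactive
# shell in the container starts with ROS 2 already activated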

Autoware already provides instructions and scripts to build a docker image:
https://github.com/autowarefoundation/autoware/tree/main/docker#building-docker-images-on-your-local-machine

Next, I slightly modified the Dockerfile at autoware/docker/autoware-universe/Dockerfile so that it will not source setup.bash when building the image:

autoware/docker/autoware-universe/Dockerfile

## Create entrypoint
# hadolint ignore=DL3059
# RUN echo "source /opt/ros/${ROS_DISTRO}/setup.bash" > /etc/bash.bashrc
CMD ["/bin/bash"]

FROM devel as prebuilt
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

## Build and change permission for runtime data conversion
# RUN source /opt/ros/"$ROS_DISTRO"/setup.bash \
#   && colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release \
#   && find /autoware/install -type d -exec chmod 777 {} \;

## Create entrypoint
# RUN echo "source /autoware/install/setup.bash" > /etc/bash.bashrc
CMD ["/bin/bash"]

It might be helpful to change the image name and tag so that they do not conflict with the default image from ghcr. The image name and tag can be changed in autoware/docker/build.sh. For example, I set the tag to devel.tags=st9540808:$rosdistro-latest$image_name_suffix, which produces the name and tag st9540808:humble-latest-cuda. I also used the --no-cache option so that no cache is used when building the image. The part I modified is shown below:

autoware/docker/build.sh

docker buildx bake --no-cache --load --progress=plain -f "$SCRIPT_DIR/autoware-universe/docker-bake.hcl" \
    --set "*.context=$WORKSPACE_ROOT" \
    --set "*.ssh=default" \
    --set "*.platform=$platform" \
    --set "*.args.ROS_DISTRO=$rosdistro" \
    --set "*.args.BASE_IMAGE=$base_image" \
    --set "*.args.SETUP_ARGS=$setup_args" \
    --set "devel.tags=st9540808:$rosdistro-latest$image_name_suffix"
    # --set "prebuilt.tags=st9540808:$rosdistro-latest-prebuilt$image_name_suffix"

Then simply follow the instructions in Autoware to build the image:

$ ./docker/build.sh --platform linux/amd64
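Once the build finishes, the custom tag should show up locally (the repository name below is the example tag set above):

$ docker image ls st9540808
# should list st9540808:humble-latest-cuda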

Finally, after building the image, set the correct localhost settings within the container as described in https://tier4.github.io/AWSIM/GettingStarted/QuickStartDemo/, then run AWSIM_v1.1.0/AWSIM_demo.x86_64. The commands are:

$ rocker --nvidia --x11 --user --privileged --net host -- st9540808:humble-latest-cuda
...
# within container
$ export ROS_LOCALHOST_ONLY=1
$ export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
$ ./AWSIM_v1.1.0/AWSIM_demo.x86_64
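If rocker is not available, a roughly equivalent plain docker run invocation is sketched below (assuming the NVIDIA Container Toolkit is installed; the X11 flags may need adjusting for your setup):

$ docker run --rm -it --gpus all --net host --privileged \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    st9540808:humble-latest-cuda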

After that, run another container and enable ROS 2 to check whether the setup works. If everything works correctly, you can see the topics from AWSIM.
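For example, in the second container (a sketch; the path assumes the standard ROS 2 Humble install location):

$ source /opt/ros/humble/setup.bash
$ export ROS_LOCALHOST_ONLY=1
$ export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp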

$ ros2 topic list
/clock
/control/command/control_cmd
/control/command/emergency_cmd
/control/command/gear_cmd
/control/command/hazard_lights_cmd
/control/command/turn_indicators_cmd
/parameter_events
/rosout
/sensing/camera/traffic_light/camera_info
/sensing/camera/traffic_light/image_raw
/sensing/gnss/pose
/sensing/gnss/pose_with_covariance
/sensing/imu/tamagawa/imu_raw
/sensing/lidar/top/pointcloud_raw
/sensing/lidar/top/pointcloud_raw_ex
/vehicle/status/control_mode
/vehicle/status/gear_status
/vehicle/status/hazard_lights_status
/vehicle/status/steering_status
/vehicle/status/turn_indicators_status
/vehicle/status/velocity_status

If the topics show up correctly, the result is the same as shown in the documentation; the only difference is that AWSIM and Autoware are running in separate containers.
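To complete the demo, Autoware is then launched in the second container. A sketch based on the AWSIM Quick Start Demo documentation (the map path is a placeholder; substitute your own):

$ source /autoware/install/setup.bash
$ ros2 launch autoware_launch e2e_simulator.launch.xml \
    vehicle_model:=sample_vehicle sensor_model:=awsim_sensor_kit \
    map_path:=<your map path>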

Screenshot from 2023-01-29 16-29-39


shmpwk commented Jan 30, 2023

Excellent! It works perfectly for me!
We are discussing how to incorporate this into our tutorials.

@kasper-helm

Great work providing a thorough write-up.

I followed your steps exactly and was able to launch AWSIM_demo.x86_64 in the modified container. However, in the second container, I only see the following topics. I tried running both a second instance of the same modified docker image, as well as the default ghcr.io/autowarefoundation/autoware-universe, with the same result. Did you run into any issues seeing the topics from AWSIM?

/parameter_events
/rosout


st9540808 commented Feb 10, 2023

@kasper-helm

I ran into this problem before. It turned out I needed to enable multicast on the loopback interface on the host, along with some other settings, before running the containers, as described in the AWSIM quick start demo:

sudo sysctl -w net.core.rmem_max=2147483647
sudo ip link set lo multicast on
touch /tmp/cycloneDDS_configured

And the required environment variables should be set in both the AWSIM and Autoware containers:

export ROS_LOCALHOST_ONLY=1
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

So the overall steps are (condensed in the sketch after this list):

  1. Execute the required commands on the host
  2. Run a container with the modified image, set the environment variables, and execute AWSIM
  3. Run another container with the modified image, set the environment variables, source the setup.bash of ROS 2 and Autoware, and launch Autoware
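A condensed sketch of the three steps (the image name and commands are the ones used earlier in this thread):

# 1. on the host (from the AWSIM quick start demo)
sudo sysctl -w net.core.rmem_max=2147483647
sudo ip link set lo multicast on

# 2. first container: AWSIM
rocker --nvidia --x11 --user --privileged --net host -- st9540808:humble-latest-cuda
#    inside: export ROS_LOCALHOST_ONLY=1 and RMW_IMPLEMENTATION=rmw_cyclonedds_cpp,
#    then run ./AWSIM_v1.1.0/AWSIM_demo.x86_64

# 3. second container: Autoware
rocker --nvidia --x11 --user --privileged --net host -- st9540808:humble-latest-cuda
#    inside: export the same two variables, source the ROS 2 and Autoware
#    setup.bash files, then launch Autoware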


Sam827-r commented Apr 5, 2024

@st9540808 @shmpwk
This did not work for me. I am getting an error for these lines in the Dockerfile:

FROM devel as prebuilted
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

Can you please help me update the Dockerfile modifications?


isouf commented Apr 8, 2024

@Sam827-r I think this issue was introduced by the re-organisation of the Docker setup: https://github.com/autowarefoundation/autoware/tree/da22bdc9215edf188cdbf8ef34637b43956d9049

@oguzkaganozt what would be the most appropriate approach to avoid sourcing ROS 2 in the container, as recommended previously here?


ShoukatM commented May 6, 2024

@st9540808
Can you provide your Dockerfile and build.sh? That would be helpful.

@st9540808

@ShoukatM
I haven't used AWSIM for a long time and haven't kept track of the current state of running Autoware with Docker containers, so this method may no longer be best practice. However, after retrying the technique I used before, the Autoware stack and AWSIM can still communicate with each other, with each running in its own container.

Screenshot from 2024-05-07 00-22-19

Below is my current environment

environment
OS               Ubuntu 22.04
autoware branch  tag: 2023.10
AWSIM version    v1.1.0

The full Dockerfile, at autoware/docker/autoware-universe/Dockerfile:

# Image args should come at the beginning.
ARG BASE_IMAGE
ARG PREBUILT_BASE_IMAGE
# hadolint ignore=DL3006
FROM $BASE_IMAGE as devel
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

ARG ROS_DISTRO
ARG SETUP_ARGS

## Install apt packages
# hadolint ignore=DL3008
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install --no-install-recommends \
  git \
  ssh \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

## Copy files
COPY autoware.repos simulator.repos setup-dev-env.sh ansible-galaxy-requirements.yaml amd64.env arm64.env /autoware/
COPY ansible/ /autoware/ansible/
WORKDIR /autoware
RUN ls /autoware

## Add GitHub to known hosts for private repositories
RUN mkdir -p ~/.ssh \
  && ssh-keyscan github.com >> ~/.ssh/known_hosts

## Set up development environment
RUN --mount=type=ssh \
  ./setup-dev-env.sh -y $SETUP_ARGS universe \
  && pip uninstall -y ansible ansible-core \
  && mkdir src \
  && vcs import src < autoware.repos \
  && vcs import src < simulator.repos \
  && rosdep update \
  && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "$ROS_DISTRO" \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

## Clean up unnecessary files
RUN rm -rf \
  "$HOME"/.cache \
  /etc/apt/sources.list.d/cuda*.list \
  /etc/apt/sources.list.d/docker.list \
  /etc/apt/sources.list.d/nvidia-docker.list

## Register Vulkan GPU vendors
RUN curl https://gitlab.com/nvidia/container-images/vulkan/raw/dc389b0445c788901fda1d85be96fd1cb9410164/nvidia_icd.json -o /etc/vulkan/icd.d/nvidia_icd.json \
  && chmod 644 /etc/vulkan/icd.d/nvidia_icd.json
RUN curl https://gitlab.com/nvidia/container-images/opengl/raw/5191cf205d3e4bb1150091f9464499b076104354/glvnd/runtime/10_nvidia.json -o /etc/glvnd/egl_vendor.d/10_nvidia.json \
  && chmod 644 /etc/glvnd/egl_vendor.d/10_nvidia.json

## Register OpenCL GPU vendors
RUN mkdir -p /etc/OpenCL/vendors \
  && echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd \
  && chmod 644 /etc/OpenCL/vendors/nvidia.icd

## TODO: remove/re-evaluate after Ubuntu 24.04 is released
## Fix OpenGL issues (e.g. black screen in rviz2) due to old mesa lib in Ubuntu 22.04
## See https://github.com/autowarefoundation/autoware.universe/issues/2789
# hadolint ignore=DL3008
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y software-properties-common \
  && apt-add-repository ppa:kisak/kisak-mesa \
  && DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
  libegl-mesa0 libegl1-mesa-dev libgbm-dev libgbm1 libgl1-mesa-dev libgl1-mesa-dri libglapi-mesa libglx-mesa0 \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

## Create entrypoint
# hadolint ignore=DL3059
# RUN echo "source /opt/ros/${ROS_DISTRO}/setup.bash" > /etc/bash.bashrc
CMD ["/bin/bash"]

FROM devel as builder
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

## Build and change permission for runtime data conversion
# RUN source /opt/ros/"$ROS_DISTRO"/setup.bash \
#   && colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release \
#   && find /autoware/install -type d -exec chmod 777 {} \;

# hadolint ignore=DL3006
FROM $PREBUILT_BASE_IMAGE as prebuilt

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

ARG ROS_DISTRO
ARG SETUP_ARGS

## Install apt packages
# hadolint ignore=DL3008
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install --no-install-recommends \
  git \
  ssh \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

## Copy files
# COPY autoware.repos setup-dev-env.sh ansible-galaxy-requirements.yaml amd64.env arm64.env /autoware/
# COPY ansible/ /autoware/ansible/
# WORKDIR /autoware
# RUN ls /autoware

## Add GitHub to known hosts for private repositories
RUN mkdir -p ~/.ssh \
  && ssh-keyscan github.com >> ~/.ssh/known_hosts

## Set up runtime environment
# RUN --mount=type=ssh \
#   ./setup-dev-env.sh -y $SETUP_ARGS --no-cuda-drivers --runtime universe \
#   && pip uninstall -y ansible ansible-core \
#   && mkdir src \
#   && vcs import src < autoware.repos \
#   && rosdep update \
#   && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "$ROS_DISTRO" \
#   && rm -rf src \
#   && apt-get clean \
#   && rm -rf /var/lib/apt/lists/*

## Copy install folder from builder
# COPY --from=builder /autoware/install/ /autoware/install/

## Clean up unnecessary files
RUN rm -rf \
  "$HOME"/.cache \
  /etc/apt/sources.list.d/cuda*.list \
  /etc/apt/sources.list.d/docker.list \
  /etc/apt/sources.list.d/nvidia-docker.list

## Register Vulkan GPU vendors
ADD "https://gitlab.com/nvidia/container-images/vulkan/raw/dc389b0445c788901fda1d85be96fd1cb9410164/nvidia_icd.json" /etc/vulkan/icd.d/nvidia_icd.json
RUN chmod 644 /etc/vulkan/icd.d/nvidia_icd.json
ADD "https://gitlab.com/nvidia/container-images/opengl/raw/5191cf205d3e4bb1150091f9464499b076104354/glvnd/runtime/10_nvidia.json" /etc/glvnd/egl_vendor.d/10_nvidia.json
RUN chmod 644 /etc/glvnd/egl_vendor.d/10_nvidia.json

## Create entrypoint
# hadolint ignore=DL3059
# RUN echo "source /autoware/install/setup.bash" > /etc/bash.bashrc
CMD ["/bin/bash"]

The build.sh file, at autoware/docker/build.sh:

#!/usr/bin/env bash

set -e

SCRIPT_DIR=$(readlink -f "$(dirname "$0")")
WORKSPACE_ROOT="$SCRIPT_DIR/../"

# Parse arguments
args=()
while [ "$1" != "" ]; do
    case "$1" in
    --no-cuda)
        option_no_cuda=true
        ;;
    --platform)
        option_platform="$2"
        shift
        ;;
    --no-prebuilt)
        option_no_prebuilt=true
        ;;
    *)
        args+=("$1")
        ;;
    esac
    shift
done

# Set CUDA options
if [ "$option_no_cuda" = "true" ]; then
    setup_args="--no-nvidia"
    image_name_suffix=""
else
    setup_args="--no-cuda-drivers"
    image_name_suffix="-cuda"
fi

# Set prebuilt options
if [ "$option_no_prebuilt" = "true" ]; then
    targets=("devel")
else
    # default targets include devel and prebuilt
    targets=()
fi

# Set platform
if [ -n "$option_platform" ]; then
    platform="$option_platform"
else
    platform="linux/amd64"
    if [ "$(uname -m)" = "aarch64" ]; then
        platform="linux/arm64"
    fi
fi

# Load env
source "$WORKSPACE_ROOT/amd64.env"
if [ "$platform" = "linux/arm64" ]; then
    source "$WORKSPACE_ROOT/arm64.env"
fi

# https://github.com/docker/buildx/issues/484
export BUILDKIT_STEP_LOG_MAX_SIZE=10000000

set -x
docker buildx bake --no-cache --load --progress=plain -f "$SCRIPT_DIR/autoware-universe/docker-bake.hcl" \
    --set "*.context=$WORKSPACE_ROOT" \
    --set "*.ssh=default" \
    --set "*.platform=$platform" \
    --set "*.args.ROS_DISTRO=$rosdistro" \
    --set "*.args.BASE_IMAGE=$base_image" \
    --set "*.args.PREBUILT_BASE_IMAGE=$prebuilt_base_image" \
    --set "*.args.SETUP_ARGS=$setup_args" \
    --set "devel.tags=st9540808/aw-2023.10-amd64:$rosdistro-latest-2404$image_name_suffix" \
    # --set "prebuilt.tags=ghcr.io/autowarefoundation/autoware-universe:$rosdistro-latest-prebuilt$image_name_suffix" \
    "${targets[@]}"
set +x
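The invocation is unchanged from the original instructions:

$ ./docker/build.sh --platform linux/amd64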

After building the Docker image, you still need to build the Autoware stack. Then, following the instructions in the AWSIM Quick Start Demo, you should be able to run Autoware with AWSIM just like the screenshot at the beginning.
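For reference, a minimal sketch of that build step inside the container (assuming the workspace is at /autoware, as in the Dockerfile above):

$ cd /autoware
$ source /opt/ros/humble/setup.bash
$ colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release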


ShoukatM commented May 6, 2024

(ShoukatM quotes st9540808's previous reply in full.)

Hi Taiyou Kuo,
I thought that using your technique would create a prebuilt image of the Autoware stack; as you described above, it is now clear that the stack still needs to be built after the Docker image is built.
Great work, and thanks for the quick response.
