Update Docker environment to Ubuntu 22, Python 3.10 #824

Merged
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
@@ -29,11 +29,11 @@
exclude: '^docs/conf.py'

default_language_version:
python: python3.8
python: python3.10

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.2.0
rev: v4.4.0
hooks:
- id: trailing-whitespace
exclude: '\.dat$'
@@ -56,13 +56,13 @@ repos:
- id: isort

- repo: https://github.com/psf/black
rev: 22.3.0
rev: 23.3.0
hooks:
- id: black
language_version: python3

- repo: https://github.com/PyCQA/flake8
rev: 3.9.2
rev: 6.0.0
hooks:
- id: flake8
# black-compatible flake-8 config
31 changes: 19 additions & 12 deletions docker/Dockerfile.finn
@@ -26,10 +26,10 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime
FROM ubuntu:jammy-20230126
LABEL maintainer="Yaman Umuroglu <yamanu@xilinx.com>"

ARG XRT_DEB_VERSION="xrt_202210.2.13.466_18.04-amd64-xrt"
ARG XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"

WORKDIR /workspace

@@ -57,12 +57,15 @@ RUN apt-get update && \
unzip \
zip \
locales \
lsb-core
lsb-core \
python3 \
python-is-python3 \
python3-pip
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
RUN locale-gen "en_US.UTF-8"

# install Verilator from source to get the right version
RUN apt-get install -y git perl python3 make autoconf g++ flex bison ccache libgoogle-perftools-dev numactl perl-doc libfl2 libfl-dev zlibc zlib1g zlib1g-dev
RUN apt-get install -y git perl make autoconf g++ flex bison ccache libgoogle-perftools-dev numactl perl-doc libfl2 libfl-dev zlib1g zlib1g-dev
RUN git clone https://github.com/verilator/verilator
RUN cd verilator && \
git checkout v4.224 && \
@@ -81,19 +84,23 @@ RUN rm /tmp/$XRT_DEB_VERSION.deb
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN rm requirements.txt

# install PyTorch
RUN pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116

# extra Python package dependencies (for testing and interaction)
RUN pip install pygments==2.4.1
RUN pip install ipykernel==5.5.5
RUN pip install pygments==2.14.0
RUN pip install ipykernel==6.21.2
RUN pip install jupyter==1.0.0 --ignore-installed
RUN pip install markupsafe==2.0.1
RUN pip install matplotlib==3.3.1 --ignore-installed
RUN pip install matplotlib==3.7.0 --ignore-installed
RUN pip install pytest-dependency==0.5.1
RUN pip install pytest-xdist[setproctitle]==2.4.0
RUN pip install pytest-parallel==0.1.0
RUN pip install pytest-xdist[setproctitle]==3.2.0
RUN pip install pytest-parallel==0.1.1
RUN pip install "netron>=5.0.0"
RUN pip install pandas==1.1.5
RUN pip install scikit-learn==0.24.1
RUN pip install tqdm==4.31.1
RUN pip install pandas==1.5.3
RUN pip install scikit-learn==1.2.1
RUN pip install tqdm==4.64.1
RUN pip install -e git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading

# extra dependencies from other FINN deps
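
With the base image switched from the PyTorch runtime image to plain ubuntu:jammy, the interpreter now comes from Ubuntu 22.04's python3/python-is-python3 packages and PyTorch is installed separately via pip. A minimal sanity check along these lines (not part of the PR; the exact version strings and CUDA availability depend on the host setup) can be run inside the rebuilt container:

```python
# Rough sanity check for the rebuilt image: Ubuntu 22.04 ships Python 3.10,
# and torch 1.13.1 is installed above from the cu116 wheel index.
# CUDA availability still depends on the host driver/container runtime.
import sys

import torch

print(sys.version)                      # expected: 3.10.x
print(torch.__version__)                # expected: 1.13.1+cu116
print("CUDA available:", torch.cuda.is_available())
```
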
5 changes: 3 additions & 2 deletions docker/finn_entrypoint.sh
@@ -54,8 +54,9 @@ recho () {
echo -e "${RED}ERROR: $1${NC}"
}

# qonnx
pip install --user -e ${FINN_ROOT}/deps/qonnx
# qonnx (using workaround for https://github.com/pypa/pip/issues/7953)
# to be fixed in future Ubuntu versions (https://bugs.launchpad.net/ubuntu/+source/setuptools/+bug/1994016)
pip install --no-build-isolation --no-warn-script-location -e ${FINN_ROOT}/deps/qonnx
# finn-experimental
pip install --user -e ${FINN_ROOT}/deps/finn-experimental
# brevitas
6 changes: 3 additions & 3 deletions fetch-repos.sh
@@ -27,9 +27,9 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

QONNX_COMMIT="20a34289cf2297d2b2bbbe75d6ac152ece86e3b4"
QONNX_COMMIT="0c980ef410c7c99b33c5b96486233f5a723ca1bc"
FINN_EXP_COMMIT="0aa7e1c44b20cf085b6fe42cff360f0a832afd2c"
BREVITAS_COMMIT="c65f9c13dc124971f14739349531bbcda5c2a4aa"
BREVITAS_COMMIT="d30ba0d6b3db4a333072624fa3d10827a686488d"
PYVERILATOR_COMMIT="766e457465f5c0dd315490d7b9cc5d74f9a76f4f"
CNPY_COMMIT="4e8810b1a8637695171ed346ce68f6984e585ef4"
HLSLIB_COMMIT="c17aa478ae574971d115afa9fa4d9c215857d1ac"
@@ -39,7 +39,7 @@ XIL_BDF_COMMIT="8cf4bb674a919ac34e3d99d8d71a9e60af93d14e"
KV260_BDF_COMMIT="98e0d3efc901f0b974006bc4370c2a7ad8856c79"
EXP_BOARD_FILES_MD5="30eecc497c31050bd46d10ea20eba232"

QONNX_URL="https://github.com/fastmachinelearning/qonnx.git"
QONNX_URL="https://github.com/iksnagreb/qonnx.git"
FINN_EXP_URL="https://github.com/Xilinx/finn-experimental.git"
BREVITAS_URL="https://github.com/Xilinx/brevitas.git"
PYVERILATOR_URL="https://github.com/maltanar/pyverilator.git"
14 changes: 6 additions & 8 deletions requirements.txt
@@ -3,19 +3,17 @@ clize==4.1.1
dataclasses-json==0.5.7
gspread==3.6.0
ipython==8.12.2
numpy==1.22.0
numpy==1.24.1
onnx==1.13.0
onnxoptimizer
onnxruntime==1.11.1
pre-commit==2.9.2
onnxruntime==1.15.0
pre-commit==3.3.2
protobuf==3.20.3
psutil==5.9.4
pyscaffold==3.2.1
scipy==1.5.2
pyscaffold==4.4
scipy==1.10.1
setupext-janitor>=1.1.2
sigtools==2.0.3
sphinx==5.0.2
sphinx_rtd_theme==0.5.0
toposort==1.5
toposort==1.7.0
vcdvcd==1.0.5
wget==3.2
5 changes: 4 additions & 1 deletion run-docker.sh
@@ -86,7 +86,7 @@ SCRIPTPATH=$(dirname "$SCRIPT")
: ${ALVEO_BOARD="U250"}
: ${ALVEO_TARGET_DIR="/tmp"}
: ${PLATFORM_REPO_PATHS="/opt/xilinx/platforms"}
: ${XRT_DEB_VERSION="xrt_202210.2.13.466_18.04-amd64-xrt"}
: ${XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"}
: ${FINN_HOST_BUILD_DIR="/tmp/$DOCKER_INST_NAME"}
: ${FINN_DOCKER_TAG="xilinx/finn:$(git describe --always --tags --dirty).$XRT_DEB_VERSION"}
: ${FINN_DOCKER_PREBUILT="0"}
@@ -201,6 +201,9 @@ DOCKER_EXEC+="-e PYNQ_PASSWORD=$PYNQ_PASSWORD "
DOCKER_EXEC+="-e PYNQ_TARGET_DIR=$PYNQ_TARGET_DIR "
DOCKER_EXEC+="-e OHMYXILINX=$OHMYXILINX "
DOCKER_EXEC+="-e NUM_DEFAULT_WORKERS=$NUM_DEFAULT_WORKERS "
# Workaround for FlexLM issue, see:
# https://community.flexera.com/t5/InstallAnywhere-Forum/Issues-when-running-Xilinx-tools-or-Other-vendor-tools-in-docker/m-p/245820#M10647
DOCKER_EXEC+="-e LD_PRELOAD=/lib/x86_64-linux-gnu/libudev.so.1 "
if [ "$FINN_DOCKER_RUN_AS_ROOT" = "0" ];then
DOCKER_EXEC+="-v /etc/group:/etc/group:ro "
DOCKER_EXEC+="-v /etc/passwd:/etc/passwd:ro "
8 changes: 4 additions & 4 deletions setup.cfg
@@ -34,12 +34,12 @@
name = finn
description = A Framework for Fast, Scalable Quantized Neural Network Inference
author = Yaman Umuroglu
author-email = yamanu@xilinx.com
author_email = yamanu@xilinx.com
license = new-bsd
long-description = file: README.md
long-description-content-type = text/markdown
long_description = file: README.md
long_description_content_type = text/markdown
url = https://xilinx.github.io/finn/
project-urls =
project_urls =
Documentation = https://finn.readthedocs.io/
# Change if running only on Windows, Mac or Linux (comma-separated)
platforms = any
2 changes: 1 addition & 1 deletion src/finn/analysis/fpgadataflow/post_synth_res.py
@@ -85,7 +85,7 @@ def get_instance_stats(inst_name):
row = root.findall(".//*[@contents='%s']/.." % inst_name)
if row != []:
node_dict = {}
row = row[0].getchildren()
row = list(row[0])
for (restype, ind) in restype_to_ind.items():
node_dict[restype] = int(row[ind].attrib["contents"])
return node_dict
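
The change from row[0].getchildren() to list(row[0]) is needed because xml.etree.ElementTree removed Element.getchildren() in Python 3.9, so it no longer exists on the Python 3.10 interpreter this PR moves to; converting the element to a list (or iterating it) is the supported replacement. A small standalone illustration, using made-up XML rather than the actual Vivado report format:

```python
# Element.getchildren() was deprecated and removed in Python 3.9;
# list(element) / iterating the element is the drop-in replacement.
import xml.etree.ElementTree as ET

row = ET.fromstring(
    "<tablerow>"
    "<tablecell contents='inst_0'/><tablecell contents='123'/>"
    "</tablerow>"
)
children = list(row)  # replaces the removed row.getchildren()
print([c.attrib["contents"] for c in children])  # ['inst_0', '123']
```
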
2 changes: 1 addition & 1 deletion src/finn/transformation/fpgadataflow/templates.py
@@ -135,7 +135,7 @@

create_bd_design "top"
if {$ZYNQ_TYPE == "zynq_us+"} {
create_bd_cell -type ip -vlnv xilinx.com:ip:zynq_ultra_ps_e:3.4 zynq_ps
create_bd_cell -type ip -vlnv xilinx.com:ip:zynq_ultra_ps_e:3.5 zynq_ps
apply_bd_automation -rule xilinx.com:bd_rule:zynq_ultra_ps_e -config {apply_board_preset "1" } [get_bd_cells zynq_ps]
#activate one slave port, deactivate the second master port
set_property -dict [list CONFIG.PSU__USE__S_AXI_GP2 {1}] [get_bd_cells zynq_ps]
4 changes: 2 additions & 2 deletions tests/brevitas/test_brevitas_avg_pool_export.py
@@ -31,7 +31,7 @@
import os
import torch
from brevitas.export import export_qonnx
from brevitas.nn import QuantAvgPool2d, QuantIdentity, QuantReLU
from brevitas.nn import TruncAvgPool2d, QuantIdentity, QuantReLU
from qonnx.core.datatype import DataType
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.infer_datatypes import InferDataTypes
@@ -73,7 +73,7 @@ def test_brevitas_avg_pool_export(
bit_width=input_bit_width,
return_quant_tensor=True,
)
quant_avgpool = QuantAvgPool2d(
quant_avgpool = TruncAvgPool2d(
kernel_size=kernel_size,
stride=stride,
bit_width=bit_width,
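
The QuantAvgPool2d-to-TruncAvgPool2d rename tracks the newer Brevitas API pinned by the updated BREVITAS_COMMIT; the constructor arguments used in the test stay the same. A hedged sketch of the renamed op, with illustrative bit widths and input shape that are not taken from the PR:

```python
# Sketch of the renamed Brevitas op: the QuantAvgPool2d call in the test
# becomes TruncAvgPool2d with the same kernel_size/stride/bit_width arguments.
# The bit widths and input shape below are arbitrary examples.
import torch
from brevitas.nn import QuantIdentity, TruncAvgPool2d

quant_id = QuantIdentity(bit_width=4, return_quant_tensor=True)
pool = TruncAvgPool2d(kernel_size=2, stride=2, bit_width=4)
out = pool(quant_id(torch.randn(1, 3, 8, 8)))
```
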
6 changes: 3 additions & 3 deletions tests/end2end/test_end2end_bnn_pynq.py
@@ -328,13 +328,13 @@ def test_export(self, topology, wbits, abits, QONNX_export):
(model, ishape) = get_trained_network_and_ishape(topology, wbits, abits)
chkpt_name = get_checkpoint_name(topology, wbits, abits, QONNX_export, "export")
if QONNX_export:
export_qonnx(model, torch.randn(ishape), chkpt_name)
export_qonnx(model, torch.randn(ishape), chkpt_name, opset_version=13)
qonnx_cleanup(chkpt_name, out_file=chkpt_name)
model = ModelWrapper(chkpt_name)
model = model.transform(ConvertQONNXtoFINN())
model.save(chkpt_name)
else:
export_finn_onnx(model, torch.randn(ishape), chkpt_name)
export_finn_onnx(model, torch.randn(ishape), chkpt_name, opset_version=13)
nname = "%s_w%da%d" % (topology, wbits, abits)
update_dashboard_data(topology, wbits, abits, "network", nname)
dtstr = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
@@ -374,7 +374,7 @@ def test_add_pre_and_postproc(self, topology, wbits, abits, QONNX_export):
chkpt_preproc_name = get_checkpoint_name(
topology, wbits, abits, QONNX_export, "preproc"
)
export_finn_onnx(totensor_pyt, torch.randn(ishape), chkpt_preproc_name)
export_finn_onnx(totensor_pyt, torch.randn(ishape), chkpt_preproc_name, opset_version=13)
assert os.path.isfile(chkpt_preproc_name)
# join preprocessing and core model
pre_model = ModelWrapper(chkpt_preproc_name)
2 changes: 1 addition & 1 deletion tests/end2end/test_end2end_cybsec_mlp.py
@@ -146,7 +146,7 @@ def test_end2end_cybsec_mlp_export(QONNX_export):
model.save(export_onnx_path)
else:
export_finn_onnx(
model_for_export, export_path=export_onnx_path, input_t=input_qt
model_for_export, export_path=export_onnx_path, input_t=input_qt, input_names=["onnx::Mul_0"]
)
assert os.path.isfile(export_onnx_path)
# fix input datatype
3 changes: 2 additions & 1 deletion tests/fpgadataflow/test_convert_to_hls_layers_cnv.py
@@ -38,7 +38,7 @@
from qonnx.custom_op.registry import getCustomOp
from qonnx.transformation.bipolar_to_xnor import ConvertBipolarMatMulToXnorPopcount
from qonnx.transformation.fold_constants import FoldConstants
from qonnx.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames
from qonnx.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames, GiveUniqueParameterTensors
from qonnx.transformation.infer_data_layouts import InferDataLayouts
from qonnx.transformation.infer_shapes import InferShapes
from qonnx.transformation.lower_convs_to_matmul import LowerConvsToMatMul
@@ -67,6 +67,7 @@ def test_convert_to_hls_layers_cnv_w1a1(fused_activation):
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveUniqueParameterTensors())
model = model.transform(GiveReadableTensorNames())
model = model.transform(Streamline())
model = model.transform(LowerConvsToMatMul())
4 changes: 3 additions & 1 deletion tests/fpgadataflow/test_convert_to_hls_layers_fc.py
@@ -39,7 +39,7 @@
from qonnx.custom_op.registry import getCustomOp
from qonnx.transformation.bipolar_to_xnor import ConvertBipolarMatMulToXnorPopcount
from qonnx.transformation.fold_constants import FoldConstants
from qonnx.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames
from qonnx.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames, GiveUniqueParameterTensors
from qonnx.transformation.infer_shapes import InferShapes

import finn.core.onnx_exec as oxe
@@ -64,6 +64,7 @@ def test_convert_to_hls_layers_tfc_w1a1():
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveUniqueParameterTensors())
model = model.transform(GiveReadableTensorNames())
model = model.transform(Streamline())
model = model.transform(ConvertBipolarMatMulToXnorPopcount())
@@ -135,6 +136,7 @@ def test_convert_to_hls_layers_tfc_w1a2():
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveUniqueParameterTensors())
model = model.transform(GiveReadableTensorNames())
model = model.transform(Streamline())
from finn.transformation.fpgadataflow.convert_to_hls_layers import (
2 changes: 2 additions & 0 deletions tests/transformation/streamline/test_streamline_cnv.py
@@ -38,6 +38,7 @@
from qonnx.transformation.general import (
GiveReadableTensorNames,
GiveUniqueNodeNames,
GiveUniqueParameterTensors,
RemoveStaticGraphInputs,
RemoveUnusedTensors,
)
@@ -69,6 +70,7 @@ def test_streamline_cnv(size, wbits, abits):
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveUniqueParameterTensors())
model = model.transform(GiveReadableTensorNames())
model = model.transform(RemoveStaticGraphInputs())
# load one of the test vectors
2 changes: 2 additions & 0 deletions tests/transformation/streamline/test_streamline_fc.py
@@ -39,6 +39,7 @@
from qonnx.transformation.general import (
GiveReadableTensorNames,
GiveUniqueNodeNames,
GiveUniqueParameterTensors,
RemoveStaticGraphInputs,
RemoveUnusedTensors,
)
@@ -72,6 +73,7 @@ def test_streamline_fc(size, wbits, abits):
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveUniqueParameterTensors())
model = model.transform(GiveReadableTensorNames())
model = model.transform(RemoveStaticGraphInputs())
# load one of the test vectors
3 changes: 2 additions & 1 deletion tests/transformation/test_infer_data_layouts_cnv.py
@@ -35,7 +35,7 @@
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.bipolar_to_xnor import ConvertBipolarMatMulToXnorPopcount
from qonnx.transformation.fold_constants import FoldConstants
from qonnx.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames
from qonnx.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames, GiveUniqueParameterTensors
from qonnx.transformation.infer_data_layouts import InferDataLayouts
from qonnx.transformation.infer_shapes import InferShapes
from qonnx.transformation.lower_convs_to_matmul import LowerConvsToMatMul
@@ -57,6 +57,7 @@ def test_infer_data_layouts_cnv():
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveUniqueParameterTensors())
model = model.transform(GiveReadableTensorNames())
model = model.transform(Streamline())
model = model.transform(InferDataLayouts())