Internal issue #56 XML schema for validating to prevent DoS via large… #81

# This workflow will install Python dependencies and run tests with the specified Python version
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Benchmark for pandas 2.2
on:
  push:
    branches:
      - develop
      - develop-ref
      - feature_*
      - main_*
      - bugfix_*
      - issue_*
      - gha_*
    paths-ignore:
      - 'docs/**'
      - '.github/pull_request_template.md'
      - '.github/ISSUE_TEMPLATE/**'
      - '**/README.md'
      - '**/LICENSE.md'
  pull_request:
    types: [ opened, reopened, synchronize ]
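# Summary of the single job below: clone METcalcpy@develop, install the test
# dependencies with pandas pinned to 2.2, then run the pytest-benchmark suite
# in METreformat/test.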
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: [ "3.10" ]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }} for pandas 2.2 testing
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Retrieve METcalcpy repository develop branch
        run: |
          /usr/bin/git clone https://github.com/dtcenter/METcalcpy
          cd METcalcpy
          /usr/bin/git checkout develop
          python -m pip install -e .
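      # Note: `pip install -e .` above installs METcalcpy in editable mode from
      # the fresh clone, so its modules are importable during the benchmark run.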
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install "pytest>=7.1.1"
          python -m pip install pytest-benchmark
          python -m pip install netcdf4==1.6.2
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          python -m pip install pandas==2.2
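      # Optional sanity check (a hypothetical step, not in the original
      # workflow): fail fast if the pinned pandas version did not resolve.
      # - name: Verify pandas version
      #   run: python -c "import pandas; assert pandas.__version__.startswith('2.2'), pandas.__version__"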
      # Checking the branch name: not necessary, but useful when setting things up.
      # (Note: the `set-output` workflow command is deprecated in favor of $GITHUB_OUTPUT.)
      # - name: Extract branch name
      #   shell: bash
      #   run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
      #   id: extract_branch
      - name: Test with pytest with pandas 2.2
        run: |
          echo "GITHUB workspace: $GITHUB_WORKSPACE"
          export PYTHONPATH=$GITHUB_WORKSPACE/:$GITHUB_WORKSPACE/METdbLoad:$GITHUB_WORKSPACE/METdbLoad/ush:$GITHUB_WORKSPACE/METreformat:$GITHUB_WORKSPACE/METreadnc
          echo "PYTHONPATH is $PYTHONPATH"
          cd $GITHUB_WORKSPACE/METreformat/test
          pytest run_benchmark.py -v -s --benchmark-autosave
          echo "Finished benchmark for pandas 2.2"
      # - name: Archive benchmark results
      #   uses: actions/upload-artifact@v4
      #   with:
      #     name: benchmark-report
      #     path: /home/runner/work/METdataio/METdataio/METreformat/test/**
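      # If re-enabled, a workspace-relative path is likely more robust than the
      # absolute runner path above (untested suggestion):
      #   path: ${{ github.workspace }}/METreformat/test/.benchmarks/**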