[Autotuner] Plotting utility + test #2394

Open · wants to merge 14 commits into master
13 changes: 13 additions & 0 deletions docs/user/InstructionsForAutoTuner.md
@@ -147,6 +147,19 @@ python3 distributed.py --design gcd --platform sky130hd \
sweep
```

#### Plot images

After running an AutoTuner experiment, you can generate graphs to better understand the results.
Each graph shows the progression of one of the following metrics over the course of the experiment:

- QoR
- Runtime per trial
- Clock Period
- Worst slack

```shell
python3 utils/plot.py --platform <platform> --design <design> --experiment <experiment-name>
```
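
The images are written as PNG files to `flow/reports/images/<platform>/<design>/<experiment>/`, with one time-series plot and one box plot per metric. For example (the experiment name below is only a placeholder):

```shell
# Hypothetical example: list the images produced for an experiment named "test-tune"
ls flow/reports/images/sky130hd/gcd/test-tune/
# qor.png  qor-boxplot.png  runtime.png  runtime-boxplot.png
# clk_period.png  clk_period-boxplot.png  worst_slack.png  worst_slack-boxplot.png
```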

### Google Cloud Platform (GCP) distribution with Ray

33 changes: 15 additions & 18 deletions flow/test/test_autotuner.sh
@@ -1,4 +1,6 @@
#!/usr/bin/env bash
DESIGN_NAME=${1:-gcd}
PLATFORM=${2:-nangate45}

# run the commands in ORFS root dir
echo "[INFO FLW-0029] Installing dependencies in virtual environment."
@@ -20,28 +22,23 @@ python3 -m unittest tools.AutoTuner.test.smoke_test_sweep.${PLATFORM}SweepSmokeT
echo "Running Autotuner smoke tests for --sample and --iteration."
python3 -m unittest tools.AutoTuner.test.smoke_test_sample_iteration.${PLATFORM}SampleIterationSmokeTest.test_sample_iteration

if [ "$PLATFORM" == "asap7" ] && [ "$DESIGN" == "gcd" ]; then
if [ "$PLATFORM" == "asap7" ] && [ "$DESIGN_NAME" == "gcd" ]; then
echo "Running Autotuner ref file test (only once)"
python3 -m unittest tools.AutoTuner.test.ref_file_check.RefFileCheck.test_files
fi

echo "Running Autotuner smoke algorithm & evaluation test"
python3 -m unittest tools.AutoTuner.test.smoke_test_algo_eval.${PLATFORM}AlgoEvalSmokeTest.test_algo_eval

# run this test last (because it modifies current path)
echo "Running Autotuner remote test"
if [ "$PLATFORM" == "asap7" ] && [ "$DESIGN" == "gcd" ]; then
# Get the directory of the current script
script_dir="$(dirname "${BASH_SOURCE[0]}")"
cd "$script_dir"/../../
latest_image=$(./etc/DockerTag.sh -dev)
echo "ORFS_VERSION=$latest_image" > ./tools/AutoTuner/.env
cd ./tools/AutoTuner
docker compose up --wait
docker compose exec ray-worker bash -c "cd /OpenROAD-flow-scripts/tools/AutoTuner/src/autotuner && \
python3 distributed.py --design gcd --platform asap7 --server 127.0.0.1 --port 10001 \
--config ../../../../flow/designs/asap7/gcd/autotuner.json tune --samples 1"
docker compose down -v --remove-orphans
echo "Running Autotuner plotting smoke test"
all_experiments=$(ls -d ./flow/logs/${PLATFORM}/${DESIGN_NAME}/*/)
if [ -z "$all_experiments" ]; then
echo "No experiments found for plotting"
exit 0
fi
all_experiments=$(basename -a $all_experiments)
for expt in $all_experiments; do
python3 tools/AutoTuner/src/autotuner/utils/plot.py \
--platform ${PLATFORM} \
--design ${DESIGN_NAME} \
--experiment $expt
done

exit $ret

Review discussion on this file:

Member:

At the beginning of this script, you are casting these values to uppercase, which does not match the path, since Linux is case-sensitive. From the CI:

02:32:36  Running Autotuner plotting smoke test
02:32:36  ls: cannot access './flow/logs/SKY130HD/gcd/*/': No such file or directory
02:32:36  No experiments found for plotting
02:32:36  + exit 0

Member:

If we call the plot script and do not find data to plot, this should be an error, right?

Contributor (Author):

Yup, these are 2 separate errors, will fix them.
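
A minimal sketch of how both issues raised above could be addressed is shown below. This is hypothetical, not the change committed in this PR, and it assumes the script keeps the uppercased `PLATFORM`/`DESIGN_NAME` values for the unittest class names while deriving lowercase copies for the case-sensitive log paths:

```shell
# Hypothetical fix sketch, not the author's actual change:
# derive lowercase copies for the case-sensitive flow/logs paths
platform_dir=$(echo "${PLATFORM}" | tr '[:upper:]' '[:lower:]')
design_dir=$(echo "${DESIGN_NAME}" | tr '[:upper:]' '[:lower:]')
all_experiments=$(ls -d ./flow/logs/${platform_dir}/${design_dir}/*/ 2>/dev/null)
if [ -z "$all_experiments" ]; then
    echo "No experiments found for plotting"
    exit 1  # fail the test instead of silently exiting 0
fi
```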
2 changes: 1 addition & 1 deletion flow/test/test_helper.sh
@@ -108,7 +108,7 @@ fi
if [ "${RUN_AUTOTUNER}" == "true" ]; then
set +x
echo "Start AutoTuner test."
./test/test_autotuner.sh
./test/test_autotuner.sh $DESIGN_NAME $PLATFORM
set -x
fi

1 change: 1 addition & 0 deletions tools/AutoTuner/requirements.txt
@@ -9,3 +9,4 @@ tensorboard>=2.14.0,<=2.16.2
protobuf==3.20.3
SQLAlchemy==1.4.17
urllib3<=1.26.15
matplotlib==3.10.0
192 changes: 192 additions & 0 deletions tools/AutoTuner/src/autotuner/utils/plot.py
@@ -0,0 +1,192 @@
import glob
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import re
import os
import argparse
import sys

# Only does plotting for AutoTunerBase variants
AT_REGEX = r"variant-AutoTunerBase-([\w-]+)-\w+"

cur_dir = os.path.dirname(os.path.abspath(__file__))
root_dir = os.path.join(cur_dir, "../../../../../")
os.chdir(root_dir)


def load_dir(dir: str) -> pd.DataFrame:
    """
    Load and merge progress, parameters, and metrics data from a specified directory.
    This function searches for `progress.csv`, `params.json`, and `metrics.json` files within the given directory,
    concatenates the data, and merges them into a single pandas DataFrame.
    Args:
        dir (str): The directory path containing the subdirectories with `progress.csv`, `params.json`, and `metrics.json` files.
    Returns:
        pd.DataFrame: A DataFrame containing the merged data from the progress, parameters, and metrics files.
    """

    # Concatenate progress DFs
    progress_csvs = glob.glob(f"{dir}/*/progress.csv")
    if len(progress_csvs) == 0:
        print("No progress.csv files found.")
        sys.exit(0)
    progress_df = pd.concat([pd.read_csv(f) for f in progress_csvs])

    # Concatenate params.json & metrics.json files
    params = []
    failed = []
    for params_fname in glob.glob(f"{dir}/*/params.json"):
        metrics_fname = params_fname.replace("params.json", "metrics.json")
        try:
            with open(params_fname, "r") as f:
                _dict = json.load(f)
                _dict["trial_id"] = re.search(AT_REGEX, params_fname).group(1)
            with open(metrics_fname, "r") as f:
                metrics = json.load(f)
            ws = metrics["finish"]["timing__setup__ws"]
            metrics["worst_slack"] = ws
            _dict.update(metrics)
            params.append(_dict)
        except Exception:
            failed.append(metrics_fname)
            continue

    # Merge all dataframes
    params_df = pd.DataFrame(params)
    try:
        progress_df = progress_df.merge(params_df, on="trial_id")
    except KeyError:
        print(
            "Unable to merge DFs due to missing trial_id in params.json (possibly due to failed trials)."
        )
        sys.exit(0)

    # Print failed files, if any (joined outside the f-string for Python < 3.12 compatibility)
    if failed:
        failed_list = "\n".join(failed)
        print(f"Failed to load {len(failed)} files:\n{failed_list}")
    return progress_df


def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """
    Preprocess the input DataFrame by renaming columns, removing unnecessary columns,
    filtering out invalid rows, and normalizing the timestamp.
    Args:
        df (pd.DataFrame): The input DataFrame to preprocess.
    Returns:
        pd.DataFrame: The preprocessed DataFrame with renamed columns, removed columns,
        filtered rows, and normalized timestamp.
    """

    cols_to_remove = [
        "done",
        "training_iteration",
        "date",
        "pid",
        "hostname",
        "node_ip",
        "time_since_restore",
        "time_total_s",
        "iterations_since_restore",
    ]
    rename_dict = {
        "time_this_iter_s": "runtime",
        "_SDC_CLK_PERIOD": "clk_period",
        "minimum": "qor",
    }
    try:
        df = df.rename(columns=rename_dict)
        df = df.drop(columns=cols_to_remove)
        df = df[df["qor"] != 9e99]
        df["timestamp"] -= df["timestamp"].min()
        return df
    except KeyError as e:
        print(
            f"KeyError: {e} in the DataFrame. Dataframe does not contain necessary columns."
        )
        sys.exit(0)


def plot(df: pd.DataFrame, key: str, dir: str):
    """
    Plots a scatter plot with a linear fit and a box plot for a specified key from a DataFrame.
    Args:
        df (pd.DataFrame): The DataFrame containing the data to plot.
        key (str): The column name in the DataFrame to plot.
        dir (str): The directory where the plots will be saved. The directory must exist.
    Returns:
        None
    """

    assert os.path.exists(dir), f"Directory {dir} does not exist."
    # Plot box plot and time series plot for key
    fig, ax = plt.subplots(1, figsize=(15, 10))
    ax.scatter(df["timestamp"], df[key])
    ax.set_xlabel("Time (s)")
    ax.set_ylabel(key)
    ax.set_title(f"{key} vs Time")

    try:
        coeff = np.polyfit(df["timestamp"], df[key], 1)
        poly_func = np.poly1d(coeff)
        ax.plot(
            df["timestamp"],
            poly_func(df["timestamp"]),
            "r--",
            label=f"y={coeff[0]:.2f}x+{coeff[1]:.2f}",
        )
        ax.legend()
    except np.linalg.LinAlgError:
        print("Cannot fit a line to the data, plotting only scatter plot.")

    fig.savefig(f"{dir}/{key}.png")

    plt.figure(figsize=(15, 10))
    plt.boxplot(df[key])
    plt.ylabel(key)
    plt.title(f"{key} Boxplot")
    plt.savefig(f"{dir}/{key}-boxplot.png")


def main(platform: str, design: str, experiment: str):
    """
    Main function to process results from a specified directory and plot the results.
    Args:
        platform (str): The platform name.
        design (str): The design name.
        experiment (str): The experiment name.
    Returns:
        None
    """

    results_dir = os.path.join(
        root_dir, f"./flow/logs/{platform}/{design}/{experiment}"
    )
    img_dir = os.path.join(
        root_dir, f"./flow/reports/images/{platform}/{design}/{experiment}"
    )
    print("Processing results from", results_dir)
    os.makedirs(img_dir, exist_ok=True)
    df = load_dir(results_dir)
    df = preprocess(df)
    keys = ["qor", "runtime", "clk_period", "worst_slack"]

    # Plot only if more than one entry
    if len(df) < 2:
        print("Less than 2 entries, skipping plotting.")
        sys.exit(0)
    for key in keys:
        plot(df, key, img_dir)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Plot AutoTuner results.")
    parser.add_argument("--platform", type=str, help="Platform name.", required=True)
    parser.add_argument("--design", type=str, help="Design name.", required=True)
    parser.add_argument(
        "--experiment", type=str, help="Experiment name.", required=True
    )
    args = parser.parse_args()
    main(platform=args.platform, design=args.design, experiment=args.experiment)
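
For reference, a hypothetical invocation matching how the smoke test above calls this script, run from the ORFS root. The experiment name is a placeholder and must match a directory under `flow/logs/<platform>/<design>/` containing the per-trial subdirectories with the `progress.csv`, `params.json`, and `metrics.json` files that `load_dir` expects:

```shell
# Hypothetical example; "gcd-tune" stands in for an existing experiment directory under
# flow/logs/asap7/gcd/ holding variant-AutoTunerBase-* trial subdirectories.
python3 tools/AutoTuner/src/autotuner/utils/plot.py \
    --platform asap7 \
    --design gcd \
    --experiment gcd-tune
```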