From c3381c0f8241833f0e4bace50066316693a5ed25 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 22:59:51 -0400 Subject: [PATCH] remove redundant doc and add tutorial (#1004) * remove redundant doc and add tutorial * add demos for pydata2023 * Update pydata23 docs * remove redundant notebooks * Move tutorial notebooks to notebook folder * update readme and notebook links * update notebook links * update links * update readme --------- Co-authored-by: Li Jiang Co-authored-by: Li Jiang --- docs/Makefile | 20 - docs/conf.py | 60 - docs/index.rst | 45 - docs/make.bat | 35 - notebook/automl_bankrupt_synapseml.ipynb | 2674 ++++++++++++++++++++++ notebook/automl_flight_delays.ipynb | 2443 ++++++++++++++++++++ notebook/tune_synapseml.ipynb | 1109 +++++++++ tutorials/README.md | 4 + tutorials/flaml-tutorial-aaai-23.md | 67 + tutorials/flaml-tutorial-kdd-22.md | 48 + tutorials/flaml-tutorial-pydata-23.md | 40 + 11 files changed, 6385 insertions(+), 160 deletions(-) delete mode 100644 docs/Makefile delete mode 100644 docs/conf.py delete mode 100644 docs/index.rst delete mode 100644 docs/make.bat create mode 100644 notebook/automl_bankrupt_synapseml.ipynb create mode 100644 notebook/automl_flight_delays.ipynb create mode 100644 notebook/tune_synapseml.ipynb create mode 100644 tutorials/README.md create mode 100644 tutorials/flaml-tutorial-aaai-23.md create mode 100644 tutorials/flaml-tutorial-kdd-22.md create mode 100644 tutorials/flaml-tutorial-pydata-23.md diff --git a/docs/Makefile b/docs/Makefile deleted file mode 100644 index d4bb2cbb9ed..00000000000 --- a/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line, and also -# from the environment for the first two. -SPHINXOPTS ?= -SPHINXBUILD ?= sphinx-build -SOURCEDIR = . -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". 
-help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/docs/conf.py b/docs/conf.py deleted file mode 100644 index 53fcf51cb30..00000000000 --- a/docs/conf.py +++ /dev/null @@ -1,60 +0,0 @@ -# Configuration file for the Sphinx documentation builder. -# -# This file only contains a selection of the most common options. For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -# import os -# import sys -# sys.path.insert(0, os.path.abspath('.')) - - -# -- Project information ----------------------------------------------------- - -project = "FLAML" -copyright = "2020-2021, FLAML Team" -author = "FLAML Team" - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [ - "sphinx.ext.autodoc", - "sphinx.ext.napoleon", - "sphinx.ext.doctest", - "sphinx.ext.coverage", - "sphinx.ext.mathjax", - "sphinx.ext.viewcode", - "sphinx.ext.githubpages", - "sphinx_rtd_theme", -] - -# Add any paths that contain templates here, relative to this directory. -templates_path = ["_templates"] - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. 
-# This pattern also affects html_static_path and html_extra_path. -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] - - -# -- Options for HTML output ------------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = "sphinx_rtd_theme" - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ["_static"] diff --git a/docs/index.rst b/docs/index.rst deleted file mode 100644 index 1cc75f09a88..00000000000 --- a/docs/index.rst +++ /dev/null @@ -1,45 +0,0 @@ -.. FLAML documentation master file, created by - sphinx-quickstart on Mon Dec 14 23:33:24 2020. - You can adapt this file completely to your liking, but it should at least - contain the root `toctree` directive. - -.. Welcome to FLAML's documentation! -.. ================================= - -.. .. toctree:: -.. :maxdepth: 2 -.. :caption: Contents: - - -FLAML API Documentation -======================= - -AutoML ------- - -.. autoclass:: flaml.AutoML - :members: - - -Tune ----- - -.. autofunction:: flaml.tune.run - -.. autofunction:: flaml.tune.report - -.. autoclass:: flaml.BlendSearch - :members: - -.. autoclass:: flaml.CFO - :members: - -.. autoclass:: flaml.FLOW2 - :members: - - -Online AutoML -------------- - -.. autoclass:: flaml.AutoVW - :members: diff --git a/docs/make.bat b/docs/make.bat deleted file mode 100644 index 2119f51099b..00000000000 --- a/docs/make.bat +++ /dev/null @@ -1,35 +0,0 @@ -@ECHO OFF - -pushd %~dp0 - -REM Command file for Sphinx documentation - -if "%SPHINXBUILD%" == "" ( - set SPHINXBUILD=sphinx-build -) -set SOURCEDIR=. -set BUILDDIR=_build - -if "%1" == "" goto help - -%SPHINXBUILD% >NUL 2>NUL -if errorlevel 9009 ( - echo. - echo.The 'sphinx-build' command was not found. 
Make sure you have Sphinx - echo.installed, then set the SPHINXBUILD environment variable to point - echo.to the full path of the 'sphinx-build' executable. Alternatively you - echo.may add the Sphinx directory to PATH. - echo. - echo.If you don't have Sphinx installed, grab it from - echo.http://sphinx-doc.org/ - exit /b 1 -) - -%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% -goto end - -:help -%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% - -:end -popd diff --git a/notebook/automl_bankrupt_synapseml.ipynb b/notebook/automl_bankrupt_synapseml.ipynb new file mode 100644 index 00000000000..52b76a63fd1 --- /dev/null +++ b/notebook/automl_bankrupt_synapseml.ipynb @@ -0,0 +1,2674 @@ +{ + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# FLAML AutoML on Apache Spark \n", + "\n", + "| | | |\n", + "|-----|-----|-----|\n", + "|![synapse](https://microsoft.github.io/SynapseML/img/logo.svg)| \"drawing\" | ![image-alt-text](https://th.bing.com/th/id/OIP.5aNnFabBKoYIYhoTrNc_CAHaHa?w=174&h=180&c=7&r=0&o=5&pid=1.7)| \n", + "\n", + "\n", + "\n", + "### Goal\n", + "Build and tune classification models for company bankruptcy prediction with FLAML and SynapseML on Apache Spark.\n", + "\n", + "## 1. Introduction\n", + "\n", + "### FLAML\n", + "FLAML is a Python library (https://github.com/microsoft/FLAML) designed to automatically produce accurate machine learning models \n", + "with low computational cost. It is fast and economical. The simple and lightweight design makes it easy \n", + "to use and extend, such as adding new learners. FLAML can \n", + "- serve as an economical AutoML engine,\n", + "- be used as a fast hyperparameter tuning tool, or \n", + "- be embedded in self-tuning software that requires low latency & resources in repetitive\n", + " tuning tasks.\n", + "\n", + "In this notebook, we demonstrate how to use the FLAML library to run AutoML for SynapseML models on Apache Spark dataframes. We also compare the results of FLAML AutoML with those of the default SynapseML models. 
\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "jupyter": { + "outputs_hidden": true, + "source_hidden": false + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:49:35.7617208Z", + "execution_start_time": "2023-04-19T00:49:35.7615143Z", + "livy_statement_state": "available", + "parent_msg_id": "aada545e-b4b9-4f61-b8f0-0921580f4c4c", + "queued_time": "2023-04-19T00:41:29.8670317Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "finished", + "statement_id": -1 + }, + "text/plain": [ + "StatementMeta(, 27, -1, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": {}, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Collecting flaml[synapse]@ git+https://github.com/microsoft/FLAML.git\n", + " Cloning https://github.com/microsoft/FLAML.git to /tmp/pip-install-9bp9bnbp/flaml_f9ddffb8b30b4c1aaffd650b9b9ac29a\n", + " Running command git clone --filter=blob:none --quiet https://github.com/microsoft/FLAML.git /tmp/pip-install-9bp9bnbp/flaml_f9ddffb8b30b4c1aaffd650b9b9ac29a\n", + " Resolved https://github.com/microsoft/FLAML.git to commit 99bb0a8425a58a537ae34347c867b4bc05310471\n", + " Preparing metadata (setup.py) ... 
done\n", + "Successfully built pyspark flaml pyperclip\n", + "Installing collected packages: wcwidth, pytz, pyperclip, py4j, zipp, wheel, typing-extensions, tqdm, threadpoolctl, six, PyYAML, pyspark, PrettyTable, pbr, packaging, numpy, MarkupSafe, joblib, greenlet, colorlog, autopage, attrs, stevedore, sqlalchemy, scipy, python-dateutil, Mako, joblibspark, importlib-resources, importlib-metadata, cmd2, cmaes, xgboost, scikit-learn, pandas, cliff, alembic, optuna, lightgbm, flaml\n", + "ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n", + "tensorflow 2.4.1 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.\n", + "tensorflow 2.4.1 requires typing-extensions~=3.7.4, but you have typing-extensions 4.5.0 which is incompatible.\n", + "pmdarima 1.8.2 requires numpy~=1.19.0, but you have numpy 1.23.4 which is incompatible.\n", + "koalas 1.8.0 requires numpy<1.20.0,>=1.14, but you have numpy 1.23.4 which is incompatible.\n", + "gevent 21.1.2 requires greenlet<2.0,>=0.4.17; platform_python_implementation == \"CPython\", but you have greenlet 2.0.2 which is incompatible.\n", + "Successfully installed Mako-1.2.4 MarkupSafe-2.1.2 PrettyTable-3.7.0 PyYAML-6.0 alembic-1.10.3 attrs-23.1.0 autopage-0.5.1 cliff-4.2.0 cmaes-0.9.1 cmd2-2.4.3 colorlog-6.7.0 flaml-1.2.1 greenlet-2.0.2 importlib-metadata-6.5.0 importlib-resources-5.12.0 joblib-1.2.0 joblibspark-0.5.1 lightgbm-3.3.5 numpy-1.23.4 optuna-2.8.0 packaging-23.1 pandas-1.5.1 pbr-5.11.1 py4j-0.10.9.7 pyperclip-1.8.2 pyspark-3.4.0 python-dateutil-2.8.2 pytz-2023.3 scikit-learn-1.2.2 scipy-1.10.1 six-1.16.0 sqlalchemy-2.0.9 stevedore-5.0.0 threadpoolctl-3.1.0 tqdm-4.65.0 typing-extensions-4.5.0 wcwidth-0.2.6 wheel-0.40.0 xgboost-1.6.1 zipp-3.15.0\n", + "WARNING: You are using pip version 22.0.4; however, version 23.1 is available.\n", + "You should consider upgrading via the '/nfs4/pyenv-8895058f-cb80-488b-b82d-c341dcde311f/bin/python -m pip install --upgrade pip' command.\n", + "Note: you may need to restart the kernel to use updated packages.\n" + ] + }, + { + "data": {}, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Warning: PySpark kernel has been restarted to use updated packages.\n", + "\n" + ] + } + ], + "source": [ + "%pip install flaml[synapse]==1.2.1 xgboost==1.6.1 pandas==1.5.1 numpy==1.23.4 
--force-reinstall" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Uncomment `spark = _init_spark()` if running in a local Spark environment." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def _init_spark():\n", + " import pyspark\n", + "\n", + " spark = (\n", + " pyspark.sql.SparkSession.builder.appName(\"MyApp\")\n", + " .master(\"local[2]\")\n", + " .config(\n", + " \"spark.jars.packages\",\n", + " (\n", + " \"com.microsoft.azure:synapseml_2.12:0.10.2,\"\n", + " \"org.apache.hadoop:hadoop-azure:3.3.5,\"\n", + " \"com.microsoft.azure:azure-storage:8.6.6\"\n", + " ),\n", + " )\n", + " .config(\"spark.jars.repositories\", \"https://mmlspark.azureedge.net/maven\")\n", + " .config(\"spark.sql.debug.maxToStringFields\", \"100\")\n", + " .getOrCreate()\n", + " )\n", + " return spark\n", + "\n", + "# spark = _init_spark()" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:49:38.7324858Z", + "execution_start_time": "2023-04-19T00:49:38.4750792Z", + "livy_statement_state": "available", + "parent_msg_id": "fa770a66-05ff-46d0-81b3-3f21c6be1ecd", + "queued_time": "2023-04-19T00:41:29.8741671Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 8 + }, + "text/plain": [ + "StatementMeta(automl, 27, 8, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "spark.conf.set(\"spark.sql.execution.arrow.pyspark.enabled\", \"false\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + 
"## Demo overview\n", + "In this example, we use FLAML & Apache Spark to build a classification model in order to predict bankruptcy.\n", + "1. **Tune**: Given an Apache Spark dataframe, we can use FLAML to tune a SynapseML Spark-based model.\n", + "2. **AutoML**: Given an Apache Spark dataframe, we can run AutoML to find the best classification model given our constraints.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 2. Load data and preprocess" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:50:12.8686555Z", + "execution_start_time": "2023-04-19T00:49:39.0071841Z", + "livy_statement_state": "available", + "parent_msg_id": "f4fddcb8-daa9-4e51-82df-a026ad09848d", + "queued_time": "2023-04-19T00:41:29.8758509Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 9 + }, + "text/plain": [ + "StatementMeta(automl, 27, 9, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "records read: 6819\n" + ] + } + ], + "source": [ + "df = (\n", + " spark.read.format(\"csv\")\n", + " .option(\"header\", True)\n", + " .option(\"inferSchema\", True)\n", + " .load(\n", + " \"wasbs://publicwasb@mmlspark.blob.core.windows.net/company_bankruptcy_prediction_data.csv\"\n", + " )\n", + ")\n", + "# print dataset size\n", + "print(\"records read: \" + str(df.count()))" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": 
"2023-04-19T00:50:17.1147492Z", + "execution_start_time": "2023-04-19T00:50:13.1478957Z", + "livy_statement_state": "available", + "parent_msg_id": "c3124278-a1fc-4678-ab90-8c1c61b252ed", + "queued_time": "2023-04-19T00:41:29.8770146Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 10 + }, + "text/plain": [ + "StatementMeta(automl, 27, 10, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "application/vnd.synapse.widget-view+json": { + "widget_id": "27e3f6a9-6707-4f94-93cf-05ea98845414", + "widget_type": "Synapse.DataFrame" + }, + "text/plain": [ + "SynapseWidget(Synapse.DataFrame, 27e3f6a9-6707-4f94-93cf-05ea98845414)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "display(df)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Split the dataset into train and test" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:55:34.297498Z", + "execution_start_time": "2023-04-19T00:55:34.0061545Z", + "livy_statement_state": "available", + "parent_msg_id": "b7b9be0c-e8cb-4229-a2fb-95f5e0a9bd8f", + "queued_time": "2023-04-19T00:55:33.7779796Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 25 + }, + "text/plain": [ + "StatementMeta(automl, 27, 25, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "train_raw, test_raw = df.randomSplit([0.8, 0.2], seed=41)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Add featurizer to convert features to vector" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + 
"data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:55:49.7837815Z", + "execution_start_time": "2023-04-19T00:55:49.5176322Z", + "livy_statement_state": "available", + "parent_msg_id": "faa6ab52-b98d-4e32-b569-ee27c282ff6e", + "queued_time": "2023-04-19T00:55:49.2823774Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 26 + }, + "text/plain": [ + "StatementMeta(automl, 27, 26, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from pyspark.ml.feature import VectorAssembler\n", + "\n", + "feature_cols = df.columns[1:]\n", + "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", + "train_data = featurizer.transform(train_raw)[\"Bankrupt?\", \"features\"]\n", + "test_data = featurizer.transform(test_raw)[\"Bankrupt?\", \"features\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Default SynapseML LightGBM" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:56:14.2639565Z", + "execution_start_time": "2023-04-19T00:55:53.757847Z", + "livy_statement_state": "available", + "parent_msg_id": "29d11dfb-a2ef-4a1e-9dc6-d41d832e83ed", + "queued_time": "2023-04-19T00:55:53.5050188Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 27 + }, + "text/plain": [ + "StatementMeta(automl, 27, 27, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from synapse.ml.lightgbm import LightGBMClassifier\n", + "\n", + "model = LightGBMClassifier(\n", + " objective=\"binary\", featuresCol=\"features\", labelCol=\"Bankrupt?\", isUnbalance=True\n", + 
")\n", + "\n", + "model = model.fit(train_data)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Model Prediction" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:56:19.165521Z", + "execution_start_time": "2023-04-19T00:56:14.5127236Z", + "livy_statement_state": "available", + "parent_msg_id": "27aa0ad6-99e5-489f-ab26-b26b1f10834e", + "queued_time": "2023-04-19T00:55:56.0549337Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 28 + }, + "text/plain": [ + "StatementMeta(automl, 27, 28, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "+---------------+--------------------+------------------+-------------------+------------------+------------------+\n", + "|evaluation_type| confusion_matrix| accuracy| precision| recall| AUC|\n", + "+---------------+--------------------+------------------+-------------------+------------------+------------------+\n", + "| Classification|1253.0 20.0 \\n2...|0.9627942293090357|0.42857142857142855|0.3409090909090909|0.6625990859101621|\n", + "+---------------+--------------------+------------------+-------------------+------------------+------------------+\n", + "\n" + ] + } + ], + "source": [ + "def predict(model, test_data=test_data):\n", + " from synapse.ml.train import ComputeModelStatistics\n", + "\n", + " predictions = model.transform(test_data)\n", + " \n", + " metrics = ComputeModelStatistics(\n", + " evaluationMetric=\"classification\",\n", + " labelCol=\"Bankrupt?\",\n", + " scoredLabelsCol=\"prediction\",\n", + " ).transform(predictions)\n", + " return metrics\n", + "\n", + "default_metrics = predict(model)\n", + 
"default_metrics.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## Run FLAML Tune" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:56:19.7604089Z", + "execution_start_time": "2023-04-19T00:56:19.4650633Z", + "livy_statement_state": "available", + "parent_msg_id": "22ff4c92-83c4-433e-8525-4ecb193c7d4e", + "queued_time": "2023-04-19T00:55:59.6397744Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 29 + }, + "text/plain": [ + "StatementMeta(automl, 27, 29, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "train_data_sub, val_data_sub = train_data.randomSplit([0.8, 0.2], seed=41)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:50:56.2968207Z", + "execution_start_time": "2023-04-19T00:50:56.0058549Z", + "livy_statement_state": "available", + "parent_msg_id": "f0106eec-a889-4e51-86b2-ea899afb7612", + "queued_time": "2023-04-19T00:41:29.8989617Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 16 + }, + "text/plain": [ + "StatementMeta(automl, 27, 16, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "def 
train(lambdaL1, learningRate, numLeaves, numIterations, train_data=train_data_sub, val_data=val_data_sub):\n", + " \"\"\"\n", + " This train() function:\n", + " - takes hyperparameters as inputs (for tuning later)\n", + " - returns the AUC score on the validation dataset\n", + "\n", + " Wrapping code as a function makes it easier to reuse the code later for tuning.\n", + " \"\"\"\n", + "\n", + " lgc = LightGBMClassifier(\n", + " objective=\"binary\",\n", + " lambdaL1=lambdaL1,\n", + " learningRate=learningRate,\n", + " numLeaves=numLeaves,\n", + " labelCol=\"Bankrupt?\",\n", + " numIterations=numIterations,\n", + " isUnbalance=True,\n", + " featuresCol=\"features\",\n", + " )\n", + "\n", + " model = lgc.fit(train_data)\n", + "\n", + " # Define an evaluation metric and evaluate the model on the validation dataset.\n", + " eval_metric = predict(model, val_data)\n", + " eval_metric = eval_metric.toPandas()['AUC'][0]\n", + "\n", + " return model, eval_metric" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": { + "jupyter": { + "outputs_hidden": true, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:56:20.3156028Z", + "execution_start_time": "2023-04-19T00:56:20.0366204Z", + "livy_statement_state": "available", + "parent_msg_id": "c5c60e40-1edf-4d4f-a106-77ac86ba288c", + "queued_time": "2023-04-19T00:56:07.4221398Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 30 + }, + "text/plain": [ + "StatementMeta(automl, 27, 30, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import flaml\n", + "import time\n", + "\n", + "# define the search space\n", + "params = {\n", + " \"lambdaL1\": flaml.tune.uniform(0.001, 1),\n", + " 
\"learningRate\": flaml.tune.uniform(0.001, 1),\n", + " \"numLeaves\": flaml.tune.randint(30, 100),\n", + " \"numIterations\": flaml.tune.randint(100, 300),\n", + "}\n", + "\n", + "# define the tune function\n", + "def flaml_tune(config):\n", + " _, metric = train(**config)\n", + " return {\"auc\": metric}" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:57:20.6355868Z", + "execution_start_time": "2023-04-19T00:56:20.5770855Z", + "livy_statement_state": "available", + "parent_msg_id": "ea4962b9-33e8-459b-8b6f-acb4ae7a13d8", + "queued_time": "2023-04-19T00:56:10.1336409Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 31 + }, + "text/plain": [ + "StatementMeta(automl, 27, 31, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[flaml.tune.tune: 04-19 00:56:20] {508} INFO - Using search algorithm BlendSearch.\n", + "No low-cost partial config given to the search algorithm. For cost-frugal search, consider providing low-cost values for cost-related hps via 'low_cost_partial_config'. More info can be found at https://microsoft.github.io/FLAML/docs/FAQ#about-low_cost_partial_config-in-tune\n", + "You passed a `space` parameter to OptunaSearch that contained unresolved search space definitions. OptunaSearch should however be instantiated with fully configured search spaces only. 
To use Ray Tune's automatic search space conversion, pass the space definition as part of the `config` argument to `tune.run()` instead.\n", + "[flaml.tune.tune: 04-19 00:56:20] {777} INFO - trial 1 config: {'lambdaL1': 0.09833464080607023, 'learningRate': 0.64761881525086, 'numLeaves': 30, 'numIterations': 172}\n", + "[flaml.tune.tune: 04-19 00:56:46] {197} INFO - result: {'auc': 0.7350263891359782, 'training_iteration': 0, 'config': {'lambdaL1': 0.09833464080607023, 'learningRate': 0.64761881525086, 'numLeaves': 30, 'numIterations': 172}, 'config/lambdaL1': 0.09833464080607023, 'config/learningRate': 0.64761881525086, 'config/numLeaves': 30, 'config/numIterations': 172, 'experiment_tag': 'exp', 'time_total_s': 25.78124713897705}\n", + "[flaml.tune.tune: 04-19 00:56:46] {777} INFO - trial 2 config: {'lambdaL1': 0.7715493226234792, 'learningRate': 0.021731197410042098, 'numLeaves': 74, 'numIterations': 249}\n", + "[flaml.tune.tune: 04-19 00:57:19] {197} INFO - result: {'auc': 0.7648994840775662, 'training_iteration': 0, 'config': {'lambdaL1': 0.7715493226234792, 'learningRate': 0.021731197410042098, 'numLeaves': 74, 'numIterations': 249}, 'config/lambdaL1': 0.7715493226234792, 'config/learningRate': 0.021731197410042098, 'config/numLeaves': 74, 'config/numIterations': 249, 'experiment_tag': 'exp', 'time_total_s': 33.43822383880615}\n", + "[flaml.tune.tune: 04-19 00:57:19] {777} INFO - trial 3 config: {'lambdaL1': 0.49900850529028784, 'learningRate': 0.2255718488853168, 'numLeaves': 43, 'numIterations': 252}\n", + "\n" + ] + } + ], + "source": [ + "analysis = flaml.tune.run(\n", + " flaml_tune,\n", + " params,\n", + " time_budget_s=60,\n", + " num_samples=100,\n", + " metric=\"auc\",\n", + " mode=\"max\",\n", + " verbose=5,\n", + " force_cancel=True,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "Best config and metric on validation data" + ] + }, + { + "cell_type": 
"code", + "execution_count": 26, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:57:21.2098285Z", + "execution_start_time": "2023-04-19T00:57:20.9439827Z", + "livy_statement_state": "available", + "parent_msg_id": "e99f17e0-cd3e-4292-bc10-180386aaf810", + "queued_time": "2023-04-19T00:56:15.0604124Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 32 + }, + "text/plain": [ + "StatementMeta(automl, 27, 32, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Best config: {'lambdaL1': 0.7715493226234792, 'learningRate': 0.021731197410042098, 'numLeaves': 74, 'numIterations': 249}\n", + "Best metrics on validation data: {'auc': 0.7648994840775662, 'training_iteration': 0, 'config': {'lambdaL1': 0.7715493226234792, 'learningRate': 0.021731197410042098, 'numLeaves': 74, 'numIterations': 249}, 'config/lambdaL1': 0.7715493226234792, 'config/learningRate': 0.021731197410042098, 'config/numLeaves': 74, 'config/numIterations': 249, 'experiment_tag': 'exp', 'time_total_s': 33.43822383880615}\n" + ] + } + ], + "source": [ + "tune_config = analysis.best_config\n", + "tune_metrics_val = analysis.best_result\n", + "print(\"Best config: \", tune_config)\n", + "print(\"Best metrics on validation data: \", tune_metrics_val)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "Retrain model on whole train_data and check metrics on test_data" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, 
+ "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:58:23.0787571Z", + "execution_start_time": "2023-04-19T00:57:21.4709435Z", + "livy_statement_state": "available", + "parent_msg_id": "35edd709-9c68-4646-8a8f-e757fae8a919", + "queued_time": "2023-04-19T00:56:18.2245009Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 33 + }, + "text/plain": [ + "StatementMeta(automl, 27, 33, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "+---------------+--------------------+------------------+------------------+-------------------+------------------+\n", + "|evaluation_type| confusion_matrix| accuracy| precision| recall| AUC|\n", + "+---------------+--------------------+------------------+------------------+-------------------+------------------+\n", + "| Classification|1247.0 26.0 \\n2...|0.9597570235383447|0.3953488372093023|0.38636363636363635|0.6829697207741198|\n", + "+---------------+--------------------+------------------+------------------+-------------------+------------------+\n", + "\n" + ] + } + ], + "source": [ + "tune_model, tune_metrics = train(train_data=train_data, val_data=test_data, **tune_config)\n", + "tune_metrics = predict(tune_model)\n", + "tune_metrics.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run FLAML AutoML\n", + "In the FLAML AutoML run configuration, users can specify the task type, time budget, error metric, learner list, whether to subsample, resampling strategy type, and so on. All these arguments have default values which will be used if users do not provide them. 
" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:58:23.596951Z", + "execution_start_time": "2023-04-19T00:58:23.3265305Z", + "livy_statement_state": "available", + "parent_msg_id": "339c4992-4670-4593-a297-e08970e8ef34", + "queued_time": "2023-04-19T00:56:23.3561861Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 34 + }, + "text/plain": [ + "StatementMeta(automl, 27, 34, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "''' import AutoML class from the FLAML package '''\n", + "from flaml import AutoML\n", + "from flaml.automl.spark.utils import to_pandas_on_spark\n", + "\n", + "automl = AutoML()" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:58:24.1706079Z", + "execution_start_time": "2023-04-19T00:58:23.8891255Z", + "livy_statement_state": "available", + "parent_msg_id": "ab1eeb7b-d8fc-4917-9b0d-0e9e05778e6b", + "queued_time": "2023-04-19T00:56:26.0836197Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 35 + }, + "text/plain": [ + "StatementMeta(automl, 27, 35, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import os\n", + "settings = {\n", + " \"time_budget\": 60, # total running time in seconds\n", + " \"metric\": 'roc_auc',\n", + " \"task\": 'classification', # task type\n", + " \"log_file_name\": 'flaml_experiment.log', # flaml log file\n", + " \"seed\": 42, # random seed\n", + " \"force_cancel\": True, # force stop training once time_budget is used up\n", 
+ "}" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:58:24.6581809Z", + "execution_start_time": "2023-04-19T00:58:24.4054632Z", + "livy_statement_state": "available", + "parent_msg_id": "fad5e330-6ea9-4387-9da0-72090ee12857", + "queued_time": "2023-04-19T00:56:56.6277279Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 36 + }, + "text/plain": [ + "StatementMeta(automl, 27, 36, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [ + "pyspark.pandas.frame.DataFrame" + ] + }, + "execution_count": 61, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "df = to_pandas_on_spark(train_data)\n", + "\n", + "type(df)" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:59:23.5292768Z", + "execution_start_time": "2023-04-19T00:58:24.9037573Z", + "livy_statement_state": "available", + "parent_msg_id": "e85fc33c-0a39-4ec5-a18f-625e4e5991da", + "queued_time": "2023-04-19T00:57:11.2416765Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 37 + }, + "text/plain": [ + "StatementMeta(automl, 27, 37, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[flaml.automl.logger: 04-19 00:58:37] {1682} INFO - task = classification\n", + "[flaml.automl.logger: 04-19 00:58:37] {1689} INFO - Data split method: stratified\n", + "[flaml.automl.logger: 04-19 00:58:37] {1692} INFO - Evaluation method: cv\n", + 
"[flaml.automl.logger: 04-19 00:58:38] {1790} INFO - Minimizing error metric: 1-roc_auc\n", + "[flaml.automl.logger: 04-19 00:58:38] {1900} INFO - List of ML learners in AutoML Run: ['lgbm_spark']\n", + "[flaml.automl.logger: 04-19 00:58:38] {2210} INFO - iteration 0, current learner lgbm_spark\n", + "[flaml.automl.logger: 04-19 00:58:48] {2336} INFO - Estimated sufficient time budget=104269s. Estimated necessary time budget=104s.\n", + "[flaml.automl.logger: 04-19 00:58:48] {2383} INFO - at 23.9s,\testimator lgbm_spark's best error=0.1077,\tbest estimator lgbm_spark's best error=0.1077\n", + "[flaml.automl.logger: 04-19 00:58:48] {2210} INFO - iteration 1, current learner lgbm_spark\n", + "[flaml.automl.logger: 04-19 00:58:56] {2383} INFO - at 32.0s,\testimator lgbm_spark's best error=0.0962,\tbest estimator lgbm_spark's best error=0.0962\n", + "[flaml.automl.logger: 04-19 00:58:56] {2210} INFO - iteration 2, current learner lgbm_spark\n", + "[flaml.automl.logger: 04-19 00:59:05] {2383} INFO - at 40.2s,\testimator lgbm_spark's best error=0.0943,\tbest estimator lgbm_spark's best error=0.0943\n", + "[flaml.automl.logger: 04-19 00:59:05] {2210} INFO - iteration 3, current learner lgbm_spark\n", + "[flaml.automl.logger: 04-19 00:59:13] {2383} INFO - at 48.4s,\testimator lgbm_spark's best error=0.0760,\tbest estimator lgbm_spark's best error=0.0760\n", + "[flaml.automl.logger: 04-19 00:59:13] {2210} INFO - iteration 4, current learner lgbm_spark\n", + "[flaml.automl.logger: 04-19 00:59:21] {2383} INFO - at 56.5s,\testimator lgbm_spark's best error=0.0760,\tbest estimator lgbm_spark's best error=0.0760\n", + "[flaml.automl.logger: 04-19 00:59:22] {2619} INFO - retrain lgbm_spark for 0.9s\n", + "[flaml.automl.logger: 04-19 00:59:22] {2622} INFO - retrained model: LightGBMClassifier_b4bfafdbcfc1\n", + "[flaml.automl.logger: 04-19 00:59:22] {1930} INFO - fit succeeded\n", + "[flaml.automl.logger: 04-19 00:59:22] {1931} INFO - Time taken to find the best model: 
48.424041748046875\n" + ] + } + ], + "source": [ + "'''The main flaml automl API'''\n", + "automl.fit(dataframe=df, label='Bankrupt?', labelCol=\"Bankrupt?\", isUnbalance=True, **settings)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Best model and metric" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:59:24.0559557Z", + "execution_start_time": "2023-04-19T00:59:23.7839019Z", + "livy_statement_state": "available", + "parent_msg_id": "211f9184-8589-414a-a39e-33478b83aa4b", + "queued_time": "2023-04-19T00:57:13.8241448Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 38 + }, + "text/plain": [ + "StatementMeta(automl, 27, 38, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Best hyperparmeter config: {'numIterations': 12, 'numLeaves': 6, 'minDataInLeaf': 17, 'learningRate': 0.1444074361218993, 'log_max_bin': 6, 'featureFraction': 0.9006280463830675, 'lambdaL1': 0.0021638671012090007, 'lambdaL2': 0.8181940184285643}\n", + "Best roc_auc on validation data: 0.924\n", + "Training duration of best run: 0.8982 s\n" + ] + } + ], + "source": [ + "''' retrieve best config'''\n", + "print('Best hyperparmeter config:', automl.best_config)\n", + "print('Best roc_auc on validation data: {0:.4g}'.format(1-automl.best_loss))\n", + "print('Training duration of best run: {0:.4g} s'.format(automl.best_config_train_time))" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T00:59:26.6061075Z", + "execution_start_time": 
"2023-04-19T00:59:24.3019256Z", + "livy_statement_state": "available", + "parent_msg_id": "eb0a6089-adb2-4061-bf64-4e5c4cc228eb", + "queued_time": "2023-04-19T00:57:15.1750669Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 39 + }, + "text/plain": [ + "StatementMeta(automl, 27, 39, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "+---------------+--------------------+------------------+-------------------+------------------+------------------+\n", + "|evaluation_type| confusion_matrix| accuracy| precision| recall| AUC|\n", + "+---------------+--------------------+------------------+-------------------+------------------+------------------+\n", + "| Classification|1106.0 167.0 \\n...|0.8686408504176157|0.18536585365853658|0.8636363636363636|0.8662250946225809|\n", + "+---------------+--------------------+------------------+-------------------+------------------+------------------+\n", + "\n" + ] + } + ], + "source": [ + "automl_metrics = predict(automl.model.estimator)\n", + "automl_metrics.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## Use Apache Spark to Parallelize AutoML trials and tuning" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T01:10:17.2334202Z", + "execution_start_time": "2023-04-19T01:10:16.938071Z", + "livy_statement_state": "available", + "parent_msg_id": "380652fc-0702-4dff-ba1b-2a74237b414e", + "queued_time": "2023-04-19T01:10:16.7003095Z", + "session_id": "27", + 
"session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 44 + }, + "text/plain": [ + "StatementMeta(automl, 27, 44, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "settings = {\n", + " \"time_budget\": 60, # total running time in seconds\n", + " \"metric\": 'roc_auc', # primary metrics for regression can be chosen from: ['mae','mse','r2','rmse','mape']\n", + " \"task\": 'classification', # task type \n", + " \"seed\": 7654321, # random seed\n", + " \"use_spark\": True,\n", + " \"n_concurrent_trials\": 2,\n", + " \"force_cancel\": True,\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 39, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T01:10:18.9486035Z", + "execution_start_time": "2023-04-19T01:10:17.4782718Z", + "livy_statement_state": "available", + "parent_msg_id": "9729f077-c1b9-402e-96b9-4fcd9bc960b4", + "queued_time": "2023-04-19T01:10:16.7818706Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 45 + }, + "text/plain": [ + "StatementMeta(automl, 27, 45, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
<div>\n", + "<table border=\"1\" class=\"dataframe\">\n", + "  <thead>\n", + "    <tr style=\"text-align: right;\">\n", + "      <th></th>\n", + "      <th>Bankrupt?</th>\n", + "      <th>ROA(C) before interest and depreciation before interest</th>\n", + "      <th>ROA(A) before interest and % after tax</th>\n", + "      <th>ROA(B) before interest and depreciation after tax</th>\n", + "      <th>Operating Gross Margin</th>\n", + "      <th>Realized Sales Gross Margin</th>\n", + "      <th>Operating Profit Rate</th>\n", + "      <th>Pre-tax net Interest Rate</th>\n", + "      <th>After-tax net Interest Rate</th>\n", + "      <th>Non-industry income and expenditure/revenue</th>\n", + "      <th>...</th>\n", + "      <th>Net Income to Total Assets</th>\n", + "      <th>Total assets to GNP price</th>\n", + "      <th>No-credit Interval</th>\n", + "      <th>Gross Profit to Sales</th>\n", + "      <th>Net Income to Stockholder's Equity</th>\n", + "      <th>Liability to Equity</th>\n", + "      <th>Degree of Financial Leverage (DFL)</th>\n", + "      <th>Interest Coverage Ratio (Interest expense to EBIT)</th>\n", + "      <th>Net Income Flag</th>\n", + "      <th>Equity to Liability</th>\n", + "    </tr>\n", + "  </thead>\n", + "  <tbody>\n", + "    <tr><th>0</th><td>0</td><td>0.0828</td><td>0.0693</td><td>0.0884</td><td>0.6468</td><td>0.6468</td><td>0.9971</td><td>0.7958</td><td>0.8078</td><td>0.3047</td><td>...</td><td>0.0000</td><td>0.000000e+00</td><td>0.6237</td><td>0.6468</td><td>0.7483</td><td>0.2847</td><td>0.0268</td><td>0.5652</td><td>1.0</td><td>0.0199</td></tr>\n", + "    <tr><th>1</th><td>0</td><td>0.1606</td><td>0.1788</td><td>0.1832</td><td>0.5897</td><td>0.5897</td><td>0.9986</td><td>0.7969</td><td>0.8088</td><td>0.3034</td><td>...</td><td>0.5917</td><td>4.370000e+09</td><td>0.6236</td><td>0.5897</td><td>0.8023</td><td>0.2947</td><td>0.0268</td><td>0.5651</td><td>1.0</td><td>0.0151</td></tr>\n", + "    <tr><th>2</th><td>0</td><td>0.2040</td><td>0.2638</td><td>0.2598</td><td>0.4483</td><td>0.4483</td><td>0.9959</td><td>0.7937</td><td>0.8063</td><td>0.3034</td><td>...</td><td>0.6816</td><td>3.000000e-04</td><td>0.6221</td><td>0.4483</td><td>0.8117</td><td>0.3038</td><td>0.0268</td><td>0.5651</td><td>1.0</td><td>0.0136</td></tr>\n", + "    <tr><th>3</th><td>0</td><td>0.2170</td><td>0.1881</td><td>0.2451</td><td>0.5992</td><td>0.5992</td><td>0.9962</td><td>0.7940</td><td>0.8061</td><td>0.3034</td><td>...</td><td>0.6196</td><td>1.100000e-03</td><td>0.6236</td><td>0.5992</td><td>0.6346</td><td>0.4359</td><td>0.0268</td><td>0.5650</td><td>1.0</td><td>0.0108</td></tr>\n", + "    <tr><th>4</th><td>0</td><td>0.2314</td><td>0.1628</td><td>0.2068</td><td>0.6001</td><td>0.6001</td><td>0.9988</td><td>0.7960</td><td>0.8078</td><td>0.3015</td><td>...</td><td>0.5269</td><td>3.000000e-04</td><td>0.6241</td><td>0.6001</td><td>0.7985</td><td>0.2903</td><td>0.0268</td><td>0.5651</td><td>1.0</td><td>0.0164</td></tr>\n", + "  </tbody>\n", + "</table>\n", + "<p>5 rows × 96 columns</p>\n", + "</div>
" + ], + "text/plain": [ + " Bankrupt? ROA(C) before interest and depreciation before interest \\\n", + "0 0 0.0828 \n", + "1 0 0.1606 \n", + "2 0 0.2040 \n", + "3 0 0.2170 \n", + "4 0 0.2314 \n", + "\n", + " ROA(A) before interest and % after tax \\\n", + "0 0.0693 \n", + "1 0.1788 \n", + "2 0.2638 \n", + "3 0.1881 \n", + "4 0.1628 \n", + "\n", + " ROA(B) before interest and depreciation after tax \\\n", + "0 0.0884 \n", + "1 0.1832 \n", + "2 0.2598 \n", + "3 0.2451 \n", + "4 0.2068 \n", + "\n", + " Operating Gross Margin Realized Sales Gross Margin \\\n", + "0 0.6468 0.6468 \n", + "1 0.5897 0.5897 \n", + "2 0.4483 0.4483 \n", + "3 0.5992 0.5992 \n", + "4 0.6001 0.6001 \n", + "\n", + " Operating Profit Rate Pre-tax net Interest Rate \\\n", + "0 0.9971 0.7958 \n", + "1 0.9986 0.7969 \n", + "2 0.9959 0.7937 \n", + "3 0.9962 0.7940 \n", + "4 0.9988 0.7960 \n", + "\n", + " After-tax net Interest Rate Non-industry income and expenditure/revenue \\\n", + "0 0.8078 0.3047 \n", + "1 0.8088 0.3034 \n", + "2 0.8063 0.3034 \n", + "3 0.8061 0.3034 \n", + "4 0.8078 0.3015 \n", + "\n", + " ... Net Income to Total Assets Total assets to GNP price \\\n", + "0 ... 0.0000 0.000000e+00 \n", + "1 ... 0.5917 4.370000e+09 \n", + "2 ... 0.6816 3.000000e-04 \n", + "3 ... 0.6196 1.100000e-03 \n", + "4 ... 
0.5269 3.000000e-04 \n", + "\n", + " No-credit Interval Gross Profit to Sales \\\n", + "0 0.6237 0.6468 \n", + "1 0.6236 0.5897 \n", + "2 0.6221 0.4483 \n", + "3 0.6236 0.5992 \n", + "4 0.6241 0.6001 \n", + "\n", + " Net Income to Stockholder's Equity Liability to Equity \\\n", + "0 0.7483 0.2847 \n", + "1 0.8023 0.2947 \n", + "2 0.8117 0.3038 \n", + "3 0.6346 0.4359 \n", + "4 0.7985 0.2903 \n", + "\n", + " Degree of Financial Leverage (DFL) \\\n", + "0 0.0268 \n", + "1 0.0268 \n", + "2 0.0268 \n", + "3 0.0268 \n", + "4 0.0268 \n", + "\n", + " Interest Coverage Ratio (Interest expense to EBIT) Net Income Flag \\\n", + "0 0.5652 1.0 \n", + "1 0.5651 1.0 \n", + "2 0.5651 1.0 \n", + "3 0.5650 1.0 \n", + "4 0.5651 1.0 \n", + "\n", + " Equity to Liability \n", + "0 0.0199 \n", + "1 0.0151 \n", + "2 0.0136 \n", + "3 0.0108 \n", + "4 0.0164 \n", + "\n", + "[5 rows x 96 columns]" + ] + }, + "execution_count": 79, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "pandas_df = train_raw.toPandas()\n", + "pandas_df.head()" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T01:11:21.5981973Z", + "execution_start_time": "2023-04-19T01:10:19.220622Z", + "livy_statement_state": "available", + "parent_msg_id": "e496aa47-0677-4bec-a07d-d8d5cca778d1", + "queued_time": "2023-04-19T01:10:16.850107Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 46 + }, + "text/plain": [ + "StatementMeta(automl, 27, 46, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[flaml.automl.logger: 04-19 01:10:19] 
{1682} INFO - task = classification\n", + "[flaml.automl.logger: 04-19 01:10:19] {1689} INFO - Data split method: stratified\n", + "[flaml.automl.logger: 04-19 01:10:19] {1692} INFO - Evaluation method: holdout\n", + "[flaml.automl.logger: 04-19 01:10:19] {1790} INFO - Minimizing error metric: 1-roc_auc\n", + "[flaml.automl.logger: 04-19 01:10:19] {1900} INFO - List of ML learners in AutoML Run: ['lgbm', 'rf', 'xgboost', 'extra_tree', 'xgb_limitdepth', 'lrl1']\n", + "[flaml.tune.tune: 04-19 01:10:19] {701} INFO - Number of trials: 2/1000000, 2 RUNNING, 0 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:22] {721} INFO - Brief result: {'pred_time': 2.9629555301389834e-06, 'wall_clock_time': 2.9545514583587646, 'metric_for_logging': {'pred_time': 2.9629555301389834e-06}, 'val_loss': 0.04636121259998027, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:22] {721} INFO - Brief result: {'pred_time': 3.1378822050232817e-06, 'wall_clock_time': 3.278108596801758, 'metric_for_logging': {'pred_time': 3.1378822050232817e-06}, 'val_loss': 0.07953984398143588, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:22] {701} INFO - Number of trials: 4/1000000, 2 RUNNING, 2 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:22] {721} INFO - Brief result: {'pred_time': 2.1473221156908117e-05, 'wall_clock_time': 3.69093656539917, 'metric_for_logging': {'pred_time': 2.1473221156908117e-05}, 'val_loss': 0.07958921694480114, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:22] {721} INFO - Brief result: {'pred_time': 2.9629555301389834e-06, 'wall_clock_time': 3.3738858699798584, 'metric_for_logging': {'pred_time': 2.9629555301389834e-06}, 'val_loss': 0.16322701688555352, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:22] {701} INFO - Number of trials: 6/1000000, 2 RUNNING, 4 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:26] {721} INFO - Brief result: {'pred_time': 1.2473351713539898e-05, 'wall_clock_time': 5.134864568710327, 'metric_for_logging': 
{'pred_time': 1.2473351713539898e-05}, 'val_loss': 0.07889799545768739, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:26] {721} INFO - Brief result: {'pred_time': 3.4497267958046733e-06, 'wall_clock_time': 7.101134300231934, 'metric_for_logging': {'pred_time': 3.4497267958046733e-06}, 'val_loss': 0.44030808729139925, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:26] {701} INFO - Number of trials: 8/1000000, 2 RUNNING, 6 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:29] {721} INFO - Brief result: {'pred_time': 3.0635923579119253e-06, 'wall_clock_time': 9.885382890701294, 'metric_for_logging': {'pred_time': 3.0635923579119253e-06}, 'val_loss': 0.13049274217438533, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:29] {721} INFO - Brief result: {'pred_time': 4.074711730514747e-06, 'wall_clock_time': 7.192638874053955, 'metric_for_logging': {'pred_time': 4.074711730514747e-06}, 'val_loss': 0.0882294855337219, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:29] {701} INFO - Number of trials: 10/1000000, 2 RUNNING, 8 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:29] {721} INFO - Brief result: {'pred_time': 8.28418178834777e-06, 'wall_clock_time': 10.542565107345581, 'metric_for_logging': {'pred_time': 8.28418178834777e-06}, 'val_loss': 0.44030808729139925, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:29] {721} INFO - Brief result: {'pred_time': 2.766001051750736e-06, 'wall_clock_time': 9.972064971923828, 'metric_for_logging': {'pred_time': 2.766001051750736e-06}, 'val_loss': 0.1094598597807841, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:29] {701} INFO - Number of trials: 12/1000000, 2 RUNNING, 10 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:30] {721} INFO - Brief result: {'pred_time': 2.672274907430013e-06, 'wall_clock_time': 11.087923765182495, 'metric_for_logging': {'pred_time': 2.672274907430013e-06}, 'val_loss': 0.44030808729139925, 'trained_estimator': }\n", + "[flaml.tune.tune: 
04-19 01:10:30] {721} INFO - Brief result: {'pred_time': 3.64966150643169e-05, 'wall_clock_time': 11.1082124710083, 'metric_for_logging': {'pred_time': 3.64966150643169e-05}, 'val_loss': 0.44030808729139925, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:30] {701} INFO - Number of trials: 14/1000000, 2 RUNNING, 12 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:30] {721} INFO - Brief result: {'pred_time': 2.7305837990581123e-06, 'wall_clock_time': 11.226593255996704, 'metric_for_logging': {'pred_time': 2.7305837990581123e-06}, 'val_loss': 0.11671768539547744, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:30] {721} INFO - Brief result: {'pred_time': 1.1010878327964008e-05, 'wall_clock_time': 11.672830581665039, 'metric_for_logging': {'pred_time': 1.1010878327964008e-05}, 'val_loss': 0.44030808729139925, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:30] {701} INFO - Number of trials: 16/1000000, 2 RUNNING, 14 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:30] {721} INFO - Brief result: {'pred_time': 3.0679115350695625e-06, 'wall_clock_time': 11.811484813690186, 'metric_for_logging': {'pred_time': 3.0679115350695625e-06}, 'val_loss': 0.06685099239656356, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:30] {721} INFO - Brief result: {'pred_time': 2.525422884070355e-06, 'wall_clock_time': 11.753840208053589, 'metric_for_logging': {'pred_time': 2.525422884070355e-06}, 'val_loss': 0.051347881899871606, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:30] {701} INFO - Number of trials: 18/1000000, 2 RUNNING, 16 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:31] {721} INFO - Brief result: {'pred_time': 2.8243099433788355e-06, 'wall_clock_time': 11.905105590820312, 'metric_for_logging': {'pred_time': 2.8243099433788355e-06}, 'val_loss': 0.05124913597314107, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:31] {721} INFO - Brief result: {'pred_time': 5.950530370076497e-06, 'wall_clock_time': 
11.948493957519531, 'metric_for_logging': {'pred_time': 5.950530370076497e-06}, 'val_loss': 0.056778907870050355, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:31] {701} INFO - Number of trials: 20/1000000, 2 RUNNING, 18 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:31] {721} INFO - Brief result: {'pred_time': 2.7772309123605923e-06, 'wall_clock_time': 12.081507682800293, 'metric_for_logging': {'pred_time': 2.7772309123605923e-06}, 'val_loss': 0.04611434778315393, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:31] {721} INFO - Brief result: {'pred_time': 9.349722793136818e-06, 'wall_clock_time': 12.140351295471191, 'metric_for_logging': {'pred_time': 9.349722793136818e-06}, 'val_loss': 0.06334551199763017, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:31] {701} INFO - Number of trials: 22/1000000, 2 RUNNING, 20 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:31] {721} INFO - Brief result: {'pred_time': 2.8087609056113423e-06, 'wall_clock_time': 12.278619527816772, 'metric_for_logging': {'pred_time': 2.8087609056113423e-06}, 'val_loss': 0.11923570652710569, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:31] {721} INFO - Brief result: {'pred_time': 2.4744565936102383e-06, 'wall_clock_time': 12.490124225616455, 'metric_for_logging': {'pred_time': 2.4744565936102383e-06}, 'val_loss': 0.05603831341957144, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:31] {701} INFO - Number of trials: 24/1000000, 2 RUNNING, 22 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:31] {721} INFO - Brief result: {'pred_time': 6.302543308423913e-06, 'wall_clock_time': 12.612251281738281, 'metric_for_logging': {'pred_time': 6.302543308423913e-06}, 'val_loss': 0.051644119680063216, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:31] {721} INFO - Brief result: {'pred_time': 2.673570660577304e-06, 'wall_clock_time': 12.566608667373657, 'metric_for_logging': {'pred_time': 2.673570660577304e-06}, 'val_loss': 
0.0813172706625852, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:31] {701} INFO - Number of trials: 26/1000000, 2 RUNNING, 24 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:32] {721} INFO - Brief result: {'pred_time': 6.157850873643073e-06, 'wall_clock_time': 12.828747272491455, 'metric_for_logging': {'pred_time': 6.157850873643073e-06}, 'val_loss': 0.07173891576972447, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:32] {721} INFO - Brief result: {'pred_time': 1.0999648467354152e-05, 'wall_clock_time': 12.764892816543579, 'metric_for_logging': {'pred_time': 1.0999648467354152e-05}, 'val_loss': 0.07252888318356865, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:32] {701} INFO - Number of trials: 28/1000000, 2 RUNNING, 26 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:32] {721} INFO - Brief result: {'pred_time': 6.410090819649074e-06, 'wall_clock_time': 13.341551542282104, 'metric_for_logging': {'pred_time': 6.410090819649074e-06}, 'val_loss': 0.11864323096672269, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:32] {721} INFO - Brief result: {'pred_time': 7.317118022752845e-06, 'wall_clock_time': 13.118256092071533, 'metric_for_logging': {'pred_time': 7.317118022752845e-06}, 'val_loss': 0.05806260491754711, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:32] {701} INFO - Number of trials: 30/1000000, 2 RUNNING, 28 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:32] {721} INFO - Brief result: {'pred_time': 2.767296804898027e-06, 'wall_clock_time': 13.454796552658081, 'metric_for_logging': {'pred_time': 2.767296804898027e-06}, 'val_loss': 0.06240742569369018, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:32] {721} INFO - Brief result: {'pred_time': 2.6109425917915674e-06, 'wall_clock_time': 13.412111759185791, 'metric_for_logging': {'pred_time': 2.6109425917915674e-06}, 'val_loss': 0.050508541522662154, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:32] {701} INFO - Number of 
trials: 32/1000000, 2 RUNNING, 30 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:32] {721} INFO - Brief result: {'pred_time': 5.6373900261478145e-06, 'wall_clock_time': 13.58346176147461, 'metric_for_logging': {'pred_time': 5.6373900261478145e-06}, 'val_loss': 0.1298015206872717, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:32] {721} INFO - Brief result: {'pred_time': 5.983788034190303e-06, 'wall_clock_time': 13.700432062149048, 'metric_for_logging': {'pred_time': 5.983788034190303e-06}, 'val_loss': 0.11484151278759747, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:32] {701} INFO - Number of trials: 34/1000000, 2 RUNNING, 32 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:33] {721} INFO - Brief result: {'pred_time': 8.459972298663596e-06, 'wall_clock_time': 13.909964561462402, 'metric_for_logging': {'pred_time': 8.459972298663596e-06}, 'val_loss': 0.055593956749284024, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:33] {721} INFO - Brief result: {'pred_time': 5.493129509082739e-06, 'wall_clock_time': 13.925570249557495, 'metric_for_logging': {'pred_time': 5.493129509082739e-06}, 'val_loss': 0.055939567492841014, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:33] {701} INFO - Number of trials: 36/1000000, 2 RUNNING, 34 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:33] {721} INFO - Brief result: {'pred_time': 2.6143979335176772e-06, 'wall_clock_time': 14.180267810821533, 'metric_for_logging': {'pred_time': 2.6143979335176772e-06}, 'val_loss': 0.08348968105065668, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:33] {721} INFO - Brief result: {'pred_time': 2.6411768318950264e-06, 'wall_clock_time': 14.71433973312378, 'metric_for_logging': {'pred_time': 2.6411768318950264e-06}, 'val_loss': 0.4402093413646687, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:33] {701} INFO - Number of trials: 38/1000000, 2 RUNNING, 36 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:34] {721} INFO - Brief 
result: {'pred_time': 5.601972773455191e-06, 'wall_clock_time': 14.794866561889648, 'metric_for_logging': {'pred_time': 5.601972773455191e-06}, 'val_loss': 0.10427569862743158, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:34] {721} INFO - Brief result: {'pred_time': 9.106985036877619e-06, 'wall_clock_time': 14.92939567565918, 'metric_for_logging': {'pred_time': 9.106985036877619e-06}, 'val_loss': 0.0732201046706824, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:34] {701} INFO - Number of trials: 40/1000000, 2 RUNNING, 38 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:34] {721} INFO - Brief result: {'pred_time': 1.1574530947035637e-05, 'wall_clock_time': 15.093894243240356, 'metric_for_logging': {'pred_time': 1.1574530947035637e-05}, 'val_loss': 0.12525920805766755, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:34] {721} INFO - Brief result: {'pred_time': 2.6105106740758037e-06, 'wall_clock_time': 15.01662564277649, 'metric_for_logging': {'pred_time': 2.6105106740758037e-06}, 'val_loss': 0.07914486027451362, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:34] {701} INFO - Number of trials: 42/1000000, 2 RUNNING, 40 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:34] {721} INFO - Brief result: {'pred_time': 6.18549360745195e-06, 'wall_clock_time': 15.247915506362915, 'metric_for_logging': {'pred_time': 6.18549360745195e-06}, 'val_loss': 0.11627332872519003, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:34] {721} INFO - Brief result: {'pred_time': 5.855508472608483e-06, 'wall_clock_time': 15.360023498535156, 'metric_for_logging': {'pred_time': 5.855508472608483e-06}, 'val_loss': 0.07346696948750864, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:34] {701} INFO - Number of trials: 44/1000000, 2 RUNNING, 42 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:34] {721} INFO - Brief result: {'pred_time': 2.6701153188511944e-06, 'wall_clock_time': 15.488085269927979, 'metric_for_logging': 
{'pred_time': 2.6701153188511944e-06}, 'val_loss': 0.05534709193245779, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:34] {721} INFO - Brief result: {'pred_time': 9.4831853673078e-06, 'wall_clock_time': 15.555660009384155, 'metric_for_logging': {'pred_time': 9.4831853673078e-06}, 'val_loss': 0.07218327244001177, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:34] {701} INFO - Number of trials: 46/1000000, 2 RUNNING, 44 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:35] {721} INFO - Brief result: {'pred_time': 6.73402910647185e-06, 'wall_clock_time': 15.730143547058105, 'metric_for_logging': {'pred_time': 6.73402910647185e-06}, 'val_loss': 0.08077416806556736, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:35] {721} INFO - Brief result: {'pred_time': 2.6541343633679375e-06, 'wall_clock_time': 16.115678787231445, 'metric_for_logging': {'pred_time': 2.6541343633679375e-06}, 'val_loss': 0.4402093413646687, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:35] {701} INFO - Number of trials: 48/1000000, 2 RUNNING, 46 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:35] {721} INFO - Brief result: {'pred_time': 8.3088010981463e-06, 'wall_clock_time': 16.22883939743042, 'metric_for_logging': {'pred_time': 8.3088010981463e-06}, 'val_loss': 0.12920904512688847, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:35] {721} INFO - Brief result: {'pred_time': 2.6359938193058623e-06, 'wall_clock_time': 16.646353244781494, 'metric_for_logging': {'pred_time': 2.6359938193058623e-06}, 'val_loss': 0.44030808729139925, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:35] {701} INFO - Number of trials: 50/1000000, 2 RUNNING, 48 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:36] {721} INFO - Brief result: {'pred_time': 3.0307234197423078e-05, 'wall_clock_time': 16.778428554534912, 'metric_for_logging': {'pred_time': 3.0307234197423078e-05}, 'val_loss': 0.06798657055396462, 'trained_estimator': }\n", + 
"[flaml.tune.tune: 04-19 01:10:36] {721} INFO - Brief result: {'pred_time': 2.4200781531955886e-05, 'wall_clock_time': 16.88268756866455, 'metric_for_logging': {'pred_time': 2.4200781531955886e-05}, 'val_loss': 0.07435568282808336, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:36] {701} INFO - Number of trials: 52/1000000, 2 RUNNING, 50 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:36] {721} INFO - Brief result: {'pred_time': 2.8074651524640513e-06, 'wall_clock_time': 16.974034309387207, 'metric_for_logging': {'pred_time': 2.8074651524640513e-06}, 'val_loss': 0.05658141601658939, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:36] {721} INFO - Brief result: {'pred_time': 2.6446321736211362e-06, 'wall_clock_time': 17.52650499343872, 'metric_for_logging': {'pred_time': 2.6446321736211362e-06}, 'val_loss': 0.4402093413646687, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:36] {701} INFO - Number of trials: 54/1000000, 2 RUNNING, 52 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:36] {721} INFO - Brief result: {'pred_time': 6.419593009395876e-06, 'wall_clock_time': 17.642486095428467, 'metric_for_logging': {'pred_time': 6.419593009395876e-06}, 'val_loss': 0.09765972153648661, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:36] {721} INFO - Brief result: {'pred_time': 2.5258548017861187e-06, 'wall_clock_time': 17.6002094745636, 'metric_for_logging': {'pred_time': 2.5258548017861187e-06}, 'val_loss': 0.2373852078601758, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:36] {701} INFO - Number of trials: 56/1000000, 2 RUNNING, 54 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:36] {721} INFO - Brief result: {'pred_time': 8.018552393153094e-06, 'wall_clock_time': 17.772863388061523, 'metric_for_logging': {'pred_time': 8.018552393153094e-06}, 'val_loss': 0.11015108126789774, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:36] {721} INFO - Brief result: {'pred_time': 8.93680945686672e-06, 
'wall_clock_time': 17.81844425201416, 'metric_for_logging': {'pred_time': 8.93680945686672e-06}, 'val_loss': 0.06023501530561859, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:36] {701} INFO - Number of trials: 58/1000000, 2 RUNNING, 56 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:37] {721} INFO - Brief result: {'pred_time': 4.903561827065288e-06, 'wall_clock_time': 17.945078372955322, 'metric_for_logging': {'pred_time': 4.903561827065288e-06}, 'val_loss': 0.11385405352029232, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:37] {721} INFO - Brief result: {'pred_time': 6.04771185612333e-06, 'wall_clock_time': 18.01078748703003, 'metric_for_logging': {'pred_time': 6.04771185612333e-06}, 'val_loss': 0.08250222178335143, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:37] {701} INFO - Number of trials: 60/1000000, 2 RUNNING, 58 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:37] {721} INFO - Brief result: {'pred_time': 3.395737081334211e-06, 'wall_clock_time': 18.21552562713623, 'metric_for_logging': {'pred_time': 3.395737081334211e-06}, 'val_loss': 0.06472795497185735, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:37] {721} INFO - Brief result: {'pred_time': 6.033890489218892e-06, 'wall_clock_time': 18.311420917510986, 'metric_for_logging': {'pred_time': 6.033890489218892e-06}, 'val_loss': 0.10417695270070126, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:37] {701} INFO - Number of trials: 62/1000000, 2 RUNNING, 60 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:37] {721} INFO - Brief result: {'pred_time': 6.0904717099839365e-06, 'wall_clock_time': 18.445258855819702, 'metric_for_logging': {'pred_time': 6.0904717099839365e-06}, 'val_loss': 0.08437839439123151, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:37] {721} INFO - Brief result: {'pred_time': 5.839095599409463e-06, 'wall_clock_time': 18.58301091194153, 'metric_for_logging': {'pred_time': 5.839095599409463e-06}, 'val_loss': 
0.0753431420953885, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:37] {701} INFO - Number of trials: 64/1000000, 2 RUNNING, 62 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 5.846438200577445e-06, 'wall_clock_time': 18.726320266723633, 'metric_for_logging': {'pred_time': 5.846438200577445e-06}, 'val_loss': 0.09849906191369606, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 6.297360295834749e-06, 'wall_clock_time': 18.90593457221985, 'metric_for_logging': {'pred_time': 6.297360295834749e-06}, 'val_loss': 0.059494420855139785, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {701} INFO - Number of trials: 66/1000000, 2 RUNNING, 64 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 3.2454297162484433e-06, 'wall_clock_time': 18.985801696777344, 'metric_for_logging': {'pred_time': 3.2454297162484433e-06}, 'val_loss': 0.09415424113755311, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 9.18429830799932e-06, 'wall_clock_time': 19.04706835746765, 'metric_for_logging': {'pred_time': 9.18429830799932e-06}, 'val_loss': 0.11884072282018354, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {701} INFO - Number of trials: 68/1000000, 2 RUNNING, 66 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 3.5672084144924e-06, 'wall_clock_time': 19.174312353134155, 'metric_for_logging': {'pred_time': 3.5672084144924e-06}, 'val_loss': 0.06043250715907966, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 3.0838924905528193e-06, 'wall_clock_time': 19.106544256210327, 'metric_for_logging': {'pred_time': 3.0838924905528193e-06}, 'val_loss': 0.1773476844080183, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {701} INFO - Number of trials: 
70/1000000, 2 RUNNING, 68 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 2.6657961416935576e-06, 'wall_clock_time': 19.25450086593628, 'metric_for_logging': {'pred_time': 2.6657961416935576e-06}, 'val_loss': 0.07356571541423917, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 3.0126260674518086e-06, 'wall_clock_time': 19.338970184326172, 'metric_for_logging': {'pred_time': 3.0126260674518086e-06}, 'val_loss': 0.11257035647279534, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {701} INFO - Number of trials: 72/1000000, 2 RUNNING, 70 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 9.176955706831338e-06, 'wall_clock_time': 19.547762393951416, 'metric_for_logging': {'pred_time': 9.176955706831338e-06}, 'val_loss': 0.055198973042361876, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 2.90421472079512e-06, 'wall_clock_time': 19.430681467056274, 'metric_for_logging': {'pred_time': 2.90421472079512e-06}, 'val_loss': 0.07529376913202335, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {701} INFO - Number of trials: 74/1000000, 2 RUNNING, 72 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 5.785105884939e-06, 'wall_clock_time': 19.72303557395935, 'metric_for_logging': {'pred_time': 5.785105884939e-06}, 'val_loss': 0.07573812580231065, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {721} INFO - Brief result: {'pred_time': 6.937462350596553e-06, 'wall_clock_time': 19.632790088653564, 'metric_for_logging': {'pred_time': 6.937462350596553e-06}, 'val_loss': 0.05608768638293671, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:38] {701} INFO - Number of trials: 76/1000000, 2 RUNNING, 74 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:39] {721} INFO - Brief result: 
{'pred_time': 8.21421111839405e-06, 'wall_clock_time': 19.933900833129883, 'metric_for_logging': {'pred_time': 8.21421111839405e-06}, 'val_loss': 0.1174089068825912, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:39] {721} INFO - Brief result: {'pred_time': 5.2931516066841455e-06, 'wall_clock_time': 19.92952609062195, 'metric_for_logging': {'pred_time': 5.2931516066841455e-06}, 'val_loss': 0.07104769428261082, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:39] {701} INFO - Number of trials: 78/1000000, 2 RUNNING, 76 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:39] {721} INFO - Brief result: {'pred_time': 3.788782202679178e-06, 'wall_clock_time': 20.200384855270386, 'metric_for_logging': {'pred_time': 3.788782202679178e-06}, 'val_loss': 0.0743063098647182, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:39] {721} INFO - Brief result: {'pred_time': 4.645275033038596e-06, 'wall_clock_time': 20.132648468017578, 'metric_for_logging': {'pred_time': 4.645275033038596e-06}, 'val_loss': 0.13641749777821666, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:39] {701} INFO - Number of trials: 80/1000000, 2 RUNNING, 78 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:39] {721} INFO - Brief result: {'pred_time': 8.836604546809543e-06, 'wall_clock_time': 20.385242700576782, 'metric_for_logging': {'pred_time': 8.836604546809543e-06}, 'val_loss': 0.05100227115631484, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:39] {721} INFO - Brief result: {'pred_time': 5.2603258602861045e-06, 'wall_clock_time': 20.43856120109558, 'metric_for_logging': {'pred_time': 5.2603258602861045e-06}, 'val_loss': 0.0940061222474573, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:39] {701} INFO - Number of trials: 82/1000000, 2 RUNNING, 80 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:39] {721} INFO - Brief result: {'pred_time': 5.6779902914296025e-06, 'wall_clock_time': 20.56763219833374, 'metric_for_logging': {'pred_time': 
5.6779902914296025e-06}, 'val_loss': 0.09306803594351742, 'trained_estimator': }\n", + "...\n", + "[flaml.tune.tune: 04-19 01:10:52] {701} INFO - Number of trials: 208/1000000, 2 RUNNING, 206 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:52] {721} INFO - Brief result: {'pred_time': 5.34152639084968e-06, 'wall_clock_time': 33.76783299446106, 'metric_for_logging': {'pred_time': 5.34152639084968e-06}, 'val_loss': 0.049274217438530554, 'trained_estimator': }\n", +
"[flaml.tune.tune: 04-19 01:10:53] {721} INFO - Brief result: {'pred_time': 9.49700673421224e-06, 'wall_clock_time': 33.82385492324829, 'metric_for_logging': {'pred_time': 9.49700673421224e-06}, 'val_loss': 0.08674829663276395, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:53] {701} INFO - Number of trials: 210/1000000, 2 RUNNING, 208 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:53] {721} INFO - Brief result: {'pred_time': 5.49096992050392e-06, 'wall_clock_time': 34.218355894088745, 'metric_for_logging': {'pred_time': 5.49096992050392e-06}, 'val_loss': 0.17152167473091728, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:53] {721} INFO - Brief result: {'pred_time': 3.0247197634931924e-06, 'wall_clock_time': 33.99519920349121, 'metric_for_logging': {'pred_time': 3.0247197634931924e-06}, 'val_loss': 0.12313617063296123, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:53] {701} INFO - Number of trials: 212/1000000, 2 RUNNING, 210 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:53] {721} INFO - Brief result: {'pred_time': 5.79676766326462e-06, 'wall_clock_time': 34.37370991706848, 'metric_for_logging': {'pred_time': 5.79676766326462e-06}, 'val_loss': 0.1578947368421052, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:53] {721} INFO - Brief result: {'pred_time': 2.651974774789119e-06, 'wall_clock_time': 34.314613342285156, 'metric_for_logging': {'pred_time': 2.651974774789119e-06}, 'val_loss': 0.05455712451861361, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:53] {701} INFO - Number of trials: 214/1000000, 2 RUNNING, 212 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:53] {721} INFO - Brief result: {'pred_time': 9.071567784184994e-06, 'wall_clock_time': 34.56174850463867, 'metric_for_logging': {'pred_time': 9.071567784184994e-06}, 'val_loss': 0.10402883381060535, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:53] {721} INFO - Brief result: {'pred_time': 5.7172948035640996e-06, 
'wall_clock_time': 34.53734111785889, 'metric_for_logging': {'pred_time': 5.7172948035640996e-06}, 'val_loss': 0.07238076429347284, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:53] {701} INFO - Number of trials: 216/1000000, 2 RUNNING, 214 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 9.425308393395465e-06, 'wall_clock_time': 34.88150906562805, 'metric_for_logging': {'pred_time': 9.425308393395465e-06}, 'val_loss': 0.1479707712056878, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 2.7465647545413696e-06, 'wall_clock_time': 34.8332679271698, 'metric_for_logging': {'pred_time': 2.7465647545413696e-06}, 'val_loss': 0.049619828182087544, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {701} INFO - Number of trials: 218/1000000, 2 RUNNING, 216 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 2.4900056313777315e-06, 'wall_clock_time': 34.96613597869873, 'metric_for_logging': {'pred_time': 2.4900056313777315e-06}, 'val_loss': 0.07904611434778319, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 2.7323114699211675e-06, 'wall_clock_time': 35.01647138595581, 'metric_for_logging': {'pred_time': 2.7323114699211675e-06}, 'val_loss': 0.060333761232349126, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {701} INFO - Number of trials: 220/1000000, 2 RUNNING, 218 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 5.6451645450315615e-06, 'wall_clock_time': 35.20986366271973, 'metric_for_logging': {'pred_time': 5.6451645450315615e-06}, 'val_loss': 0.21413054211513782, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 2.86102294921875e-06, 'wall_clock_time': 35.111485958099365, 'metric_for_logging': {'pred_time': 2.86102294921875e-06}, 
'val_loss': 0.07410881801125702, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {701} INFO - Number of trials: 222/1000000, 2 RUNNING, 220 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 2.972025802170021e-06, 'wall_clock_time': 35.368159770965576, 'metric_for_logging': {'pred_time': 2.972025802170021e-06}, 'val_loss': 0.07652809321615495, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 2.5068504222925157e-06, 'wall_clock_time': 35.347482442855835, 'metric_for_logging': {'pred_time': 2.5068504222925157e-06}, 'val_loss': 0.08329218919719572, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {701} INFO - Number of trials: 224/1000000, 2 RUNNING, 222 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 2.96727470729662e-06, 'wall_clock_time': 35.545833587646484, 'metric_for_logging': {'pred_time': 2.96727470729662e-06}, 'val_loss': 0.10837365458674819, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 8.548515430395154e-06, 'wall_clock_time': 35.56863260269165, 'metric_for_logging': {'pred_time': 8.548515430395154e-06}, 'val_loss': 0.09420361410091838, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {701} INFO - Number of trials: 226/1000000, 2 RUNNING, 224 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 3.00226004227348e-06, 'wall_clock_time': 35.75267171859741, 'metric_for_logging': {'pred_time': 3.00226004227348e-06}, 'val_loss': 0.11108916757183762, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {721} INFO - Brief result: {'pred_time': 2.627787382706352e-06, 'wall_clock_time': 35.69258713722229, 'metric_for_logging': {'pred_time': 2.627787382706352e-06}, 'val_loss': 0.41122741186926015, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:54] {701} INFO - 
Number of trials: 228/1000000, 2 RUNNING, 226 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:55] {721} INFO - Brief result: {'pred_time': 9.312145951865375e-06, 'wall_clock_time': 35.95788073539734, 'metric_for_logging': {'pred_time': 9.312145951865375e-06}, 'val_loss': 0.10491754715118007, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:55] {721} INFO - Brief result: {'pred_time': 9.375205938366877e-06, 'wall_clock_time': 35.95219969749451, 'metric_for_logging': {'pred_time': 9.375205938366877e-06}, 'val_loss': 0.06378986866791747, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:55] {701} INFO - Number of trials: 230/1000000, 2 RUNNING, 228 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:55] {721} INFO - Brief result: {'pred_time': 3.4726184347401494e-06, 'wall_clock_time': 36.27410364151001, 'metric_for_logging': {'pred_time': 3.4726184347401494e-06}, 'val_loss': 0.10728744939271262, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:55] {721} INFO - Brief result: {'pred_time': 3.255795741426772e-06, 'wall_clock_time': 36.23434376716614, 'metric_for_logging': {'pred_time': 3.255795741426772e-06}, 'val_loss': 0.05342154636121266, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:55] {701} INFO - Number of trials: 232/1000000, 2 RUNNING, 230 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:55] {721} INFO - Brief result: {'pred_time': 2.893848695616791e-06, 'wall_clock_time': 36.45158767700195, 'metric_for_logging': {'pred_time': 2.893848695616791e-06}, 'val_loss': 0.059741285671966016, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:55] {721} INFO - Brief result: {'pred_time': 3.4752099410347315e-06, 'wall_clock_time': 36.45778226852417, 'metric_for_logging': {'pred_time': 3.4752099410347315e-06}, 'val_loss': 0.06744346795694689, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:55] {701} INFO - Number of trials: 234/1000000, 2 RUNNING, 232 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:55] {721} 
INFO - Brief result: {'pred_time': 9.550132613251174e-06, 'wall_clock_time': 36.67495918273926, 'metric_for_logging': {'pred_time': 9.550132613251174e-06}, 'val_loss': 0.1305421151377505, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:55] {721} INFO - Brief result: {'pred_time': 2.708555995554164e-06, 'wall_clock_time': 36.53857660293579, 'metric_for_logging': {'pred_time': 2.708555995554164e-06}, 'val_loss': 0.06482670089858789, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:55] {701} INFO - Number of trials: 236/1000000, 2 RUNNING, 234 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 7.125346556953762e-06, 'wall_clock_time': 36.97246479988098, 'metric_for_logging': {'pred_time': 7.125346556953762e-06}, 'val_loss': 0.30147131430828467, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 5.7661015054453975e-06, 'wall_clock_time': 36.861183166503906, 'metric_for_logging': {'pred_time': 5.7661015054453975e-06}, 'val_loss': 0.05416214081169157, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {701} INFO - Number of trials: 238/1000000, 2 RUNNING, 236 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 3.4203563911327417e-06, 'wall_clock_time': 37.19525623321533, 'metric_for_logging': {'pred_time': 3.4203563911327417e-06}, 'val_loss': 0.05564332971264929, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 2.5478826052900674e-06, 'wall_clock_time': 37.06268095970154, 'metric_for_logging': {'pred_time': 2.5478826052900674e-06}, 'val_loss': 0.09884467265725294, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {701} INFO - Number of trials: 240/1000000, 2 RUNNING, 238 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 7.863062015478162e-06, 'wall_clock_time': 37.32249116897583, 
'metric_for_logging': {'pred_time': 7.863062015478162e-06}, 'val_loss': 0.055544583785918866, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 2.569910408794016e-06, 'wall_clock_time': 37.285977840423584, 'metric_for_logging': {'pred_time': 2.569910408794016e-06}, 'val_loss': 0.09212994963957744, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {701} INFO - Number of trials: 242/1000000, 2 RUNNING, 240 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 7.406093072200167e-06, 'wall_clock_time': 37.459550619125366, 'metric_for_logging': {'pred_time': 7.406093072200167e-06}, 'val_loss': 0.11138540535202912, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 2.6973261349443076e-06, 'wall_clock_time': 37.50492024421692, 'metric_for_logging': {'pred_time': 2.6973261349443076e-06}, 'val_loss': 0.2591093117408907, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {701} INFO - Number of trials: 244/1000000, 2 RUNNING, 242 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 2.6683876479881397e-06, 'wall_clock_time': 37.67455983161926, 'metric_for_logging': {'pred_time': 2.6683876479881397e-06}, 'val_loss': 0.06729534906685097, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {721} INFO - Brief result: {'pred_time': 8.607256239739018e-06, 'wall_clock_time': 37.70312213897705, 'metric_for_logging': {'pred_time': 8.607256239739018e-06}, 'val_loss': 0.1175570257726869, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:56] {701} INFO - Number of trials: 246/1000000, 2 RUNNING, 244 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 3.0657519464907437e-06, 'wall_clock_time': 37.822755575180054, 'metric_for_logging': {'pred_time': 3.0657519464907437e-06}, 'val_loss': 0.4621309370988447, 
'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 5.6136345517808115e-06, 'wall_clock_time': 37.83004283905029, 'metric_for_logging': {'pred_time': 5.6136345517808115e-06}, 'val_loss': 0.13577564925446817, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {701} INFO - Number of trials: 248/1000000, 2 RUNNING, 246 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 5.581154339555381e-05, 'wall_clock_time': 38.076497077941895, 'metric_for_logging': {'pred_time': 5.581154339555381e-05}, 'val_loss': 0.06453046311839628, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 5.926774895709494e-06, 'wall_clock_time': 38.15274381637573, 'metric_for_logging': {'pred_time': 5.926774895709494e-06}, 'val_loss': 0.09059938777525434, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {701} INFO - Number of trials: 250/1000000, 2 RUNNING, 248 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 2.4960524793984233e-06, 'wall_clock_time': 38.23988962173462, 'metric_for_logging': {'pred_time': 2.4960524793984233e-06}, 'val_loss': 0.11429841019057951, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 6.7906103272368945e-06, 'wall_clock_time': 38.34436869621277, 'metric_for_logging': {'pred_time': 6.7906103272368945e-06}, 'val_loss': 0.1371087192653303, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {701} INFO - Number of trials: 252/1000000, 2 RUNNING, 250 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 5.649051804473435e-06, 'wall_clock_time': 38.4815137386322, 'metric_for_logging': {'pred_time': 5.649051804473435e-06}, 'val_loss': 0.061913696060037604, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 
2.6454960090526636e-06, 'wall_clock_time': 38.45310354232788, 'metric_for_logging': {'pred_time': 2.6454960090526636e-06}, 'val_loss': 0.08052730324874102, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {701} INFO - Number of trials: 254/1000000, 2 RUNNING, 252 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 2.513761105744735e-06, 'wall_clock_time': 38.581411361694336, 'metric_for_logging': {'pred_time': 2.513761105744735e-06}, 'val_loss': 0.06843092722425193, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {721} INFO - Brief result: {'pred_time': 8.998141772505166e-06, 'wall_clock_time': 38.6700804233551, 'metric_for_logging': {'pred_time': 8.998141772505166e-06}, 'val_loss': 0.1285671966031401, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:57] {701} INFO - Number of trials: 256/1000000, 2 RUNNING, 254 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:58] {721} INFO - Brief result: {'pred_time': 3.0113303143045176e-06, 'wall_clock_time': 38.780728578567505, 'metric_for_logging': {'pred_time': 3.0113303143045176e-06}, 'val_loss': 0.13503505480398936, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:58] {721} INFO - Brief result: {'pred_time': 7.608662480893343e-06, 'wall_clock_time': 38.83242845535278, 'metric_for_logging': {'pred_time': 7.608662480893343e-06}, 'val_loss': 0.07025772686876675, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:58] {701} INFO - Number of trials: 258/1000000, 2 RUNNING, 256 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:58] {721} INFO - Brief result: {'pred_time': 3.6324279895727187e-06, 'wall_clock_time': 39.20739531517029, 'metric_for_logging': {'pred_time': 3.6324279895727187e-06}, 'val_loss': 0.09232744149303851, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:58] {721} INFO - Brief result: {'pred_time': 2.8761400692704795e-06, 'wall_clock_time': 39.027679681777954, 'metric_for_logging': {'pred_time': 
2.8761400692704795e-06}, 'val_loss': 0.06255554458378598, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:58] {701} INFO - Number of trials: 260/1000000, 2 RUNNING, 258 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:58] {721} INFO - Brief result: {'pred_time': 2.554361371026523e-06, 'wall_clock_time': 39.34074378013611, 'metric_for_logging': {'pred_time': 2.554361371026523e-06}, 'val_loss': 0.0765774661795201, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:58] {721} INFO - Brief result: {'pred_time': 3.179346305736597e-06, 'wall_clock_time': 39.41369390487671, 'metric_for_logging': {'pred_time': 3.179346305736597e-06}, 'val_loss': 0.09519107336822352, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:58] {701} INFO - Number of trials: 262/1000000, 2 RUNNING, 260 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:58] {721} INFO - Brief result: {'pred_time': 2.9124211573946304e-06, 'wall_clock_time': 39.663169384002686, 'metric_for_logging': {'pred_time': 2.9124211573946304e-06}, 'val_loss': 0.16885553470919323, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:58] {721} INFO - Brief result: {'pred_time': 2.803145975306414e-06, 'wall_clock_time': 39.59644675254822, 'metric_for_logging': {'pred_time': 2.803145975306414e-06}, 'val_loss': 0.05342154636121266, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:58] {701} INFO - Number of trials: 264/1000000, 2 RUNNING, 262 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 3.4821206244869507e-06, 'wall_clock_time': 39.82452154159546, 'metric_for_logging': {'pred_time': 3.4821206244869507e-06}, 'val_loss': 0.23669398637306205, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 2.6787536731664687e-06, 'wall_clock_time': 39.75146722793579, 'metric_for_logging': {'pred_time': 2.6787536731664687e-06}, 'val_loss': 0.07499753135183174, 'trained_estimator': }\n", + "[flaml.tune.tune: 
04-19 01:10:59] {701} INFO - Number of trials: 266/1000000, 2 RUNNING, 264 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 5.794608074685802e-06, 'wall_clock_time': 40.1025927066803, 'metric_for_logging': {'pred_time': 5.794608074685802e-06}, 'val_loss': 0.08610644810901547, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 2.6079191677812217e-06, 'wall_clock_time': 39.98125743865967, 'metric_for_logging': {'pred_time': 2.6079191677812217e-06}, 'val_loss': 0.05756887528389454, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {701} INFO - Number of trials: 268/1000000, 2 RUNNING, 266 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 2.8990317082059556e-06, 'wall_clock_time': 40.21757125854492, 'metric_for_logging': {'pred_time': 2.8990317082059556e-06}, 'val_loss': 0.14881011158289725, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 6.962081660395083e-06, 'wall_clock_time': 40.238648414611816, 'metric_for_logging': {'pred_time': 6.962081660395083e-06}, 'val_loss': 0.05391527599486523, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {701} INFO - Number of trials: 270/1000000, 2 RUNNING, 268 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 3.165093021116395e-06, 'wall_clock_time': 40.4962375164032, 'metric_for_logging': {'pred_time': 3.165093021116395e-06}, 'val_loss': 0.09272242519996055, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 4.047932832137398e-06, 'wall_clock_time': 40.44546318054199, 'metric_for_logging': {'pred_time': 4.047932832137398e-06}, 'val_loss': 0.08492149698824925, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {701} INFO - Number of trials: 272/1000000, 2 RUNNING, 270 TERMINATED\n", + 
"[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 6.80875087129897e-06, 'wall_clock_time': 40.72140169143677, 'metric_for_logging': {'pred_time': 6.80875087129897e-06}, 'val_loss': 0.09805470524340876, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {721} INFO - Brief result: {'pred_time': 1.701237498850062e-05, 'wall_clock_time': 40.7727632522583, 'metric_for_logging': {'pred_time': 1.701237498850062e-05}, 'val_loss': 0.08047793028537575, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:10:59] {701} INFO - Number of trials: 274/1000000, 2 RUNNING, 272 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 8.19434290346892e-06, 'wall_clock_time': 40.864758014678955, 'metric_for_logging': {'pred_time': 8.19434290346892e-06}, 'val_loss': 0.09237681445640367, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 2.670547236566958e-06, 'wall_clock_time': 40.98264765739441, 'metric_for_logging': {'pred_time': 2.670547236566958e-06}, 'val_loss': 0.055248346005727256, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {701} INFO - Number of trials: 276/1000000, 2 RUNNING, 274 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 3.49723774453868e-06, 'wall_clock_time': 41.172908306121826, 'metric_for_logging': {'pred_time': 3.49723774453868e-06}, 'val_loss': 0.06403673348474381, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 3.149543983348902e-06, 'wall_clock_time': 41.09277629852295, 'metric_for_logging': {'pred_time': 3.149543983348902e-06}, 'val_loss': 0.13394884960995357, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {701} INFO - Number of trials: 278/1000000, 2 RUNNING, 276 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 2.888665683027627e-06, 
'wall_clock_time': 41.384995460510254, 'metric_for_logging': {'pred_time': 2.888665683027627e-06}, 'val_loss': 0.0687765379678088, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 2.6627727176832115e-06, 'wall_clock_time': 41.27396845817566, 'metric_for_logging': {'pred_time': 2.6627727176832115e-06}, 'val_loss': 0.09084625259208057, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {701} INFO - Number of trials: 280/1000000, 2 RUNNING, 278 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 5.655098652494127e-06, 'wall_clock_time': 41.566795349121094, 'metric_for_logging': {'pred_time': 5.655098652494127e-06}, 'val_loss': 0.07786116322701697, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 2.7366306470788042e-06, 'wall_clock_time': 41.46762752532959, 'metric_for_logging': {'pred_time': 2.7366306470788042e-06}, 'val_loss': 0.08980942036141015, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {701} INFO - Number of trials: 282/1000000, 2 RUNNING, 280 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 2.653702445652174e-06, 'wall_clock_time': 41.74090266227722, 'metric_for_logging': {'pred_time': 2.653702445652174e-06}, 'val_loss': 0.19467759454922495, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {721} INFO - Brief result: {'pred_time': 2.9698662135912026e-06, 'wall_clock_time': 41.69465708732605, 'metric_for_logging': {'pred_time': 2.9698662135912026e-06}, 'val_loss': 0.11010170830453259, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:00] {701} INFO - Number of trials: 284/1000000, 2 RUNNING, 282 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:01] {721} INFO - Brief result: {'pred_time': 2.89600828419561e-06, 'wall_clock_time': 41.845386266708374, 'metric_for_logging': {'pred_time': 2.89600828419561e-06}, 
'val_loss': 0.059198183074948174, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:01] {721} INFO - Brief result: {'pred_time': 6.151804025622382e-06, 'wall_clock_time': 41.970664978027344, 'metric_for_logging': {'pred_time': 6.151804025622382e-06}, 'val_loss': 0.06828280833415634, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:01] {701} INFO - Number of trials: 286/1000000, 2 RUNNING, 284 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:01] {721} INFO - Brief result: {'pred_time': 4.253093747125155e-06, 'wall_clock_time': 42.280126094818115, 'metric_for_logging': {'pred_time': 4.253093747125155e-06}, 'val_loss': 0.11671768539547744, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:01] {721} INFO - Brief result: {'pred_time': 2.5176483651866084e-06, 'wall_clock_time': 42.05795693397522, 'metric_for_logging': {'pred_time': 2.5176483651866084e-06}, 'val_loss': 0.37612323491655975, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:01] {701} INFO - Number of trials: 288/1000000, 2 RUNNING, 286 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:01] {721} INFO - Brief result: {'pred_time': 2.898599790490192e-06, 'wall_clock_time': 42.46685433387756, 'metric_for_logging': {'pred_time': 2.898599790490192e-06}, 'val_loss': 0.10728744939271262, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:01] {721} INFO - Brief result: {'pred_time': 2.447245777517125e-06, 'wall_clock_time': 42.36459541320801, 'metric_for_logging': {'pred_time': 2.447245777517125e-06}, 'val_loss': 0.055593956749284024, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:01] {701} INFO - Number of trials: 290/1000000, 2 RUNNING, 288 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:01] {721} INFO - Brief result: {'pred_time': 8.246604947076328e-06, 'wall_clock_time': 42.61330699920654, 'metric_for_logging': {'pred_time': 8.246604947076328e-06}, 'val_loss': 0.2013429446035352, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:01] {721} INFO 
- Brief result: {'pred_time': 2.5344931561013926e-06, 'wall_clock_time': 42.59594440460205, 'metric_for_logging': {'pred_time': 2.5344931561013926e-06}, 'val_loss': 0.06137059346301965, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:01] {701} INFO - Number of trials: 292/1000000, 2 RUNNING, 290 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:02] {721} INFO - Brief result: {'pred_time': 5.571306615635969e-06, 'wall_clock_time': 42.7778422832489, 'metric_for_logging': {'pred_time': 5.571306615635969e-06}, 'val_loss': 0.05998815048879236, 'trained_estimator': }\n", + "[... repetitive per-trial output elided: identical 'Brief result' / 'Number of trials' log pairs for trials 292-420 of 1000000, wall clock 42.6-56.4 s, logging pred_time and val_loss for each trial ...]\n", + "[flaml.tune.tune: 04-19 01:11:15] {721} INFO - Brief result: {'pred_time': 2.4718650873156562e-06, 'wall_clock_time': 56.273961305618286, 'metric_for_logging': {'pred_time': 2.4718650873156562e-06}, 'val_loss': 0.16317764392218825, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:15] {701} 
INFO - Number of trials: 424/1000000, 2 RUNNING, 422 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:15] {721} INFO - Brief result: {'pred_time': 4.131292951279792e-06, 'wall_clock_time': 56.527623414993286, 'metric_for_logging': {'pred_time': 4.131292951279792e-06}, 'val_loss': 0.12007504690431514, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:15] {721} INFO - Brief result: {'pred_time': 2.805305563885233e-06, 'wall_clock_time': 56.48895716667175, 'metric_for_logging': {'pred_time': 2.805305563885233e-06}, 'val_loss': 0.07233139133010769, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:15] {701} INFO - Number of trials: 426/1000000, 2 RUNNING, 424 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:15] {721} INFO - Brief result: {'pred_time': 2.6951665463654892e-06, 'wall_clock_time': 56.61201238632202, 'metric_for_logging': {'pred_time': 2.6951665463654892e-06}, 'val_loss': 0.08640268588920708, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:15] {721} INFO - Brief result: {'pred_time': 2.583299857982691e-06, 'wall_clock_time': 56.75276756286621, 'metric_for_logging': {'pred_time': 2.583299857982691e-06}, 'val_loss': 0.07805865508047793, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:15] {701} INFO - Number of trials: 428/1000000, 2 RUNNING, 426 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 2.8942806133325547e-06, 'wall_clock_time': 56.92208170890808, 'metric_for_logging': {'pred_time': 2.8942806133325547e-06}, 'val_loss': 0.0741581909746224, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 2.1483155264370684e-05, 'wall_clock_time': 56.88188314437866, 'metric_for_logging': {'pred_time': 2.1483155264370684e-05}, 'val_loss': 0.09252493334649958, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {701} INFO - Number of trials: 430/1000000, 2 RUNNING, 428 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:16] 
{721} INFO - Brief result: {'pred_time': 2.805305563885233e-06, 'wall_clock_time': 57.00736331939697, 'metric_for_logging': {'pred_time': 2.805305563885233e-06}, 'val_loss': 0.0736644613409696, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 2.6904154514920884e-06, 'wall_clock_time': 57.13954186439514, 'metric_for_logging': {'pred_time': 2.6904154514920884e-06}, 'val_loss': 0.17705144662782657, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {701} INFO - Number of trials: 432/1000000, 2 RUNNING, 430 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 6.023092546324799e-06, 'wall_clock_time': 57.27061152458191, 'metric_for_logging': {'pred_time': 6.023092546324799e-06}, 'val_loss': 0.05840821566110388, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 2.7590903682985166e-06, 'wall_clock_time': 57.24157786369324, 'metric_for_logging': {'pred_time': 2.7590903682985166e-06}, 'val_loss': 0.09543793818504986, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {701} INFO - Number of trials: 434/1000000, 2 RUNNING, 432 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 8.891458096711531e-06, 'wall_clock_time': 57.404789209365845, 'metric_for_logging': {'pred_time': 8.891458096711531e-06}, 'val_loss': 0.09380863039399634, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 2.835539803988692e-06, 'wall_clock_time': 57.5037739276886, 'metric_for_logging': {'pred_time': 2.835539803988692e-06}, 'val_loss': 0.08339093512392626, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {701} INFO - Number of trials: 436/1000000, 2 RUNNING, 434 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 5.978605021601138e-06, 'wall_clock_time': 57.60341668128967, 
'metric_for_logging': {'pred_time': 5.978605021601138e-06}, 'val_loss': 0.051397254863236985, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {721} INFO - Brief result: {'pred_time': 3.007011137146881e-06, 'wall_clock_time': 57.65639519691467, 'metric_for_logging': {'pred_time': 3.007011137146881e-06}, 'val_loss': 0.11543398834798069, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:16] {701} INFO - Number of trials: 438/1000000, 2 RUNNING, 436 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 6.969424261563066e-06, 'wall_clock_time': 57.96469497680664, 'metric_for_logging': {'pred_time': 6.969424261563066e-06}, 'val_loss': 0.08798262071689544, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 6.382880003555961e-06, 'wall_clock_time': 57.78635215759277, 'metric_for_logging': {'pred_time': 6.382880003555961e-06}, 'val_loss': 0.11716204206576497, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {701} INFO - Number of trials: 440/1000000, 2 RUNNING, 438 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 6.960353989532028e-06, 'wall_clock_time': 58.084784269332886, 'metric_for_logging': {'pred_time': 6.960353989532028e-06}, 'val_loss': 0.10116520193542022, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 4.088533097419186e-06, 'wall_clock_time': 58.083258628845215, 'metric_for_logging': {'pred_time': 4.088533097419186e-06}, 'val_loss': 0.056137059346301976, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {701} INFO - Number of trials: 442/1000000, 2 RUNNING, 440 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 8.479408595872963e-06, 'wall_clock_time': 58.22364544868469, 'metric_for_logging': {'pred_time': 8.479408595872963e-06}, 'val_loss': 0.12007504690431525, 
'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 1.3571286547011223e-05, 'wall_clock_time': 58.23431181907654, 'metric_for_logging': {'pred_time': 1.3571286547011223e-05}, 'val_loss': 0.06354300385109113, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {701} INFO - Number of trials: 444/1000000, 2 RUNNING, 442 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 3.421220226564269e-06, 'wall_clock_time': 58.33852028846741, 'metric_for_logging': {'pred_time': 3.421220226564269e-06}, 'val_loss': 0.0461637207465192, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 6.137982658717943e-06, 'wall_clock_time': 58.38371539115906, 'metric_for_logging': {'pred_time': 6.137982658717943e-06}, 'val_loss': 0.09361113854053527, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {701} INFO - Number of trials: 446/1000000, 2 RUNNING, 444 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 5.828729574231134e-06, 'wall_clock_time': 58.52914047241211, 'metric_for_logging': {'pred_time': 5.828729574231134e-06}, 'val_loss': 0.06852967315098257, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 3.4436799477839815e-06, 'wall_clock_time': 58.49435234069824, 'metric_for_logging': {'pred_time': 3.4436799477839815e-06}, 'val_loss': 0.10792929791646111, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {701} INFO - Number of trials: 448/1000000, 2 RUNNING, 446 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 2.9461107392241988e-06, 'wall_clock_time': 58.64449882507324, 'metric_for_logging': {'pred_time': 2.9461107392241988e-06}, 'val_loss': 0.06601165201935422, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 
6.038209666376528e-06, 'wall_clock_time': 58.666993618011475, 'metric_for_logging': {'pred_time': 6.038209666376528e-06}, 'val_loss': 0.08526710773180612, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {701} INFO - Number of trials: 450/1000000, 2 RUNNING, 448 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 2.924946771151778e-06, 'wall_clock_time': 58.79150605201721, 'metric_for_logging': {'pred_time': 2.924946771151778e-06}, 'val_loss': 0.05840821566110399, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {721} INFO - Brief result: {'pred_time': 5.7293884996054825e-06, 'wall_clock_time': 58.808069705963135, 'metric_for_logging': {'pred_time': 5.7293884996054825e-06}, 'val_loss': 0.05717389157697239, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:17] {701} INFO - Number of trials: 452/1000000, 2 RUNNING, 450 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:18] {721} INFO - Brief result: {'pred_time': 5.896540655606035e-06, 'wall_clock_time': 58.95470404624939, 'metric_for_logging': {'pred_time': 5.896540655606035e-06}, 'val_loss': 0.0776636713735559, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:18] {721} INFO - Brief result: {'pred_time': 1.275496206421783e-05, 'wall_clock_time': 59.13272428512573, 'metric_for_logging': {'pred_time': 1.275496206421783e-05}, 'val_loss': 0.1166189394687468, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:18] {701} INFO - Number of trials: 454/1000000, 2 RUNNING, 452 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:18] {721} INFO - Brief result: {'pred_time': 6.439461224321006e-06, 'wall_clock_time': 59.39792513847351, 'metric_for_logging': {'pred_time': 6.439461224321006e-06}, 'val_loss': 0.11528586945788488, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:18] {721} INFO - Brief result: {'pred_time': 5.8434147765671e-06, 'wall_clock_time': 59.28473091125488, 'metric_for_logging': {'pred_time': 
5.8434147765671e-06}, 'val_loss': 0.1259504295447813, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:18] {701} INFO - Number of trials: 456/1000000, 2 RUNNING, 454 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:18] {721} INFO - Brief result: {'pred_time': 5.742346031078394e-06, 'wall_clock_time': 59.60823345184326, 'metric_for_logging': {'pred_time': 5.742346031078394e-06}, 'val_loss': 0.07070208353905405, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:18] {721} INFO - Brief result: {'pred_time': 5.849029706872028e-06, 'wall_clock_time': 59.67988133430481, 'metric_for_logging': {'pred_time': 5.849029706872028e-06}, 'val_loss': 0.08062604917547156, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:18] {701} INFO - Number of trials: 458/1000000, 2 RUNNING, 456 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:19] {721} INFO - Brief result: {'pred_time': 5.664168924525164e-06, 'wall_clock_time': 59.888566970825195, 'metric_for_logging': {'pred_time': 5.664168924525164e-06}, 'val_loss': 0.09632665152562458, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:19] {721} INFO - Brief result: {'pred_time': 2.696894217228544e-06, 'wall_clock_time': 59.753591537475586, 'metric_for_logging': {'pred_time': 2.696894217228544e-06}, 'val_loss': 0.06941838649155718, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:19] {701} INFO - Number of trials: 460/1000000, 2 RUNNING, 458 TERMINATED\n", + "[flaml.tune.tune: 04-19 01:11:19] {721} INFO - Brief result: {'pred_time': 1.2012063593104266e-05, 'wall_clock_time': 60.01166772842407, 'metric_for_logging': {'pred_time': 1.2012063593104266e-05}, 'val_loss': 0.06255554458378598, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:19] {721} INFO - Brief result: {'pred_time': 2.9016232145005376e-06, 'wall_clock_time': 60.00053381919861, 'metric_for_logging': {'pred_time': 2.9016232145005376e-06}, 'val_loss': 0.09810407820677403, 'trained_estimator': }\n", + "[flaml.tune.tune: 
04-19 01:11:19] {701} INFO - Number of trials: 462/1000000, 2 RUNNING, 460 TERMINATED\n", + "\n", + "[flaml.tune.tune: 04-19 01:11:19] {721} INFO - Brief result: {'pred_time': 6.200178809787916e-06, 'wall_clock_time': 60.18111515045166, 'metric_for_logging': {'pred_time': 6.200178809787916e-06}, 'val_loss': 0.07692307692307687, 'trained_estimator': }\n", + "[flaml.tune.tune: 04-19 01:11:19] {721} INFO - Brief result: {'pred_time': 5.788129308949346e-06, 'wall_clock_time': 60.19044256210327, 'metric_for_logging': {'pred_time': 5.788129308949346e-06}, 'val_loss': 0.057075145650241965, 'trained_estimator': }\n", + "[flaml.automl.logger: 04-19 01:11:19] {2485} INFO - selected model: None\n", + "[flaml.automl.logger: 04-19 01:11:19] {2619} INFO - retrain lgbm for 0.2s\n", + "[flaml.automl.logger: 04-19 01:11:19] {2622} INFO - retrained model: LGBMClassifier(colsample_bytree=0.9633671819625609,\n", + " learning_rate=0.27021587856943113, max_bin=255,\n", + " min_child_samples=21, n_estimators=4, num_leaves=9,\n", + " reg_alpha=0.014098641144674361, reg_lambda=1.5196347818125986,\n", + " verbose=-1)\n", + "[flaml.automl.logger: 04-19 01:11:19] {1930} INFO - fit succeeded\n", + "[flaml.automl.logger: 04-19 01:11:19] {1931} INFO - Time taken to find the best model: 32.00390648841858\n" + ] + } + ], + "source": [ + "'''The main flaml automl API'''\n", + "automl.fit(dataframe=pandas_df, label='Bankrupt?', **settings)" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T01:11:22.1516753Z", + "execution_start_time": "2023-04-19T01:11:21.8482489Z", + "livy_statement_state": "available", + "parent_msg_id": "4bf310f1-9866-44cd-be3f-fb17edf35376", + "queued_time": "2023-04-19T01:10:16.9197277Z", + 
"session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 47 + }, + "text/plain": [ + "StatementMeta(automl, 27, 47, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Best hyperparameter config: {'n_estimators': 4, 'num_leaves': 9, 'min_child_samples': 21, 'learning_rate': 0.27021587856943113, 'log_max_bin': 8, 'colsample_bytree': 0.9633671819625609, 'reg_alpha': 0.014098641144674361, 'reg_lambda': 1.5196347818125986}\n", + "Best roc_auc on validation data: 0.9557\n", + "Training duration of best run: 0.1563 s\n" + ] + } + ], + "source": [ + "''' retrieve best config'''\n", + "print('Best hyperparameter config:', automl.best_config)\n", + "print('Best roc_auc on validation data: {0:.4g}'.format(1-automl.best_loss))\n", + "print('Training duration of best run: {0:.4g} s'.format(automl.best_config_train_time))" + ] + }, + { + "cell_type": "code", + "execution_count": 90, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-19T01:44:54.3605657Z", + "execution_start_time": "2023-04-19T01:44:42.6184902Z", + "livy_statement_state": "available", + "parent_msg_id": "bc4bd38f-ea2a-4a16-baad-c0a18c4e4e31", + "queued_time": "2023-04-19T01:44:42.3928483Z", + "session_id": "27", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 96 + }, + "text/plain": [ + "StatementMeta(automl, 27, 96, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + 
"+---------------+--------------------+------------------+---------+------------------+------------------+\n", + "|evaluation_type| confusion_matrix| accuracy|precision| recall| AUC|\n", + "+---------------+--------------------+------------------+---------+------------------+------------------+\n", + "| Classification|1266.0 7.0 \\n37...|0.9665907365223994| 0.5|0.1590909090909091|0.5767960437049204|\n", + "+---------------+--------------------+------------------+---------+------------------+------------------+\n", + "\n" + ] + } + ], + "source": [ + "# predict function for non-spark models\n", + "def predict_pandas(automl, test_raw):\n", + " from synapse.ml.train import ComputeModelStatistics\n", + " import pandas as pd\n", + " pandas_test = test_raw.toPandas()\n", + " predictions = automl.predict(pandas_test.iloc[:,1:]).astype('float')\n", + " predictions = pd.DataFrame({\"Bankrupt?\":pandas_test.iloc[:,0], \"prediction\": predictions.tolist()})\n", + " predictions = spark.createDataFrame(predictions)\n", + " \n", + " metrics = ComputeModelStatistics(\n", + " evaluationMetric=\"classification\",\n", + " labelCol=\"Bankrupt?\",\n", + " scoredLabelsCol=\"prediction\",\n", + " ).transform(predictions)\n", + " return metrics\n", + "\n", + "automl_metrics = predict_pandas(automl, test_raw)\n", + "automl_metrics.show()" + ] + } + ], + "metadata": { + "description": null, + "kernelspec": { + "display_name": "Synapse PySpark", + "name": "synapse_pyspark" + }, + "language_info": { + "name": "python" + }, + "save_output": true + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/notebook/automl_flight_delays.ipynb b/notebook/automl_flight_delays.ipynb new file mode 100644 index 00000000000..05b5222d06c --- /dev/null +++ b/notebook/automl_flight_delays.ipynb @@ -0,0 +1,2443 @@ +{ + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "# AutoML with FLAML Library\n", + "\n", + 
"\n", + "| | | | |\n", + "|-----|--------|--------|--------|\n", + "| \"drawing\" \n", + "\n", + "\n", + "\n", + "### Goal\n", + "In this notebook, we demonstrate how to use AutoML with FLAML to find the best model for our dataset.\n", + "\n", + "\n", + "## 1. Introduction\n", + "\n", + "FLAML is a Python library (https://github.com/microsoft/FLAML) designed to automatically produce accurate machine learning models \n", + "with low computational cost. It is fast and economical. The simple and lightweight design makes it easy to use and extend, such as adding new learners. FLAML can \n", + "- serve as an economical AutoML engine,\n", + "- be used as a fast hyperparameter tuning tool, or \n", + "- be embedded in self-tuning software that requires low latency & resources in repetitive\n", + " tuning tasks.\n", + "\n", + "In this notebook, we use a real data example (binary classification) to showcase how to use the FLAML library.\n", + "\n", + "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the `notebook` option:\n", + "```bash\n", + "pip install flaml[notebook]==1.1.3\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": 39, + "metadata": { + "jupyter": { + "outputs_hidden": true + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:11:05.782522Z", + "execution_start_time": "2023-04-09T03:11:05.7822033Z", + "livy_statement_state": "available", + "parent_msg_id": "18b2ee64-09c4-4ceb-8975-e4ed43d7c41a", + "queued_time": "2023-04-09T03:10:33.571519Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "finished", + "statement_id": -1 + }, + "text/plain": [ + "StatementMeta(, 7, -1, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": {}, + "execution_count": 39, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + 
"output_type": "stream", + "text": [ + "Collecting flaml[synapse]==1.1.3\n", + " Using cached FLAML-1.1.3-py3-none-any.whl (224 kB)\n", + "Collecting xgboost==1.6.1\n", + " Using cached xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl (192.9 MB)\n", + "Collecting pandas==1.5.1\n", + " Using cached pandas-1.5.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)\n", + "Collecting numpy==1.23.4\n", + " Using cached numpy-1.23.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB)\n", + "Collecting openml\n", + " Using cached openml-0.13.1-py3-none-any.whl\n", + "Collecting scipy>=1.4.1\n", + " Using cached scipy-1.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.5 MB)\n", + "Collecting scikit-learn>=0.24\n", + " Using cached scikit_learn-1.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (9.8 MB)\n", + "Collecting lightgbm>=2.3.1\n", + " Using cached lightgbm-3.3.5-py3-none-manylinux1_x86_64.whl (2.0 MB)\n", + "Collecting pyspark>=3.0.0\n", + " Using cached pyspark-3.3.2-py2.py3-none-any.whl\n", + "Collecting optuna==2.8.0\n", + " Using cached optuna-2.8.0-py3-none-any.whl (301 kB)\n", + "Collecting joblibspark>=0.5.0\n", + " Using cached joblibspark-0.5.1-py3-none-any.whl (15 kB)\n", + "Collecting python-dateutil>=2.8.1\n", + " Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)\n", + "Collecting pytz>=2020.1\n", + " Using cached pytz-2023.3-py2.py3-none-any.whl (502 kB)\n", + "Collecting cliff\n", + " Using cached cliff-4.2.0-py3-none-any.whl (81 kB)\n", + "Collecting packaging>=20.0\n", + " Using cached packaging-23.0-py3-none-any.whl (42 kB)\n", + "Collecting cmaes>=0.8.2\n", + " Using cached cmaes-0.9.1-py3-none-any.whl (21 kB)\n", + "Collecting sqlalchemy>=1.1.0\n", + " Using cached SQLAlchemy-2.0.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.8 MB)\n", + "Collecting tqdm\n", + " Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)\n", + "Collecting alembic\n", + " Using cached 
alembic-1.10.3-py3-none-any.whl (212 kB)\n", + "Collecting colorlog\n", + " Using cached colorlog-6.7.0-py2.py3-none-any.whl (11 kB)\n", + "Collecting xmltodict\n", + " Using cached xmltodict-0.13.0-py2.py3-none-any.whl (10.0 kB)\n", + "Collecting requests\n", + " Using cached requests-2.28.2-py3-none-any.whl (62 kB)\n", + "Collecting minio\n", + " Using cached minio-7.1.14-py3-none-any.whl (77 kB)\n", + "Collecting liac-arff>=2.4.0\n", + " Using cached liac_arff-2.5.0-py3-none-any.whl\n", + "Collecting pyarrow\n", + " Using cached pyarrow-11.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (35.0 MB)\n", + "Collecting joblib>=0.14\n", + " Using cached joblib-1.2.0-py3-none-any.whl (297 kB)\n", + "Collecting wheel\n", + " Using cached wheel-0.40.0-py3-none-any.whl (64 kB)\n", + "Collecting py4j==0.10.9.5\n", + " Using cached py4j-0.10.9.5-py2.py3-none-any.whl (199 kB)\n", + "Collecting six>=1.5\n", + " Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)\n", + "Collecting threadpoolctl>=2.0.0\n", + " Using cached threadpoolctl-3.1.0-py3-none-any.whl (14 kB)\n", + "Collecting urllib3\n", + " Using cached urllib3-1.26.15-py2.py3-none-any.whl (140 kB)\n", + "Collecting certifi\n", + " Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)\n", + "Collecting idna<4,>=2.5\n", + " Using cached idna-3.4-py3-none-any.whl (61 kB)\n", + "Collecting charset-normalizer<4,>=2\n", + " Using cached charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (195 kB)\n", + "Collecting typing-extensions>=4.2.0\n", + " Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)\n", + "Collecting greenlet!=0.4.17\n", + " Using cached greenlet-2.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (618 kB)\n", + "Collecting importlib-metadata\n", + " Using cached importlib_metadata-6.2.0-py3-none-any.whl (21 kB)\n", + "Collecting importlib-resources\n", + " Using cached importlib_resources-5.12.0-py3-none-any.whl (36 kB)\n", + "Collecting 
Mako\n", + " Using cached Mako-1.2.4-py3-none-any.whl (78 kB)\n", + "Collecting autopage>=0.4.0\n", + " Using cached autopage-0.5.1-py3-none-any.whl (29 kB)\n", + "Collecting cmd2>=1.0.0\n", + " Using cached cmd2-2.4.3-py3-none-any.whl (147 kB)\n", + "Collecting stevedore>=2.0.1\n", + " Using cached stevedore-5.0.0-py3-none-any.whl (49 kB)\n", + "Collecting PrettyTable>=0.7.2\n", + " Using cached prettytable-3.6.0-py3-none-any.whl (27 kB)\n", + "Collecting PyYAML>=3.12\n", + " Using cached PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (701 kB)\n", + "Collecting attrs>=16.3.0\n", + " Using cached attrs-22.2.0-py3-none-any.whl (60 kB)\n", + "Collecting pyperclip>=1.6\n", + " Using cached pyperclip-1.8.2-py3-none-any.whl\n", + "Collecting wcwidth>=0.1.7\n", + " Using cached wcwidth-0.2.6-py2.py3-none-any.whl (29 kB)\n", + "Collecting zipp>=0.5\n", + " Using cached zipp-3.15.0-py3-none-any.whl (6.8 kB)\n", + "Collecting pbr!=2.1.0,>=2.0.0\n", + " Using cached pbr-5.11.1-py2.py3-none-any.whl (112 kB)\n", + "Collecting MarkupSafe>=0.9.2\n", + " Using cached MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)\n", + "Installing collected packages: wcwidth, pytz, pyperclip, py4j, zipp, xmltodict, wheel, urllib3, typing-extensions, tqdm, threadpoolctl, six, PyYAML, pyspark, PrettyTable, pbr, packaging, numpy, MarkupSafe, liac-arff, joblib, idna, greenlet, colorlog, charset-normalizer, certifi, autopage, attrs, stevedore, sqlalchemy, scipy, requests, python-dateutil, pyarrow, minio, Mako, joblibspark, importlib-resources, importlib-metadata, cmd2, cmaes, xgboost, scikit-learn, pandas, cliff, alembic, optuna, openml, lightgbm, flaml\n", + " Attempting uninstall: wcwidth\n", + " Found existing installation: wcwidth 0.2.6\n", + " Uninstalling wcwidth-0.2.6:\n", + " Successfully uninstalled wcwidth-0.2.6\n", + " Attempting uninstall: pytz\n", + " Found existing installation: pytz 2023.3\n", 
+ " Uninstalling pytz-2023.3:\n", + " Successfully uninstalled pytz-2023.3\n", + " Attempting uninstall: pyperclip\n", + " Found existing installation: pyperclip 1.8.2\n", + " Uninstalling pyperclip-1.8.2:\n", + " Successfully uninstalled pyperclip-1.8.2\n", + " Attempting uninstall: py4j\n", + " Found existing installation: py4j 0.10.9.5\n", + " Uninstalling py4j-0.10.9.5:\n", + " Successfully uninstalled py4j-0.10.9.5\n", + " Attempting uninstall: zipp\n", + " Found existing installation: zipp 3.15.0\n", + " Uninstalling zipp-3.15.0:\n", + " Successfully uninstalled zipp-3.15.0\n", + " Attempting uninstall: xmltodict\n", + " Found existing installation: xmltodict 0.13.0\n", + " Uninstalling xmltodict-0.13.0:\n", + " Successfully uninstalled xmltodict-0.13.0\n", + " Attempting uninstall: wheel\n", + " Found existing installation: wheel 0.40.0\n", + " Uninstalling wheel-0.40.0:\n", + " Successfully uninstalled wheel-0.40.0\n", + " Attempting uninstall: urllib3\n", + " Found existing installation: urllib3 1.26.15\n", + " Uninstalling urllib3-1.26.15:\n", + " Successfully uninstalled urllib3-1.26.15\n", + " Attempting uninstall: typing-extensions\n", + " Found existing installation: typing_extensions 4.5.0\n", + " Uninstalling typing_extensions-4.5.0:\n", + " Successfully uninstalled typing_extensions-4.5.0\n", + " Attempting uninstall: tqdm\n", + " Found existing installation: tqdm 4.65.0\n", + " Uninstalling tqdm-4.65.0:\n", + " Successfully uninstalled tqdm-4.65.0\n", + " Attempting uninstall: threadpoolctl\n", + " Found existing installation: threadpoolctl 3.1.0\n", + " Uninstalling threadpoolctl-3.1.0:\n", + " Successfully uninstalled threadpoolctl-3.1.0\n", + " Attempting uninstall: six\n", + " Found existing installation: six 1.16.0\n", + " Uninstalling six-1.16.0:\n", + " Successfully uninstalled six-1.16.0\n", + " Attempting uninstall: PyYAML\n", + " Found existing installation: PyYAML 6.0\n", + " Uninstalling PyYAML-6.0:\n", + " Successfully uninstalled 
PyYAML-6.0\n", + " Attempting uninstall: pyspark\n", + " Found existing installation: pyspark 3.3.2\n", + " Uninstalling pyspark-3.3.2:\n", + " Successfully uninstalled pyspark-3.3.2\n", + " Attempting uninstall: PrettyTable\n", + " Found existing installation: prettytable 3.6.0\n", + " Uninstalling prettytable-3.6.0:\n", + " Successfully uninstalled prettytable-3.6.0\n", + " Attempting uninstall: pbr\n", + " Found existing installation: pbr 5.11.1\n", + " Uninstalling pbr-5.11.1:\n", + " Successfully uninstalled pbr-5.11.1\n", + " Attempting uninstall: packaging\n", + " Found existing installation: packaging 23.0\n", + " Uninstalling packaging-23.0:\n", + " Successfully uninstalled packaging-23.0\n", + " Attempting uninstall: numpy\n", + " Found existing installation: numpy 1.23.4\n", + " Uninstalling numpy-1.23.4:\n", + " Successfully uninstalled numpy-1.23.4\n", + " Attempting uninstall: MarkupSafe\n", + " Found existing installation: MarkupSafe 2.1.2\n", + " Uninstalling MarkupSafe-2.1.2:\n", + " Successfully uninstalled MarkupSafe-2.1.2\n", + " Attempting uninstall: liac-arff\n", + " Found existing installation: liac-arff 2.5.0\n", + " Uninstalling liac-arff-2.5.0:\n", + " Successfully uninstalled liac-arff-2.5.0\n", + " Attempting uninstall: joblib\n", + " Found existing installation: joblib 1.2.0\n", + " Uninstalling joblib-1.2.0:\n", + " Successfully uninstalled joblib-1.2.0\n", + " Attempting uninstall: idna\n", + " Found existing installation: idna 3.4\n", + " Uninstalling idna-3.4:\n", + " Successfully uninstalled idna-3.4\n", + " Attempting uninstall: greenlet\n", + " Found existing installation: greenlet 2.0.2\n", + " Uninstalling greenlet-2.0.2:\n", + " Successfully uninstalled greenlet-2.0.2\n", + " Attempting uninstall: colorlog\n", + " Found existing installation: colorlog 6.7.0\n", + " Uninstalling colorlog-6.7.0:\n", + " Successfully uninstalled colorlog-6.7.0\n", + " Attempting uninstall: charset-normalizer\n", + " Found existing installation: 
charset-normalizer 3.1.0\n", + " Uninstalling charset-normalizer-3.1.0:\n", + " Successfully uninstalled charset-normalizer-3.1.0\n", + " Attempting uninstall: certifi\n", + " Found existing installation: certifi 2022.12.7\n", + " Uninstalling certifi-2022.12.7:\n", + " Successfully uninstalled certifi-2022.12.7\n", + " Attempting uninstall: autopage\n", + " Found existing installation: autopage 0.5.1\n", + " Uninstalling autopage-0.5.1:\n", + " Successfully uninstalled autopage-0.5.1\n", + " Attempting uninstall: attrs\n", + " Found existing installation: attrs 22.2.0\n", + " Uninstalling attrs-22.2.0:\n", + " Successfully uninstalled attrs-22.2.0\n", + " Attempting uninstall: stevedore\n", + " Found existing installation: stevedore 5.0.0\n", + " Uninstalling stevedore-5.0.0:\n", + " Successfully uninstalled stevedore-5.0.0\n", + " Attempting uninstall: sqlalchemy\n", + " Found existing installation: SQLAlchemy 2.0.9\n", + " Uninstalling SQLAlchemy-2.0.9:\n", + " Successfully uninstalled SQLAlchemy-2.0.9\n", + " Attempting uninstall: scipy\n", + " Found existing installation: scipy 1.10.1\n", + " Uninstalling scipy-1.10.1:\n", + " Successfully uninstalled scipy-1.10.1\n", + " Attempting uninstall: requests\n", + " Found existing installation: requests 2.28.2\n", + " Uninstalling requests-2.28.2:\n", + " Successfully uninstalled requests-2.28.2\n", + " Attempting uninstall: python-dateutil\n", + " Found existing installation: python-dateutil 2.8.2\n", + " Uninstalling python-dateutil-2.8.2:\n", + " Successfully uninstalled python-dateutil-2.8.2\n", + " Attempting uninstall: pyarrow\n", + " Found existing installation: pyarrow 11.0.0\n", + " Uninstalling pyarrow-11.0.0:\n", + " Successfully uninstalled pyarrow-11.0.0\n", + " Attempting uninstall: minio\n", + " Found existing installation: minio 7.1.14\n", + " Uninstalling minio-7.1.14:\n", + " Successfully uninstalled minio-7.1.14\n", + " Attempting uninstall: Mako\n", + " Found existing installation: Mako 1.2.4\n", 
+ " Uninstalling Mako-1.2.4:\n", + " Successfully uninstalled Mako-1.2.4\n", + " Attempting uninstall: joblibspark\n", + " Found existing installation: joblibspark 0.5.1\n", + " Uninstalling joblibspark-0.5.1:\n", + " Successfully uninstalled joblibspark-0.5.1\n", + " Attempting uninstall: importlib-resources\n", + " Found existing installation: importlib-resources 5.12.0\n", + " Uninstalling importlib-resources-5.12.0:\n", + " Successfully uninstalled importlib-resources-5.12.0\n", + " Attempting uninstall: importlib-metadata\n", + " Found existing installation: importlib-metadata 6.2.0\n", + " Uninstalling importlib-metadata-6.2.0:\n", + " Successfully uninstalled importlib-metadata-6.2.0\n", + " Attempting uninstall: cmd2\n", + " Found existing installation: cmd2 2.4.3\n", + " Uninstalling cmd2-2.4.3:\n", + " Successfully uninstalled cmd2-2.4.3\n", + " Attempting uninstall: cmaes\n", + " Found existing installation: cmaes 0.9.1\n", + " Uninstalling cmaes-0.9.1:\n", + " Successfully uninstalled cmaes-0.9.1\n", + " Attempting uninstall: xgboost\n", + " Found existing installation: xgboost 1.6.1\n", + " Uninstalling xgboost-1.6.1:\n", + " Successfully uninstalled xgboost-1.6.1\n", + " Attempting uninstall: scikit-learn\n", + " Found existing installation: scikit-learn 1.2.2\n", + " Uninstalling scikit-learn-1.2.2:\n", + " Successfully uninstalled scikit-learn-1.2.2\n", + " Attempting uninstall: pandas\n", + " Found existing installation: pandas 1.5.1\n", + " Uninstalling pandas-1.5.1:\n", + " Successfully uninstalled pandas-1.5.1\n", + " Attempting uninstall: cliff\n", + " Found existing installation: cliff 4.2.0\n", + " Uninstalling cliff-4.2.0:\n", + " Successfully uninstalled cliff-4.2.0\n", + " Attempting uninstall: alembic\n", + " Found existing installation: alembic 1.10.3\n", + " Uninstalling alembic-1.10.3:\n", + " Successfully uninstalled alembic-1.10.3\n", + " Attempting uninstall: optuna\n", + " Found existing installation: optuna 2.8.0\n", + " 
Uninstalling optuna-2.8.0:\n", + " Successfully uninstalled optuna-2.8.0\n", + " Attempting uninstall: openml\n", + " Found existing installation: openml 0.13.1\n", + " Uninstalling openml-0.13.1:\n", + " Successfully uninstalled openml-0.13.1\n", + " Attempting uninstall: lightgbm\n", + " Found existing installation: lightgbm 3.3.5\n", + " Uninstalling lightgbm-3.3.5:\n", + " Successfully uninstalled lightgbm-3.3.5\n", + " Attempting uninstall: flaml\n", + " Found existing installation: FLAML 1.1.3\n", + " Uninstalling FLAML-1.1.3:\n", + " Successfully uninstalled FLAML-1.1.3\n", + "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n", + "virtualenv 20.14.0 requires platformdirs<3,>=2, but you have platformdirs 3.2.0 which is incompatible.\n", + "tensorflow 2.4.1 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.\n", + "tensorflow 2.4.1 requires typing-extensions~=3.7.4, but you have typing-extensions 4.5.0 which is incompatible.\n", + "pmdarima 1.8.2 requires numpy~=1.19.0, but you have numpy 1.23.4 which is incompatible.\n", + "koalas 1.8.0 requires numpy<1.20.0,>=1.14, but you have numpy 1.23.4 which is incompatible.\n", + "gevent 21.1.2 requires greenlet<2.0,>=0.4.17; platform_python_implementation == \"CPython\", but you have greenlet 2.0.2 which is incompatible.\n", + "azureml-dataset-runtime 1.34.0 requires pyarrow<4.0.0,>=0.17.0, but you have pyarrow 11.0.0 which is incompatible.\n", + "azureml-core 1.34.0 requires urllib3<=1.26.6,>=1.23, but you have urllib3 1.26.15 which is incompatible.\u001b[0m\u001b[31m\n", + "\u001b[0mSuccessfully installed Mako-1.2.4 MarkupSafe-2.1.2 PrettyTable-3.6.0 PyYAML-6.0 alembic-1.10.3 attrs-22.2.0 autopage-0.5.1 certifi-2022.12.7 charset-normalizer-3.1.0 cliff-4.2.0 cmaes-0.9.1 cmd2-2.4.3 colorlog-6.7.0 flaml-1.1.3 greenlet-2.0.2 idna-3.4 importlib-metadata-6.2.0 
importlib-resources-5.12.0 joblib-1.2.0 joblibspark-0.5.1 liac-arff-2.5.0 lightgbm-3.3.5 minio-7.1.14 numpy-1.23.4 openml-0.13.1 optuna-2.8.0 packaging-23.0 pandas-1.5.1 pbr-5.11.1 py4j-0.10.9.5 pyarrow-11.0.0 pyperclip-1.8.2 pyspark-3.3.2 python-dateutil-2.8.2 pytz-2023.3 requests-2.28.2 scikit-learn-1.2.2 scipy-1.10.1 six-1.16.0 sqlalchemy-2.0.9 stevedore-5.0.0 threadpoolctl-3.1.0 tqdm-4.65.0 typing-extensions-4.5.0 urllib3-1.26.15 wcwidth-0.2.6 wheel-0.40.0 xgboost-1.6.1 xmltodict-0.13.0 zipp-3.15.0\n", + "\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.0.1 is available.\n", + "You should consider upgrading via the '/nfs4/pyenv-bfada21f-d1ed-44b9-a41d-4ff480d237e7/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n", + "\u001b[0mNote: you may need to restart the kernel to use updated packages.\n" + ] + }, + { + "data": {}, + "execution_count": 39, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Warning: PySpark kernel has been restarted to use updated packages.\n", + "\n" + ] + } + ], + "source": [ + "%pip install flaml[synapse]==1.1.3 xgboost==1.6.1 pandas==1.5.1 numpy==1.23.4 openml --force-reinstall" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "## 2. Classification Example\n", + "### Load data and preprocess\n", + "\n", + "Download the [Airlines dataset](https://www.openml.org/d/1169) from OpenML. The task is to predict whether a given flight will be delayed, given information about its scheduled departure."
+ ] + }, + { + "cell_type": "code", + "execution_count": 41, + "metadata": { + "jupyter": { + "outputs_hidden": true + }, + "slideshow": { + "slide_type": "subslide" + }, + "tags": [] + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:11:11.6973622Z", + "execution_start_time": "2023-04-09T03:11:09.4074274Z", + "livy_statement_state": "available", + "parent_msg_id": "25ba0152-0936-464b-83eb-afa5f2f517fb", + "queued_time": "2023-04-09T03:10:33.8002088Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 67 + }, + "text/plain": [ + "StatementMeta(automl, 7, 67, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/dask/dataframe/backends.py:187: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.\n", + " _numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)\n", + "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/dask/dataframe/backends.py:187: FutureWarning: pandas.Float64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.\n", + " _numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)\n", + "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/dask/dataframe/backends.py:187: FutureWarning: pandas.UInt64Index is deprecated and will be removed from pandas in a future version. 
Use pandas.Index with the appropriate dtype instead.\n", + " _numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)\n" + ] + } + ], + "source": [ + "from flaml.data import load_openml_dataset\n", + "X_train, X_test, y_train, y_test = load_openml_dataset(dataset_id=1169, data_dir='./')" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:11:12.2518637Z", + "execution_start_time": "2023-04-09T03:11:11.9466307Z", + "livy_statement_state": "available", + "parent_msg_id": "c6f3064c-401e-447b-bd1d-65cd00f48fe1", + "queued_time": "2023-04-09T03:10:33.901764Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 68 + }, + "text/plain": [ + "StatementMeta(automl, 7, 68, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
<table>\n", + " <tr><th></th><th>Airline</th><th>Flight</th><th>AirportFrom</th><th>AirportTo</th><th>DayOfWeek</th><th>Time</th><th>Length</th></tr>\n",
+ " <tr><th>249392</th><td>EV</td><td>5309.0</td><td>MDT</td><td>ATL</td><td>3</td><td>794.0</td><td>131.0</td></tr>\n",
+ " <tr><th>166918</th><td>CO</td><td>1079.0</td><td>IAH</td><td>SAT</td><td>5</td><td>900.0</td><td>60.0</td></tr>\n",
+ " <tr><th>89110</th><td>US</td><td>1636.0</td><td>CLE</td><td>CLT</td><td>1</td><td>530.0</td><td>103.0</td></tr>\n",
+ " <tr><th>70258</th><td>WN</td><td>928.0</td><td>CMH</td><td>LAS</td><td>7</td><td>480.0</td><td>280.0</td></tr>\n",
+ " <tr><th>492985</th><td>WN</td><td>729.0</td><td>GEG</td><td>LAS</td><td>3</td><td>630.0</td><td>140.0</td></tr>\n",
+ "</table>\n", + "
" + ], + "text/plain": [ + " Airline Flight AirportFrom AirportTo DayOfWeek Time Length\n", + "249392 EV 5309.0 MDT ATL 3 794.0 131.0\n", + "166918 CO 1079.0 IAH SAT 5 900.0 60.0\n", + "89110 US 1636.0 CLE CLT 1 530.0 103.0\n", + "70258 WN 928.0 CMH LAS 7 480.0 280.0\n", + "492985 WN 729.0 GEG LAS 3 630.0 140.0" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X_train.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "### Run FLAML\n", + "In the FLAML automl run configuration, users can specify the task type, time budget, error metric, learner list, whether to subsample, resampling strategy type, and so on. All these arguments have default values which will be used if users do not provide them. For example, the default classifiers are `['lgbm', 'xgboost', 'xgb_limitdepth', 'catboost', 'rf', 'extra_tree', 'lrl1']`. " + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:11:12.8001867Z", + "execution_start_time": "2023-04-09T03:11:12.5256701Z", + "livy_statement_state": "available", + "parent_msg_id": "f2fba5ab-4e87-41e8-8a76-b7b7367e6fc6", + "queued_time": "2023-04-09T03:10:34.0855462Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 69 + }, + "text/plain": [ + "StatementMeta(automl, 7, 69, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "''' import AutoML class from flaml package '''\n", + "from flaml import AutoML\n", + "automl = AutoML()" + ] + }, + { + "cell_type": "code", + "execution_count": 44, + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "outputs": [ + { + 
"data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:11:13.391257Z", + "execution_start_time": "2023-04-09T03:11:13.1109201Z", + "livy_statement_state": "available", + "parent_msg_id": "d5e4a7ed-3192-4e43-a7a8-44cf1469e685", + "queued_time": "2023-04-09T03:10:34.3172166Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 70 + }, + "text/plain": [ + "StatementMeta(automl, 7, 70, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "settings = {\n", + " \"time_budget\": 120, # total running time in seconds\n", + " \"metric\": 'accuracy', \n", + " # check the documentation for options of metrics (https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#optimization-metric)\n", + " \"task\": 'classification', # task type\n", + " \"log_file_name\": 'airlines_experiment.log', # flaml log file\n", + " \"seed\": 7654321, # random seed\n", + "}\n" + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "metadata": { + "slideshow": { + "slide_type": "slide" + }, + "tags": [ + "outputPrepend" + ] + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:20.8381216Z", + "execution_start_time": "2023-04-09T03:11:13.647266Z", + "livy_statement_state": "available", + "parent_msg_id": "29dd0ba0-8f0d-428b-acb9-1d8e62f1b157", + "queued_time": "2023-04-09T03:10:34.4667686Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 71 + }, + "text/plain": [ + "StatementMeta(automl, 7, 71, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[flaml.automl.automl: 04-09 03:11:13] {2726} INFO - task = classification\n", + 
"[flaml.automl.automl: 04-09 03:11:13] {2728} INFO - Data split method: stratified\n", + "[flaml.automl.automl: 04-09 03:11:13] {2731} INFO - Evaluation method: holdout\n", + "[flaml.automl.automl: 04-09 03:11:14] {2858} INFO - Minimizing error metric: 1-accuracy\n", + "[flaml.automl.automl: 04-09 03:11:14] {3004} INFO - List of ML learners in AutoML Run: ['lgbm', 'rf', 'xgboost', 'extra_tree', 'xgb_limitdepth', 'lrl1']\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 0, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:14] {3472} INFO - Estimated sufficient time budget=17413s. Estimated necessary time budget=401s.\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 0.5s,\testimator lgbm's best error=0.3777,\tbest estimator lgbm's best error=0.3777\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 1, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 0.5s,\testimator lgbm's best error=0.3777,\tbest estimator lgbm's best error=0.3777\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 2, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 0.5s,\testimator lgbm's best error=0.3614,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 3, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 0.6s,\testimator lgbm's best error=0.3614,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 4, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 0.6s,\testimator lgbm's best error=0.3614,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 5, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 1.0s,\testimator xgboost's best error=0.3787,\tbest estimator lgbm's best 
error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 6, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 1.0s,\testimator lgbm's best error=0.3614,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 7, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 1.2s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 8, current learner rf\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 1.2s,\testimator rf's best error=0.3816,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 9, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 1.3s,\testimator lgbm's best error=0.3614,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 10, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:14] {3519} INFO - at 1.3s,\testimator lgbm's best error=0.3614,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:14] {3334} INFO - iteration 11, current learner rf\n", + "[flaml.automl.automl: 04-09 03:11:15] {3519} INFO - at 1.5s,\testimator rf's best error=0.3791,\tbest estimator lgbm's best error=0.3614\n", + "[flaml.automl.automl: 04-09 03:11:15] {3334} INFO - iteration 12, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:15] {3519} INFO - at 1.6s,\testimator lgbm's best error=0.3550,\tbest estimator lgbm's best error=0.3550\n", + "[flaml.automl.automl: 04-09 03:11:15] {3334} INFO - iteration 13, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:11:15] {3519} INFO - at 1.7s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3550\n", + "[flaml.automl.automl: 04-09 03:11:15] {3334} 
INFO - iteration 14, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:11:15] {3519} INFO - at 1.8s,\testimator xgboost's best error=0.3746,\tbest estimator lgbm's best error=0.3550\n", + "[flaml.automl.automl: 04-09 03:11:15] {3334} INFO - iteration 15, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:11:15] {3519} INFO - at 1.9s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3550\n", + "[flaml.automl.automl: 04-09 03:11:15] {3334} INFO - iteration 16, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:15] {3519} INFO - at 1.9s,\testimator lgbm's best error=0.3550,\tbest estimator lgbm's best error=0.3550\n", + "[flaml.automl.automl: 04-09 03:11:15] {3334} INFO - iteration 17, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:11:15] {3519} INFO - at 2.2s,\testimator xgboost's best error=0.3699,\tbest estimator lgbm's best error=0.3550\n", + "[flaml.automl.automl: 04-09 03:11:15] {3334} INFO - iteration 18, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:15] {3519} INFO - at 2.4s,\testimator lgbm's best error=0.3545,\tbest estimator lgbm's best error=0.3545\n", + "[flaml.automl.automl: 04-09 03:11:15] {3334} INFO - iteration 19, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:16] {3519} INFO - at 2.5s,\testimator lgbm's best error=0.3545,\tbest estimator lgbm's best error=0.3545\n", + "[flaml.automl.automl: 04-09 03:11:16] {3334} INFO - iteration 20, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:16] {3519} INFO - at 2.9s,\testimator lgbm's best error=0.3545,\tbest estimator lgbm's best error=0.3545\n", + "[flaml.automl.automl: 04-09 03:11:16] {3334} INFO - iteration 21, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:16] {3519} INFO - at 3.0s,\testimator lgbm's best error=0.3536,\tbest estimator lgbm's best error=0.3536\n", + "[flaml.automl.automl: 04-09 03:11:16] {3334} INFO - iteration 22, current learner lgbm\n", + 
"[flaml.automl.automl: 04-09 03:11:16] {3519} INFO - at 3.1s,\testimator lgbm's best error=0.3536,\tbest estimator lgbm's best error=0.3536\n", + "[flaml.automl.automl: 04-09 03:11:16] {3334} INFO - iteration 23, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:17] {3519} INFO - at 3.4s,\testimator lgbm's best error=0.3536,\tbest estimator lgbm's best error=0.3536\n", + "[flaml.automl.automl: 04-09 03:11:17] {3334} INFO - iteration 24, current learner rf\n", + "[flaml.automl.automl: 04-09 03:11:17] {3519} INFO - at 3.6s,\testimator rf's best error=0.3791,\tbest estimator lgbm's best error=0.3536\n", + "[flaml.automl.automl: 04-09 03:11:17] {3334} INFO - iteration 25, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:11:17] {3519} INFO - at 3.9s,\testimator xgboost's best error=0.3596,\tbest estimator lgbm's best error=0.3536\n", + "[flaml.automl.automl: 04-09 03:11:17] {3334} INFO - iteration 26, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:17] {3519} INFO - at 4.3s,\testimator lgbm's best error=0.3528,\tbest estimator lgbm's best error=0.3528\n", + "[flaml.automl.automl: 04-09 03:11:17] {3334} INFO - iteration 27, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:11:18] {3519} INFO - at 4.6s,\testimator xgboost's best error=0.3596,\tbest estimator lgbm's best error=0.3528\n", + "[flaml.automl.automl: 04-09 03:11:18] {3334} INFO - iteration 28, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:11:18] {3519} INFO - at 4.7s,\testimator xgboost's best error=0.3596,\tbest estimator lgbm's best error=0.3528\n", + "[flaml.automl.automl: 04-09 03:11:18] {3334} INFO - iteration 29, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:11:18] {3519} INFO - at 5.3s,\testimator xgboost's best error=0.3586,\tbest estimator lgbm's best error=0.3528\n", + "[flaml.automl.automl: 04-09 03:11:18] {3334} INFO - iteration 30, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:20] {3519} INFO - at 
6.5s,\testimator lgbm's best error=0.3405,\tbest estimator lgbm's best error=0.3405\n", + "[flaml.automl.automl: 04-09 03:11:20] {3334} INFO - iteration 31, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:20] {3519} INFO - at 6.9s,\testimator lgbm's best error=0.3405,\tbest estimator lgbm's best error=0.3405\n", + "[flaml.automl.automl: 04-09 03:11:20] {3334} INFO - iteration 32, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:21] {3519} INFO - at 8.1s,\testimator lgbm's best error=0.3370,\tbest estimator lgbm's best error=0.3370\n", + "[flaml.automl.automl: 04-09 03:11:21] {3334} INFO - iteration 33, current learner rf\n", + "[flaml.automl.automl: 04-09 03:11:21] {3519} INFO - at 8.2s,\testimator rf's best error=0.3791,\tbest estimator lgbm's best error=0.3370\n", + "[flaml.automl.automl: 04-09 03:11:21] {3334} INFO - iteration 34, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:23] {3519} INFO - at 9.5s,\testimator lgbm's best error=0.3370,\tbest estimator lgbm's best error=0.3370\n", + "[flaml.automl.automl: 04-09 03:11:23] {3334} INFO - iteration 35, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:24] {3519} INFO - at 10.5s,\testimator lgbm's best error=0.3370,\tbest estimator lgbm's best error=0.3370\n", + "[flaml.automl.automl: 04-09 03:11:24] {3334} INFO - iteration 36, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:11:24] {3519} INFO - at 11.0s,\testimator xgboost's best error=0.3577,\tbest estimator lgbm's best error=0.3370\n", + "[flaml.automl.automl: 04-09 03:11:24] {3334} INFO - iteration 37, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:25] {3519} INFO - at 12.4s,\testimator lgbm's best error=0.3318,\tbest estimator lgbm's best error=0.3318\n", + "[flaml.automl.automl: 04-09 03:11:25] {3334} INFO - iteration 38, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:11:26] {3519} INFO - at 12.6s,\testimator xgb_limitdepth's best error=0.3630,\tbest 
estimator lgbm's best error=0.3318\n", + "[flaml.automl.automl: 04-09 03:11:26] {3334} INFO - iteration 39, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:11:26] {3519} INFO - at 12.7s,\testimator xgb_limitdepth's best error=0.3630,\tbest estimator lgbm's best error=0.3318\n", + "[flaml.automl.automl: 04-09 03:11:26] {3334} INFO - iteration 40, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:11:26] {3519} INFO - at 13.1s,\testimator xgb_limitdepth's best error=0.3630,\tbest estimator lgbm's best error=0.3318\n", + "[flaml.automl.automl: 04-09 03:11:26] {3334} INFO - iteration 41, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:11:26] {3519} INFO - at 13.3s,\testimator xgb_limitdepth's best error=0.3630,\tbest estimator lgbm's best error=0.3318\n", + "[flaml.automl.automl: 04-09 03:11:26] {3334} INFO - iteration 42, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:11:26] {3519} INFO - at 13.4s,\testimator xgb_limitdepth's best error=0.3630,\tbest estimator lgbm's best error=0.3318\n", + "[flaml.automl.automl: 04-09 03:11:26] {3334} INFO - iteration 43, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:28] {3519} INFO - at 14.8s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:28] {3334} INFO - iteration 44, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:11:28] {3519} INFO - at 15.1s,\testimator xgb_limitdepth's best error=0.3630,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:28] {3334} INFO - iteration 45, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:11:28] {3519} INFO - at 15.2s,\testimator xgb_limitdepth's best error=0.3623,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:28] {3334} INFO - iteration 46, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:30] {3519} INFO - at 
16.6s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:30] {3334} INFO - iteration 47, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:31] {3519} INFO - at 18.0s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:31] {3334} INFO - iteration 48, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:33] {3519} INFO - at 20.3s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:33] {3334} INFO - iteration 49, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:35] {3519} INFO - at 22.2s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:35] {3334} INFO - iteration 50, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:37] {3519} INFO - at 23.6s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:37] {3334} INFO - iteration 51, current learner lrl1\n", + "No low-cost partial config given to the search algorithm. For cost-frugal search, consider providing low-cost values for cost-related hps via 'low_cost_partial_config'. 
More info can be found at https://microsoft.github.io/FLAML/docs/FAQ#about-low_cost_partial_config-in-tune\n", + "[flaml.automl.automl: 04-09 03:11:37] {3519} INFO - at 23.8s,\testimator lrl1's best error=0.4339,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:37] {3334} INFO - iteration 52, current learner lrl1\n", + "[flaml.automl.automl: 04-09 03:11:37] {3519} INFO - at 24.0s,\testimator lrl1's best error=0.4339,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:37] {3334} INFO - iteration 53, current learner lrl1\n", + "[flaml.automl.automl: 04-09 03:11:37] {3519} INFO - at 24.2s,\testimator lrl1's best error=0.4339,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:37] {3334} INFO - iteration 54, current learner lrl1\n", + "[flaml.automl.automl: 04-09 03:11:38] {3519} INFO - at 25.0s,\testimator lrl1's best error=0.4334,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:38] {3334} INFO - iteration 55, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:39] {3519} INFO - at 26.3s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:39] {3334} INFO - iteration 56, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:42] {3519} INFO - at 28.7s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:42] {3334} INFO - iteration 57, current learner rf\n", + "[flaml.automl.automl: 04-09 03:11:42] {3519} INFO - at 28.9s,\testimator rf's best error=0.3789,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:42] {3334} INFO - iteration 58, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:48] {3519} INFO - at 35.0s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:48] {3334} INFO - 
iteration 59, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:11:49] {3519} INFO - at 35.6s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:11:49] {3334} INFO - iteration 60, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:01] {3519} INFO - at 47.9s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:12:01] {3334} INFO - iteration 61, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:01] {3519} INFO - at 48.3s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:12:01] {3334} INFO - iteration 62, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:12:02] {3519} INFO - at 49.1s,\testimator xgboost's best error=0.3561,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:12:02] {3334} INFO - iteration 63, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:04] {3519} INFO - at 51.3s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:12:04] {3334} INFO - iteration 64, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:12:05] {3519} INFO - at 52.0s,\testimator xgboost's best error=0.3561,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:12:05] {3334} INFO - iteration 65, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:12:06] {3519} INFO - at 53.0s,\testimator xgboost's best error=0.3561,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:12:06] {3334} INFO - iteration 66, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:07] {3519} INFO - at 54.2s,\testimator lgbm's best error=0.3282,\tbest estimator lgbm's best error=0.3282\n", + "[flaml.automl.automl: 04-09 03:12:07] {3334} INFO - iteration 67, current learner lgbm\n", + 
"[flaml.automl.automl: 04-09 03:12:09] {3519} INFO - at 55.9s,\testimator lgbm's best error=0.3274,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:09] {3334} INFO - iteration 68, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:12:10] {3519} INFO - at 56.9s,\testimator xgboost's best error=0.3561,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:10] {3334} INFO - iteration 69, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:11] {3519} INFO - at 58.3s,\testimator lgbm's best error=0.3274,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:11] {3334} INFO - iteration 70, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:12] {3519} INFO - at 59.2s,\testimator lgbm's best error=0.3274,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:12] {3334} INFO - iteration 71, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:12] {3519} INFO - at 59.4s,\testimator rf's best error=0.3781,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:12] {3334} INFO - iteration 72, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:13] {3519} INFO - at 59.4s,\testimator rf's best error=0.3781,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:13] {3334} INFO - iteration 73, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:13] {3519} INFO - at 59.5s,\testimator rf's best error=0.3725,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:13] {3334} INFO - iteration 74, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:13] {3519} INFO - at 59.6s,\testimator rf's best error=0.3725,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:13] {3334} INFO - iteration 75, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:13] {3519} INFO - at 59.7s,\testimator rf's best 
error=0.3725,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:13] {3334} INFO - iteration 76, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:13] {3519} INFO - at 59.7s,\testimator rf's best error=0.3706,\tbest estimator lgbm's best error=0.3274\n", + "[flaml.automl.automl: 04-09 03:12:13] {3334} INFO - iteration 77, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:18] {3519} INFO - at 65.4s,\testimator lgbm's best error=0.3268,\tbest estimator lgbm's best error=0.3268\n", + "[flaml.automl.automl: 04-09 03:12:18] {3334} INFO - iteration 78, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:21] {3519} INFO - at 68.1s,\testimator lgbm's best error=0.3268,\tbest estimator lgbm's best error=0.3268\n", + "[flaml.automl.automl: 04-09 03:12:21] {3334} INFO - iteration 79, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:21] {3519} INFO - at 68.3s,\testimator rf's best error=0.3706,\tbest estimator lgbm's best error=0.3268\n", + "[flaml.automl.automl: 04-09 03:12:21] {3334} INFO - iteration 80, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:27] {3519} INFO - at 74.4s,\testimator lgbm's best error=0.3250,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:27] {3334} INFO - iteration 81, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:30] {3519} INFO - at 77.0s,\testimator lgbm's best error=0.3250,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:30] {3334} INFO - iteration 82, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:12:30] {3519} INFO - at 77.2s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:30] {3334} INFO - iteration 83, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:50] {3519} INFO - at 96.7s,\testimator lgbm's best error=0.3250,\tbest estimator lgbm's best error=0.3250\n", + 
"[flaml.automl.automl: 04-09 03:12:50] {3334} INFO - iteration 84, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:50] {3519} INFO - at 96.8s,\testimator rf's best error=0.3706,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:50] {3334} INFO - iteration 85, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:50] {3519} INFO - at 97.0s,\testimator rf's best error=0.3678,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:50] {3334} INFO - iteration 86, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:12:50] {3519} INFO - at 97.3s,\testimator xgboost's best error=0.3561,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:50] {3334} INFO - iteration 87, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:51] {3519} INFO - at 97.4s,\testimator rf's best error=0.3678,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:51] {3334} INFO - iteration 88, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:51] {3519} INFO - at 97.5s,\testimator rf's best error=0.3666,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:51] {3334} INFO - iteration 89, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:51] {3519} INFO - at 97.7s,\testimator rf's best error=0.3645,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:51] {3334} INFO - iteration 90, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:51] {3519} INFO - at 97.8s,\testimator rf's best error=0.3645,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:51] {3334} INFO - iteration 91, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:12:51] {3519} INFO - at 98.2s,\testimator xgboost's best error=0.3561,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:51] {3334} INFO - iteration 92, current learner 
rf\n", + "[flaml.automl.automl: 04-09 03:12:51] {3519} INFO - at 98.3s,\testimator rf's best error=0.3645,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:51] {3334} INFO - iteration 93, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:12:51] {3519} INFO - at 98.3s,\testimator xgb_limitdepth's best error=0.3612,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:51] {3334} INFO - iteration 94, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:12:52] {3519} INFO - at 98.5s,\testimator xgb_limitdepth's best error=0.3612,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:52] {3334} INFO - iteration 95, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:12:52] {3519} INFO - at 98.8s,\testimator xgboost's best error=0.3561,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:52] {3334} INFO - iteration 96, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:12:58] {3519} INFO - at 105.1s,\testimator lgbm's best error=0.3250,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:58] {3334} INFO - iteration 97, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:12:58] {3519} INFO - at 105.3s,\testimator xgb_limitdepth's best error=0.3612,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:58] {3334} INFO - iteration 98, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:59] {3519} INFO - at 105.5s,\testimator rf's best error=0.3560,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:59] {3334} INFO - iteration 99, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:59] {3519} INFO - at 105.7s,\testimator rf's best error=0.3560,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:59] {3334} INFO - iteration 100, current learner rf\n", + 
"[flaml.automl.automl: 04-09 03:12:59] {3519} INFO - at 106.0s,\testimator rf's best error=0.3560,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:59] {3334} INFO - iteration 101, current learner rf\n", + "[flaml.automl.automl: 04-09 03:12:59] {3519} INFO - at 106.3s,\testimator rf's best error=0.3560,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:59] {3334} INFO - iteration 102, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:12:59] {3519} INFO - at 106.4s,\testimator xgb_limitdepth's best error=0.3604,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:12:59] {3334} INFO - iteration 103, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:00] {3519} INFO - at 106.7s,\testimator rf's best error=0.3547,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:00] {3334} INFO - iteration 104, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:06] {3519} INFO - at 113.1s,\testimator lgbm's best error=0.3250,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:06] {3334} INFO - iteration 105, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:07] {3519} INFO - at 113.5s,\testimator xgboost's best error=0.3561,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:07] {3334} INFO - iteration 106, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:09] {3519} INFO - at 116.2s,\testimator lgbm's best error=0.3250,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:09] {3334} INFO - iteration 107, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:10] {3519} INFO - at 116.4s,\testimator xgb_limitdepth's best error=0.3604,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:10] {3334} INFO - iteration 108, current learner xgb_limitdepth\n", + 
"[flaml.automl.automl: 04-09 03:13:10] {3519} INFO - at 116.5s,\testimator xgb_limitdepth's best error=0.3584,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:10] {3334} INFO - iteration 109, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:10] {3519} INFO - at 116.6s,\testimator xgb_limitdepth's best error=0.3584,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:10] {3334} INFO - iteration 110, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:10] {3519} INFO - at 116.8s,\testimator xgb_limitdepth's best error=0.3575,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:10] {3334} INFO - iteration 111, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:10] {3519} INFO - at 116.9s,\testimator xgb_limitdepth's best error=0.3575,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:10] {3334} INFO - iteration 112, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:10] {3519} INFO - at 117.1s,\testimator rf's best error=0.3547,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:10] {3334} INFO - iteration 113, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:10] {3519} INFO - at 117.2s,\testimator xgb_limitdepth's best error=0.3575,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:10] {3334} INFO - iteration 114, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:10] {3519} INFO - at 117.3s,\testimator xgb_limitdepth's best error=0.3575,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:10] {3334} INFO - iteration 115, current learner lrl1\n", + "[flaml.automl.automl: 04-09 03:13:11] {3519} INFO - at 118.0s,\testimator lrl1's best error=0.4334,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:11] 
{3334} INFO - iteration 116, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:11] {3519} INFO - at 118.1s,\testimator rf's best error=0.3547,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:11] {3334} INFO - iteration 117, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:11] {3519} INFO - at 118.3s,\testimator rf's best error=0.3547,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:11] {3334} INFO - iteration 118, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:11] {3519} INFO - at 118.4s,\testimator xgb_limitdepth's best error=0.3575,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:11] {3334} INFO - iteration 119, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:12] {3519} INFO - at 118.5s,\testimator xgb_limitdepth's best error=0.3575,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:12] {3334} INFO - iteration 120, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:12] {3519} INFO - at 118.6s,\testimator rf's best error=0.3547,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:12] {3334} INFO - iteration 121, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:12] {3519} INFO - at 119.2s,\testimator xgb_limitdepth's best error=0.3520,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:12] {3334} INFO - iteration 122, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:13:13] {3519} INFO - at 119.8s,\testimator xgb_limitdepth's best error=0.3481,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:13] {3334} INFO - iteration 123, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:13:13] {3519} INFO - at 119.8s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3250\n", + 
"[flaml.automl.automl: 04-09 03:13:13] {3334} INFO - iteration 124, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:13:13] {3519} INFO - at 119.8s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:13] {3334} INFO - iteration 125, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:13:13] {3519} INFO - at 119.9s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:13] {3334} INFO - iteration 126, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:13:13] {3519} INFO - at 119.9s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:13] {3334} INFO - iteration 127, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:13:13] {3519} INFO - at 119.9s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:13] {3334} INFO - iteration 128, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:13:13] {3519} INFO - at 119.9s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:13] {3334} INFO - iteration 129, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:13:13] {3519} INFO - at 120.0s,\testimator extra_tree's best error=0.3787,\tbest estimator lgbm's best error=0.3250\n", + "[flaml.automl.automl: 04-09 03:13:19] {3783} INFO - retrain lgbm for 5.8s\n", + "[flaml.automl.automl: 04-09 03:13:19] {3790} INFO - retrained model: LGBMClassifier(colsample_bytree=0.763983850698587,\n", + " learning_rate=0.087493667994037, max_bin=127,\n", + " min_child_samples=128, n_estimators=302, num_leaves=466,\n", + " reg_alpha=0.09968008477303378, reg_lambda=23.227419343318914,\n", + " verbose=-1)\n", + "[flaml.automl.automl: 04-09 03:13:19] {3034} INFO - 
fit succeeded\n", + "[flaml.automl.automl: 04-09 03:13:19] {3035} INFO - Time taken to find the best model: 74.35051536560059\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/nfs4/pyenv-bfada21f-d1ed-44b9-a41d-4ff480d237e7/lib/python3.8/site-packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was reached which means the coef_ did not converge\n", + " warnings.warn(\n", + "/nfs4/pyenv-bfada21f-d1ed-44b9-a41d-4ff480d237e7/lib/python3.8/site-packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was reached which means the coef_ did not converge\n", + " warnings.warn(\n", + "/nfs4/pyenv-bfada21f-d1ed-44b9-a41d-4ff480d237e7/lib/python3.8/site-packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was reached which means the coef_ did not converge\n", + " warnings.warn(\n", + "/nfs4/pyenv-bfada21f-d1ed-44b9-a41d-4ff480d237e7/lib/python3.8/site-packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was reached which means the coef_ did not converge\n", + " warnings.warn(\n", + "/nfs4/pyenv-bfada21f-d1ed-44b9-a41d-4ff480d237e7/lib/python3.8/site-packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was reached which means the coef_ did not converge\n", + " warnings.warn(\n" + ] + } + ], + "source": [ + "'''The main flaml automl API'''\n", + "automl.fit(X_train=X_train, y_train=y_train, **settings)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "### Best model and metric" + ] + }, + { + "cell_type": "code", + "execution_count": 46, + "metadata": { + "slideshow": { + "slide_type": "slide" + }, + "tags": [] + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:21.4301236Z", + "execution_start_time": "2023-04-09T03:13:21.0903825Z", + "livy_statement_state": "available", + "parent_msg_id": 
"7d9a796c-9ca5-415d-9dab-de06e4170216", + "queued_time": "2023-04-09T03:10:34.5888418Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 72 + }, + "text/plain": [ + "StatementMeta(automl, 7, 72, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Best ML leaner: lgbm\n", + "Best hyperparmeter config: {'n_estimators': 302, 'num_leaves': 466, 'min_child_samples': 128, 'learning_rate': 0.087493667994037, 'log_max_bin': 7, 'colsample_bytree': 0.763983850698587, 'reg_alpha': 0.09968008477303378, 'reg_lambda': 23.227419343318914}\n", + "Best accuracy on validation data: 0.675\n", + "Training duration of best run: 5.756 s\n" + ] + } + ], + "source": [ + "'''retrieve best config and best learner'''\n", + "print('Best ML leaner:', automl.best_estimator)\n", + "print('Best hyperparmeter config:', automl.best_config)\n", + "print('Best accuracy on validation data: {0:.4g}'.format(1-automl.best_loss))\n", + "print('Training duration of best run: {0:.4g} s'.format(automl.best_config_train_time))" + ] + }, + { + "cell_type": "code", + "execution_count": 47, + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:22.00515Z", + "execution_start_time": "2023-04-09T03:13:21.668468Z", + "livy_statement_state": "available", + "parent_msg_id": "69be3bb6-08bb-40d8-bfbd-bfd3eabd2abf", + "queued_time": "2023-04-09T03:10:34.6939373Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 73 + }, + "text/plain": [ + "StatementMeta(automl, 7, 73, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
LGBMClassifier(colsample_bytree=0.763983850698587,\n",
+              "               learning_rate=0.087493667994037, max_bin=127,\n",
+              "               min_child_samples=128, n_estimators=302, num_leaves=466,\n",
+              "               reg_alpha=0.09968008477303378, reg_lambda=23.227419343318914,\n",
+              "               verbose=-1)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
" + ], + "text/plain": [ + "LGBMClassifier(colsample_bytree=0.763983850698587,\n", + " learning_rate=0.087493667994037, max_bin=127,\n", + " min_child_samples=128, n_estimators=302, num_leaves=466,\n", + " reg_alpha=0.09968008477303378, reg_lambda=23.227419343318914,\n", + " verbose=-1)" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "automl.model.estimator" + ] + }, + { + "cell_type": "code", + "execution_count": 48, + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:22.565239Z", + "execution_start_time": "2023-04-09T03:13:22.2540989Z", + "livy_statement_state": "available", + "parent_msg_id": "75ef8b8e-a50b-4f56-9d25-5fc985379c27", + "queued_time": "2023-04-09T03:10:34.7945603Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 74 + }, + "text/plain": [ + "StatementMeta(automl, 7, 74, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "'''pickle and save the automl object'''\n", + "import pickle\n", + "with open('automl.pkl', 'wb') as f:\n", + " pickle.dump(automl, f, pickle.HIGHEST_PROTOCOL)\n", + "'''load pickled automl object'''\n", + "with open('automl.pkl', 'rb') as f:\n", + " automl = pickle.load(f)" + ] + }, + { + "cell_type": "code", + "execution_count": 49, + "metadata": { + "slideshow": { + "slide_type": "slide" + }, + "tags": [] + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:25.1592289Z", + "execution_start_time": "2023-04-09T03:13:22.8210504Z", + "livy_statement_state": "available", + "parent_msg_id": "32c71506-0598-4e00-aea9-cb84387ecc5b", + "queued_time": "2023-04-09T03:10:34.9144997Z", + "session_id": "7", + 
"session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 75 + }, + "text/plain": [ + "StatementMeta(automl, 7, 75, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Predicted labels ['1' '0' '1' ... '1' '0' '0']\n", + "True labels 118331 0\n", + "328182 0\n", + "335454 0\n", + "520591 1\n", + "344651 0\n", + " ..\n", + "367080 0\n", + "203510 1\n", + "254894 0\n", + "296512 1\n", + "362444 0\n", + "Name: Delay, Length: 134846, dtype: category\n", + "Categories (2, object): ['0' < '1']\n" + ] + } + ], + "source": [ + "'''compute predictions of testing dataset''' \n", + "y_pred = automl.predict(X_test)\n", + "print('Predicted labels', y_pred)\n", + "print('True labels', y_test)\n", + "y_pred_proba = automl.predict_proba(X_test)[:,1]" + ] + }, + { + "cell_type": "code", + "execution_count": 50, + "metadata": { + "slideshow": { + "slide_type": "slide" + }, + "tags": [] + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:26.1850094Z", + "execution_start_time": "2023-04-09T03:13:25.4270376Z", + "livy_statement_state": "available", + "parent_msg_id": "5c1b0a67-28a7-4155-84e2-e732fb48b37d", + "queued_time": "2023-04-09T03:10:35.0461186Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 76 + }, + "text/plain": [ + "StatementMeta(automl, 7, 76, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "accuracy = 0.6732939797991784\n", + "roc_auc = 0.7276250346550404\n", + "log_loss = 0.6014655432027879\n" + ] + } + ], + "source": [ + "''' compute different metric values on testing dataset'''\n", + "from flaml.ml import sklearn_metric_loss_score\n", + 
"print('accuracy', '=', 1 - sklearn_metric_loss_score('accuracy', y_pred, y_test))\n", + "print('roc_auc', '=', 1 - sklearn_metric_loss_score('roc_auc', y_pred_proba, y_test))\n", + "print('log_loss', '=', sklearn_metric_loss_score('log_loss', y_pred_proba, y_test))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "See Section 4 for an accuracy comparison with default LightGBM and XGBoost.\n", + "\n", + "### Log history" + ] + }, + { + "cell_type": "code", + "execution_count": 51, + "metadata": { + "slideshow": { + "slide_type": "subslide" + }, + "tags": [] + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:26.7290827Z", + "execution_start_time": "2023-04-09T03:13:26.4652129Z", + "livy_statement_state": "available", + "parent_msg_id": "74e2927e-2fe9-4956-9e67-1246b2b24c66", + "queued_time": "2023-04-09T03:10:35.1554934Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 77 + }, + "text/plain": [ + "StatementMeta(automl, 7, 77, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'Current Learner': 'lgbm', 'Current Sample': 10000, 'Current Hyper-parameters': {'n_estimators': 4, 'num_leaves': 4, 'min_child_samples': 20, 'learning_rate': 0.09999999999999995, 'log_max_bin': 8, 'colsample_bytree': 1.0, 'reg_alpha': 0.0009765625, 'reg_lambda': 1.0, 'FLAML_sample_size': 10000}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 4, 'num_leaves': 4, 'min_child_samples': 20, 'learning_rate': 0.09999999999999995, 'log_max_bin': 8, 'colsample_bytree': 1.0, 'reg_alpha': 0.0009765625, 'reg_lambda': 1.0, 'FLAML_sample_size': 10000}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 10000, 'Current Hyper-parameters': 
{'n_estimators': 26, 'num_leaves': 4, 'min_child_samples': 18, 'learning_rate': 0.2293009676418639, 'log_max_bin': 9, 'colsample_bytree': 0.9086551727646448, 'reg_alpha': 0.0015561782752413472, 'reg_lambda': 0.33127416269768944, 'FLAML_sample_size': 10000}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 26, 'num_leaves': 4, 'min_child_samples': 18, 'learning_rate': 0.2293009676418639, 'log_max_bin': 9, 'colsample_bytree': 0.9086551727646448, 'reg_alpha': 0.0015561782752413472, 'reg_lambda': 0.33127416269768944, 'FLAML_sample_size': 10000}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 40000, 'Current Hyper-parameters': {'n_estimators': 55, 'num_leaves': 4, 'min_child_samples': 20, 'learning_rate': 0.43653962213332903, 'log_max_bin': 10, 'colsample_bytree': 0.8048558760626646, 'reg_alpha': 0.0009765625, 'reg_lambda': 0.23010605579846408, 'FLAML_sample_size': 40000}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 55, 'num_leaves': 4, 'min_child_samples': 20, 'learning_rate': 0.43653962213332903, 'log_max_bin': 10, 'colsample_bytree': 0.8048558760626646, 'reg_alpha': 0.0009765625, 'reg_lambda': 0.23010605579846408, 'FLAML_sample_size': 40000}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 40000, 'Current Hyper-parameters': {'n_estimators': 90, 'num_leaves': 18, 'min_child_samples': 34, 'learning_rate': 0.3572626620529719, 'log_max_bin': 10, 'colsample_bytree': 0.9295656128173544, 'reg_alpha': 0.0009765625, 'reg_lambda': 0.1981463604305675, 'FLAML_sample_size': 40000}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 90, 'num_leaves': 18, 'min_child_samples': 34, 'learning_rate': 0.3572626620529719, 'log_max_bin': 10, 'colsample_bytree': 0.9295656128173544, 'reg_alpha': 0.0009765625, 'reg_lambda': 0.1981463604305675, 'FLAML_sample_size': 40000}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 40000, 'Current Hyper-parameters': {'n_estimators': 56, 'num_leaves': 7, 'min_child_samples': 92, 
'learning_rate': 0.23536463281405412, 'log_max_bin': 10, 'colsample_bytree': 0.9898009552962395, 'reg_alpha': 0.0009765625, 'reg_lambda': 0.143294261726433, 'FLAML_sample_size': 40000}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 56, 'num_leaves': 7, 'min_child_samples': 92, 'learning_rate': 0.23536463281405412, 'log_max_bin': 10, 'colsample_bytree': 0.9898009552962395, 'reg_alpha': 0.0009765625, 'reg_lambda': 0.143294261726433, 'FLAML_sample_size': 40000}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 364083, 'Current Hyper-parameters': {'n_estimators': 56, 'num_leaves': 7, 'min_child_samples': 92, 'learning_rate': 0.23536463281405412, 'log_max_bin': 10, 'colsample_bytree': 0.9898009552962395, 'reg_alpha': 0.0009765625, 'reg_lambda': 0.143294261726433, 'FLAML_sample_size': 364083}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 56, 'num_leaves': 7, 'min_child_samples': 92, 'learning_rate': 0.23536463281405412, 'log_max_bin': 10, 'colsample_bytree': 0.9898009552962395, 'reg_alpha': 0.0009765625, 'reg_lambda': 0.143294261726433, 'FLAML_sample_size': 364083}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 364083, 'Current Hyper-parameters': {'n_estimators': 179, 'num_leaves': 27, 'min_child_samples': 75, 'learning_rate': 0.09744966359309021, 'log_max_bin': 10, 'colsample_bytree': 1.0, 'reg_alpha': 0.002826104794043855, 'reg_lambda': 0.145731823715616, 'FLAML_sample_size': 364083}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 179, 'num_leaves': 27, 'min_child_samples': 75, 'learning_rate': 0.09744966359309021, 'log_max_bin': 10, 'colsample_bytree': 1.0, 'reg_alpha': 0.002826104794043855, 'reg_lambda': 0.145731823715616, 'FLAML_sample_size': 364083}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 364083, 'Current Hyper-parameters': {'n_estimators': 180, 'num_leaves': 31, 'min_child_samples': 112, 'learning_rate': 0.14172261747380863, 'log_max_bin': 8, 'colsample_bytree': 
0.9882716197099741, 'reg_alpha': 0.004676080321450302, 'reg_lambda': 2.7048628270368136, 'FLAML_sample_size': 364083}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 180, 'num_leaves': 31, 'min_child_samples': 112, 'learning_rate': 0.14172261747380863, 'log_max_bin': 8, 'colsample_bytree': 0.9882716197099741, 'reg_alpha': 0.004676080321450302, 'reg_lambda': 2.7048628270368136, 'FLAML_sample_size': 364083}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 364083, 'Current Hyper-parameters': {'n_estimators': 284, 'num_leaves': 24, 'min_child_samples': 57, 'learning_rate': 0.34506374431782616, 'log_max_bin': 8, 'colsample_bytree': 0.9661606582789269, 'reg_alpha': 0.05708594148438563, 'reg_lambda': 3.080643548412343, 'FLAML_sample_size': 364083}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 284, 'num_leaves': 24, 'min_child_samples': 57, 'learning_rate': 0.34506374431782616, 'log_max_bin': 8, 'colsample_bytree': 0.9661606582789269, 'reg_alpha': 0.05708594148438563, 'reg_lambda': 3.080643548412343, 'FLAML_sample_size': 364083}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 364083, 'Current Hyper-parameters': {'n_estimators': 150, 'num_leaves': 176, 'min_child_samples': 62, 'learning_rate': 0.2607939951456863, 'log_max_bin': 8, 'colsample_bytree': 1.0, 'reg_alpha': 0.015973158305354472, 'reg_lambda': 1.1581244082992237, 'FLAML_sample_size': 364083}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 150, 'num_leaves': 176, 'min_child_samples': 62, 'learning_rate': 0.2607939951456863, 'log_max_bin': 8, 'colsample_bytree': 1.0, 'reg_alpha': 0.015973158305354472, 'reg_lambda': 1.1581244082992237, 'FLAML_sample_size': 364083}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 364083, 'Current Hyper-parameters': {'n_estimators': 100, 'num_leaves': 380, 'min_child_samples': 83, 'learning_rate': 0.1439688182217924, 'log_max_bin': 7, 'colsample_bytree': 0.9365250834556608, 'reg_alpha': 0.07492795084698504, 
'reg_lambda': 10.854898771631566, 'FLAML_sample_size': 364083}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 100, 'num_leaves': 380, 'min_child_samples': 83, 'learning_rate': 0.1439688182217924, 'log_max_bin': 7, 'colsample_bytree': 0.9365250834556608, 'reg_alpha': 0.07492795084698504, 'reg_lambda': 10.854898771631566, 'FLAML_sample_size': 364083}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 364083, 'Current Hyper-parameters': {'n_estimators': 157, 'num_leaves': 985, 'min_child_samples': 115, 'learning_rate': 0.15986853540486204, 'log_max_bin': 6, 'colsample_bytree': 0.8905312088154893, 'reg_alpha': 0.17376372850615002, 'reg_lambda': 196.8899439847594, 'FLAML_sample_size': 364083}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 157, 'num_leaves': 985, 'min_child_samples': 115, 'learning_rate': 0.15986853540486204, 'log_max_bin': 6, 'colsample_bytree': 0.8905312088154893, 'reg_alpha': 0.17376372850615002, 'reg_lambda': 196.8899439847594, 'FLAML_sample_size': 364083}}\n", + "{'Current Learner': 'lgbm', 'Current Sample': 364083, 'Current Hyper-parameters': {'n_estimators': 302, 'num_leaves': 466, 'min_child_samples': 128, 'learning_rate': 0.087493667994037, 'log_max_bin': 7, 'colsample_bytree': 0.763983850698587, 'reg_alpha': 0.09968008477303378, 'reg_lambda': 23.227419343318914, 'FLAML_sample_size': 364083}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 302, 'num_leaves': 466, 'min_child_samples': 128, 'learning_rate': 0.087493667994037, 'log_max_bin': 7, 'colsample_bytree': 0.763983850698587, 'reg_alpha': 0.09968008477303378, 'reg_lambda': 23.227419343318914, 'FLAML_sample_size': 364083}}\n" + ] + } + ], + "source": [ + "from flaml.data import get_output_from_log\n", + "time_history, best_valid_loss_history, valid_loss_history, config_history, metric_history = \\\n", + " get_output_from_log(filename=settings['log_file_name'], time_budget=240)\n", + "for config in config_history:\n", + " 
print(config)" + ] + }, + { + "cell_type": "code", + "execution_count": 52, + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:27.2414306Z", + "execution_start_time": "2023-04-09T03:13:26.9671462Z", + "livy_statement_state": "available", + "parent_msg_id": "5e00da90-af15-4ffd-b1b5-b946fabfc565", + "queued_time": "2023-04-09T03:10:35.2740852Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 78 + }, + "text/plain": [ + "StatementMeta(automl, 7, 78, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAkAAAAHHCAYAAABXx+fLAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABNL0lEQVR4nO3de1xUdf4/8NcwMAMqDCoMjMhNCRMRL6jEkrdEUcvS/Bm1uoi2WoiKYrvKtolWC7qtl9xcUUvUrPWCWpiKKV7KvOYtCQXvmHLRkIsXGJ05vz/8MtvEgAzOMDDn9Xw85vFwPudzzrw/Qw949Tmfc45EEAQBRERERCJiY+kCiIiIiBoaAxARERGJDgMQERERiQ4DEBEREYkOAxARERGJDgMQERERiQ4DEBEREYkOAxARERGJDgMQERERiQ4DEBE1ST4+PoiOjrZ0GUTURDEAEYnY6tWrIZFI8OOPP1q6lCanoqICixYtQkhICBQKBezt7eHv74/JkycjNzfX0uUR0RPYWroAIqL6yMnJgY2NZf4f7vbt2xg8eDBOnDiBl156CX/84x/RokUL5OTkYP369VixYgXUarVFaiOiumEAIiKLe/ToEbRaLWQyWZ33kcvlZqyodtHR0Th16hTS0tIwcuRIvW0ffPAB3n33XZN8Tn2+FyKqG54CI6InunHjBsaPHw83NzfI5XJ06tQJq1at0uujVqsxe/ZsBAcHQ6FQoHnz5ujduzf27dun1+/q1auQSCT417/+hcWLF6N9+/aQy+XIzs7GnDlzIJFIcPHiRURHR8PZ2RkKhQLjxo3D/fv39Y7z+zVAVafzfvjhB8THx8PV1RXNmzfHiBEjcOvWLb19tVot5syZgzZt2qBZs2bo378/srOz67Su6OjRo9i+fTvefPPNauEHeBzM/vWvf+ne9+vXD/369avWLzo6Gj4+Pk/8Xk6dOgVbW1vMnTu32jFycnIgkUjwySef6NpKSkowbdo0eHp6Qi6Xw8/PD/Pnz4dWq611XERiwxkgIqpVYWEhnnvuOUgkEkyePBmurq7YuXMn3nzzTZSVlWHatGkAgLKyMnz66ad44403MGHCBJSXl+Ozzz5DREQEjh07hq5du+odNzU1FRUVFZg4cSLkcjlatWql2/baa6/B19cXycnJOHnyJD799FMolUrMnz//ifVOmTIFLVu2RGJiIq5
evYrFixdj8uTJ2LBhg65PQkIC/vnPf2LYsGGIiIjAmTNnEBERgYqKiicePz09HQDwpz/9qQ7fnvF+/72oVCr07dsXGzduRGJiol7fDRs2QCqVYtSoUQCA+/fvo2/fvrhx4wbeeusteHl54dChQ0hISEB+fj4WL15slpqJmiSBiEQrNTVVACAcP368xj5vvvmmoFKphNu3b+u1v/7664JCoRDu378vCIIgPHr0SKisrNTrc+fOHcHNzU0YP368ru3KlSsCAMHJyUkoKirS65+YmCgA0OsvCIIwYsQIoXXr1npt3t7ewtixY6uNJTw8XNBqtbr26dOnC1KpVCgpKREEQRAKCgoEW1tbYfjw4XrHmzNnjgBA75iGjBgxQgAg3Llzp9Z+Vfr27Sv07du3WvvYsWMFb29v3fvavpfly5cLAISzZ8/qtQcEBAgvvPCC7v0HH3wgNG/eXMjNzdXrN2vWLEEqlQp5eXl1qplIDHgKjIhqJAgCNm/ejGHDhkEQBNy+fVv3ioiIQGlpKU6ePAkAkEqlurUqWq0WxcXFePToEXr06KHr81sjR46Eq6urwc99++239d737t0bv/76K8rKyp5Y88SJEyGRSPT21Wg0uHbtGgAgMzMTjx49wqRJk/T2mzJlyhOPDUBXg6OjY536G8vQ9/Lqq6/C1tZWbxYrKysL2dnZiIyM1LVt2rQJvXv3RsuWLfV+VuHh4dBoNPjuu+/MUjNRU8RTYERUo1u3bqGkpAQrVqzAihUrDPYpKirS/XvNmjVYsGABzp8/j4cPH+rafX19q+1nqK2Kl5eX3vuWLVsCAO7cuQMnJ6daa65tXwC6IOTn56fXr1WrVrq+tan6/PLycjg7Oz+xv7EMfS8uLi4YMGAANm7ciA8++ADA49Nftra2ePXVV3X9Lly4gJ9++qnGYPnbnxWR2DEAEVGNqhbOjhkzBmPHjjXYJygoCACwbt06REdHY/jw4fjLX/4CpVIJqVSK5ORkXLp0qdp+Dg4ONX6uVCo12C4IwhNrfpp96+LZZ58FAJw9exa9e/d+Yn+JRGLwszUajcH+NX0vr7/+OsaNG4fTp0+ja9eu2LhxIwYMGAAXFxddH61Wi4EDB+Kvf/2rwWP4+/s/sV4isWAAIqIaubq6wtHRERqNBuHh4bX2TUtLQ7t27bBlyxa9U1C/X7hrad7e3gCAixcv6s22/Prrr7pZotoMGzYMycnJWLduXZ0CUMuWLXH58uVq7VUzUXU1fPhwvPXWW7rTYLm5uUhISNDr0759e9y9e/eJPysi4mXwRFQLqVSKkSNHYvPmzcjKyqq2/beXl1fNvPx2tuPo0aM4fPiw+Qs1woABA2Bra4tly5bptf/2UvLahIaGYvDgwfj000/x1VdfVduuVqvxzjvv6N63b98e58+f1/uuzpw5gx9++MGoup2dnREREYGNGzdi/fr1kMlkGD58uF6f1157DYcPH8auXbuq7V9SUoJHjx4Z9ZlE1owzQESEVatWISMjo1p7XFwc5s2bh3379iEkJAQTJkxAQEAAiouLcfLkSezZswfFxcUAgJdeeglbtmzBiBEj8OKLL+LKlStISUlBQEAA7t6929BDqpGbmxvi4uKwYMECvPzyyxg8eDDOnDmDnTt3wsXFRW/2qiZr167FoEGD8Oqrr2LYsGEYMGAAmjdvjgsXLmD9+vXIz8/X3Qto/PjxWLhwISIiIvDmm2+iqKgIKSkp6NSpU50Wdf9WZGQkxowZg//85z+IiIiotgbpL3/5C9LT0/HSSy8hOjoawcHBuHfvHs6ePYu0tDRcvXpV75QZkZgxABFRtdmQKtHR0Wjbti2OHTuG999/H1u2bMF//vMftG7dGp06ddK7L090dDQKCgqwfPly7Nq1CwEBAVi3bh02bdqE/fv3N9BI6mb+/Plo1qwZVq5ciT179iA0NBTffvstnn/+edjb2z9xf1dXVxw6dAj/+c9/sGHDBrz77rtQq9Xw9vbGyy+/jLi
4OF3fjh07Yu3atZg9ezbi4+MREBCAzz//HF9++aXR38vLL78MBwcHlJeX6139VaVZs2Y4cOAAkpKSsGnTJqxduxZOTk7w9/fH3LlzoVAojPo8ImsmEUy1MpCIqAkrKSlBy5Yt8eGHH5rsURZE1HhxDRARic6DBw+qtVXdJdnQYyuIyPrwFBgRic6GDRuwevVqDB06FC1atMDBgwfx3//+F4MGDUJYWJilyyOiBsAARESiExQUBFtbW/zzn/9EWVmZbmH0hx9+aOnSiKiBcA0QERERiQ7XABEREZHoMAARERGR6HANkAFarRY3b96Eo6NjnW6KRkRERJYnCALKy8vRpk0b2NjUPsfDAGTAzZs34enpaekyiIiIqB6uX7+Otm3b1tqHAcgAR0dHAI+/QCcnJwtXQ0RERHVRVlYGT09P3d/x2jAAGVB12svJyYkBiIiIqImpy/IVLoImIiIi0WEAIiIiItFhACIiIiLRYQAiIiIi0WEAIiIiItFhACIiIiLRYQAiIiIi0WEAIiIiItFhACIiIiLR4Z2giYiIqMFotAKOXSlGUXkFlI726OXbClKbhn/wOAMQERERNYiMrHzM3ZaN/NIKXZtKYY/EYQEYHKhq0Fp4CoyIiIjMLiMrHzHrTuqFHwAoKK1AzLqTyMjKb9B6GICIiIjIrDRaAXO3ZUMwsK2qbe62bGi0hnqYBwMQERERmdWxK8XVZn5+SwCQX1qBY1eKG6wmBiAiIiIyq6LymsNPffqZAgMQERERmZXS0d6k/UyBAYiIiIjMqpdvK6gU9qjpYncJHl8N1su3VYPVxABEREREZiW1kSBxWIDBbVWhKHFYQIPeD4gBiIiIiMxucKAKy8Z0h5uTXK/dXWGPZWO6N/h9gHgjRCIiImoQgwNVCPNzQec53wIAUqN7oo+/q0XuBM0ZICIiImowvw07Ie0s8xgMgDNAREREjU5jeV6WNWMAIiIiakQa0/OyrBlPgRERETUSje15WdaMM0BERNRkWPOpobo8Lysx/WeE+bk06THfV2ssXQIABiAiImoirP3U0JOelwUAhWWVuiuo6OnwFBgRETV6Yjg11JDPwWoMeni3hIOd1GKfzxkgIiJq1MRyasjJ3q5O/VKjeyKkXcM9MsJcHOykkEgs9/NiACIiokaNp4Yek+DxXZMtdeNAa8MARFbBmhdGEomd2E4NGWKp52VZMwYgavKsfWEkkdgpHe3r1M9aTg3tzi5A0o7zKCyr1LW583eayUkEQTB0WlXUysrKoFAoUFpaCicnJ0uXQ7WoWhj5+/+Iq/7/yBIP2CMi09JoBTw/fy8KSisMrgOqOjV0cOYLVjM7wlnt+jHm7zdngKjJEsvCSCICZg3pgLj1Z6q1W+upIamNBKHtW1u6DKvGAERNFhdGEhFPDVF9MQBRk8WFkUTi5evSHP8YHoiQdq2tauaHGg4DEDVZYlsYSUT/Y+l7yFDTxwBETVYv31ZQKeyfuDCS98wgIqLf46MwqMmS2kiQOCzA4DZrXRhJRESmwQBETdrgQBWWjekONye5Xru7wp6XwBMRUY14CoyavMGBKoT5ueiu9kqN7snTXkREVCvOAJFV+G3YCWnHG4YREVHtGICIiIhIdBpFAFq6dCl8fHxgb2+PkJAQHDt2rNb+JSUliI2NhUqlglwuh7+/P3bs2KHb7uPjA4lEUu0VGxtr7qEQERFRE2DxNUAbNmxAfHw8UlJSEBISgsWLFyMiIgI5OTlQKpXV+qvVagwcOBBKpRJpaWnw8PDAtWvX4OzsrOtz/PhxaDQa3fusrCwMHDgQo0aNaoghERERUSNn8QC0cOFCTJgwAePGjQMApKSkYPv27Vi1ahVmzZpVrf+qVatQXFyMQ4cOwc7ODsDjGZ/fcnV11Xs/b948tG/fHn379jXPIIiIiKhJsegpMLVajRMnTiA8PFzXZmNjg/DwcBw+fNjgPunp6QgNDUVsbCzc3NwQGBiIpKQkvRmf33/GunXrMH7
8+BrvGlpZWYmysjK9FxEREVkviwag27dvQ6PRwM3NTa/dzc0NBQUFBve5fPky0tLSoNFosGPHDrz33ntYsGABPvzwQ4P9v/rqK5SUlCA6OrrGOpKTk6FQKHQvT0/Peo+JiIiIGr9GsQjaGFqtFkqlEitWrEBwcDAiIyPx7rvvIiUlxWD/zz77DEOGDEGbNm1qPGZCQgJKS0t1r+vXr5urfCIiImoELLoGyMXFBVKpFIWFhXrthYWFcHd3N7iPSqWCnZ0dpFKprq1jx44oKCiAWq2GTCbTtV+7dg179uzBli1baq1DLpdDLpfX2oeIiIish0VngGQyGYKDg5GZmalr02q1yMzMRGhoqMF9wsLCcPHiRWi1Wl1bbm4uVCqVXvgBgNTUVCiVSrz44ovmGQARERE1SRY/BRYfH4+VK1dizZo1OHfuHGJiYnDv3j3dVWFRUVFISEjQ9Y+JiUFxcTHi4uKQm5uL7du3Iykpqdo9frRaLVJTUzF27FjY2lr8YjciIiJqRCyeDCIjI3Hr1i3Mnj0bBQUF6Nq1KzIyMnQLo/Py8mBj87+c5unpiV27dmH69OkICgqCh4cH4uLiMHPmTL3j7tmzB3l5eRg/fnyDjoeIiIgaP4kgCIKli2hsysrKoFAoUFpaCicnJ0uXQ3VwX/0IAbN3AQCy349AM5nFsz0RETUwY/5+868ENRoarYBjV4pRVF4BpaM9evnyoaZERGQeDEDUKGRk5WPutmzkl1bo2lQKeyQOC8DgQJUFKyMiImtk8UXQRBlZ+YhZd1Iv/ABAQWkFYtadREZWvoUqIyIia8UZILIojVbA3G3ZMLQQraotMf1nhPm51Ho67L7a8KNQiIiIDGEAIos6dqW42szP7xWWVaLznG8bqCIiIhIDngIjiyoqrz38GKuHd0s42Emf3JGIiESNM0BkUUpH+zr1S43uiZB2rZ7Yz8FOComEV44REVHtGIDIonr5toJKYY+C0gqD64AkANwV9ujj78pL4omIyGR4CowsSmojQeKwAIPbquJO4rAAhh8iIjIpBiCyuMGBKiwb0x1uTnK9dneFPZaN6c77ABERkcnxFBg1CoMDVQjzc9Fd7ZUa3ZOnvYiIyGw4A0SNxm/DTkg7PgaDiIjMhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEx9bSBVDjp9EKOHalGEXlFVA62qOXbytIbSSWLouIiKjeGICoVhlZ+Zi7LRv5pRW6NpXCHonDAjA4UGXByoiIiOqPp8CoRhlZ+YhZd1Iv/ABAQWkFYtadREZWvoUqIyIiejqcASKDNFoBc7dlQzCwraotMf1nhPm5mOx02H21xiTHISIiehIGIDLo2JXiajM/v1dYVonOc75toIqIiIhMh6fAyKCi8trDjzn18G4JBzupxT6fiIisH2eAyCClo32d+qVG90RIu1Ym/WwHOykkEl5lRkRE5sMARAb18m0FlcIeBaUVBtcBSQC4K+zRx9+Vl8QTEVGTw1NgZJDURoLEYQEGt1XFncRhAQw/RETUJDEAUY0GB6qwbEx3uDnJ9drdFfZYNqY77wNERERNFk+BUa0GB6oQ5ueiu9orNbonT3sREVGTxxkgeqLfhp2QdnwMBhERNX0MQERERCQ6DEBEREQkOgxAREREJDoMQERERCQ6DEBEREQkOgxAREREJDoMQERERCQ6Fg9AS5cuhY+PD+zt7RESEoJjx47V2r+kpASxsbFQqVSQy+Xw9/fHjh079PrcuHEDY8aMQevWreHg4IDOnTvjxx9/NOcwiIiIqAmx6J2gN2zYgPj4eKSkpCAkJASLFy9GREQEcnJyoFQqq/VXq9U
YOHAglEol0tLS4OHhgWvXrsHZ2VnX586dOwgLC0P//v2xc+dOuLq64sKFC2jZsmUDjoyIiIgaM4sGoIULF2LChAkYN24cACAlJQXbt2/HqlWrMGvWrGr9V61aheLiYhw6dAh2dnYAAB8fH70+8+fPh6enJ1JTU3Vtvr6+5hsEERERNTkWOwWmVqtx4sQJhIeH/68YGxuEh4fj8OHDBvdJT09HaGgoYmNj4ebmhsDAQCQlJUGj0ej16dGjB0aNGgWlUolu3bph5cqVZh9PU6XRCjh86Vd8ffoGDl/6FRqtYOmSiIiIzM5iM0C3b9+GRqOBm5ubXrubmxvOnz9vcJ/Lly9j7969GD16NHbs2IGLFy9i0qRJePjwIRITE3V9li1bhvj4ePztb3/D8ePHMXXqVMhkMowdO9bgcSsrK1FZWal7X1ZWZqJRNm4ZWfmYuy0b+aUVujaVwh6JwwL4pHciIrJqFl8EbQytVgulUokVK1YgODgYkZGRePfdd5GSkqLXp3v37khKSkK3bt0wceJETJgwQa/P7yUnJ0OhUOhenp6eDTEci8rIykfMupN64QcACkorELPuJDKy8i1UGRERkflZLAC5uLhAKpWisLBQr72wsBDu7u4G91GpVPD394dUKtW1dezYEQUFBVCr1bo+AQEBevt17NgReXl5NdaSkJCA0tJS3ev69ev1HVaToNEKmLstG4ZOdgn/90pM/xnlFQ9xX/0I99UaAz2JiIiaLoudApPJZAgODkZmZiaGDx8O4PHsTWZmJiZPnmxwn7CwMHz55ZfQarWwsXmc3XJzc6FSqSCTyXR9cnJy9PbLzc2Ft7d3jbXI5XLI5XITjKppOHaluNrMz+8VllWi85xvG6giIiKihmXRU2Dx8fFYuXIl1qxZg3PnziEmJgb37t3TXRUWFRWFhIQEXf+YmBgUFxcjLi4Oubm52L59O5KSkhAbG6vrM336dBw5cgRJSUm4ePEivvzyS6xYsUKvj9gVldcefmrSw7slHOykT+5IRETUyFn0MvjIyEjcunULs2fPRkFBAbp27YqMjAzdwui8vDzdTA8AeHp6YteuXZg+fTqCgoLg4eGBuLg4zJw5U9enZ8+e2Lp1KxISEvD+++/D19cXixcvxujRoxt8fI2V0tG+Tv1So3sipF0r3XsHOykkEom5yiIiImowEkEQjLru+fLly2jXrp256mkUysrKoFAoUFpaCicnJ0uXY3IarYDn5+9FQWmFwXVAEgDuCnscnPkCpDYMPERE1DQY8/fb6FNgfn5+6N+/P9atW4eKivqdSiHLktpIkDgswOC2qriTOCyA4YeIiKyW0QHo5MmTCAoKQnx8PNzd3fHWW2898fld1PgMDlRh2ZjucHPSX/ztrrDHsjHdeR8gIiKyakafAqvy6NEjpKenY/Xq1cjIyIC/vz/Gjx+PP/3pT3B1dTV1nQ3K2k+B/VZ5xUPd1V6p0T3Rx9+VMz9ERNQkmfUUWBVbW1u8+uqr2LRpE+bPn4+LFy/inXfegaenJ6KiopCfzxvpNQW/DTsh7Vox/BARkSjUOwD9+OOPmDRpElQqFRYuXIh33nkHly5dwu7du3Hz5k288sorpqyTiIiIyGSMvgx+4cKFSE1NRU5ODoYOHYq1a9di6NChusvVfX19sXr16mpPaSciIiJqLIwOQMuWLcP48eMRHR0NlcrwQlmlUonPPvvsqYsjIiIiMgejA9CFCxee2Ke2J68TERERWZrRa4BSU1OxadOmau2bNm3CmjVrTFIUERERkTkZHYCSk5Ph4uJSrV2pVCIpKckkRRERERGZk9EBKC8vD76+vtXavb29kZeXZ5KiiIiIiMzJ6ACkVCrx008/VWs/c+YMWrdubZKiiIiIiMzJ6AD0xhtvYOrUqdi3bx80Gg00Gg327t2LuLg4vP766+aokYiIiMikjL4K7IMPPsDVq1cxYMAA2No+3l2r1SIqKoprgIiIiKhJMDoAyWQybNiwAR988AH
OnDkDBwcHdO7cGd7e3uaoj4iIiMjkjA5AVfz9/eHv72/KWoiIiIgaRL0C0C+//IL09HTk5eVBrVbrbVu4cKFJCiMiIiIyF6MDUGZmJl5++WW0a9cO58+fR2BgIK5evQpBENC9e3dz1EhERERkUkZfBZaQkIB33nkHZ8+ehb29PTZv3ozr16+jb9++GDVqlDlqJCIiIjIpowPQuXPnEBUVBQCwtbXFgwcP0KJFC7z//vuYP3++yQskIiIiMjWjA1Dz5s11635UKhUuXbqk23b79m3TVUZERERkJkavAXruuedw8OBBdOzYEUOHDsWMGTNw9uxZbNmyBc8995w5aiQiIiIyKaMD0MKFC3H37l0AwNy5c3H37l1s2LABzzzzDK8AIyIioibBqACk0Wjwyy+/ICgoCMDj02EpKSlmKYyIiIjIXIxaAySVSjFo0CDcuXPHXPUQERERmZ3Ri6ADAwNx+fJlc9RCRERE1CCMDkAffvgh3nnnHXzzzTfIz89HWVmZ3ouIiIiosTN6EfTQoUMBAC+//DIkEomuXRAESCQSaDQa01VHREREZAZGB6B9+/aZow4iIiKiBmN0AOrbt6856iAiIiJqMEYHoO+++67W7X369Kl3MUREREQNwegA1K9fv2ptv10LxDVARERE1NgZfRXYnTt39F5FRUXIyMhAz5498e2335qjRiIiIiKTMnoGSKFQVGsbOHAgZDIZ4uPjceLECZMURkRERGQuRs8A1cTNzQ05OTmmOhwRERGR2Rg9A/TTTz/pvRcEAfn5+Zg3bx66du1qqrqIiIiIzMboANS1a1dIJBIIgqDX/txzz2HVqlUmK4yIiIjIXIwOQFeuXNF7b2NjA1dXV9jb25usKCIiIiJzMjoAeXt7m6MOIiIiogZj9CLoqVOnYsmSJdXaP/nkE0ybNs0UNRERERGZldEBaPPmzQgLC6vW/oc//AFpaWkmKYqIiIjInIwOQL/++qvBewE5OTnh9u3bJimKiIiIyJyMDkB+fn7IyMio1r5z5060a9fOJEURERERmZPRi6Dj4+MxefJk3Lp1Cy+88AIAIDMzEwsWLMDixYtNXR8RERGRyRkdgMaPH4/Kykr84x//wAcffAAA8PHxwbJlyxAVFWXyAomIiIhMzegABAAxMTGIiYnBrVu34ODggBYtWpi6LiIiIiKzqdeNEB89eoRnnnkGrq6uuvYLFy7Azs4OPj4+pqyPiIiIyOSMXgQdHR2NQ4cOVWs/evQooqOjTVETERERkVkZHYBOnTpl8D5Azz33HE6fPm2KmoiIiIjMyugAJJFIUF5eXq29tLQUGo3GJEURERERmZPRAahPnz5ITk7WCzsajQbJycl4/vnnTVocERERkTkYvQh6/vz56NOnDzp06IDevXsDAL7//nuUlZVh7969Ji+QiIiIyNSMngEKCAjATz/9hNdeew1FRUUoLy9HVFQUzp8/j8DAQHPUSERERGRSRgcgAGjTpg2SkpKwfft2pKWlYfbs2bCxscEnn3xSryKWLl0KHx8f2NvbIyQkBMeOHau1f0lJCWJjY6FSqSCXy+Hv748dO3bots+ZMwcSiUTv9eyzz9arNiIiIrI+9boR4m9lZmbis88+w9atW9GsWTNMnjzZqP03bNiA+Ph4pKSkICQkBIsXL0ZERARycnKgVCqr9Ver1Rg4cCCUSiXS0tLg4eGBa9euwdnZWa9fp06dsGfPHt17W9unHioRERFZiXrNAF2/fh3vv/8+fH19MWjQIADA1q1bUVBQYPSxFi5ciAkTJmDcuHEICAhASkoKmjVrhlWrVhnsv2rVKhQXF+Orr75CWFgYfHx80LdvX3Tp0kWvn62tLdzd3XUvFxcX4wdKREREVqnOAejhw4fYtGkTIiIi0KFDB5w+fRofffQRbGxs8Pe//x2DBw+GnZ2dUR+uVqtx4sQJhIeH/68gGxuEh4fj8OHDBvdJT09HaGgoYmNj4ebmhsDAQCQlJVW7BP/ChQt
o06YN2rVrh9GjRyMvL8+o2oiIiMh61fm8kIeHB5599lmMGTMG69evR8uWLQEAb7zxRr0//Pbt29BoNHBzc9Nrd3Nzw/nz5w3uc/nyZezduxejR4/Gjh07cPHiRUyaNAkPHz5EYmIiACAkJASrV69Ghw4dkJ+fj7lz56J3797IysqCo6NjtWNWVlaisrJS976srKzeYyIiIqLGr84B6NGjR7oFxVKp1Jw11Uqr1UKpVGLFihWQSqUIDg7GjRs38NFHH+kC0JAhQ3T9g4KCEBISAm9vb2zcuBFvvvlmtWMmJydj7ty5DTYGIiIisqw6nwK7efMmJk6ciP/+979wd3fHyJEjsXXrVkgkknp/uIuLC6RSKQoLC/XaCwsL4e7ubnAflUoFf39/vRDWsWNHFBQUQK1WG9zH2dkZ/v7+uHjxosHtCQkJKC0t1b2uX79ezxERERFRU1DnAGRvb4/Ro0dj7969OHv2LDp27IipU6fi0aNH+Mc//oHdu3cb/SgMmUyG4OBgZGZm6tq0Wi0yMzMRGhpqcJ+wsDBcvHgRWq1W15abmwuVSgWZTGZwn7t37+LSpUtQqVQGt8vlcjg5Oem9iIiIyHrV6yqw9u3b48MPP8S1a9ewfft2VFZW4qWXXqq2lqcu4uPjsXLlSqxZswbnzp1DTEwM7t27h3HjxgEAoqKikJCQoOsfExOD4uJixMXFITc3F9u3b0dSUhJiY2N1fd555x0cOHAAV69exaFDhzBixAhIpdKnWq9ERERE1uOpbo5jY2ODIUOGYMiQIbh16xY+//xzo48RGRmJW7duYfbs2SgoKEDXrl2RkZGhC1N5eXmwsflfTvP09MSuXbswffp0BAUFwcPDA3FxcZg5c6auzy+//II33ngDv/76K1xdXfH888/jyJEjcHV1fZrhEhERkZWQCIIgWLqIxqasrAwKhQKlpaVWfzrsvvoRAmbvAgBkvx+BZjLeMJKIiJomY/5+1+sUGBEREVFTxgBEREREosMARERERKLDAERERESiY/SKV41Gg9WrVyMzMxNFRUV69+MBgL1795qsOCIiIiJzMDoAxcXFYfXq1XjxxRcRGBj4VHeCJiIiIrIEowPQ+vXrsXHjRgwdOtQc9VA9abQCjl0pRlF5BZSO9ujl2wpSG4ZTIiIiQ4wOQDKZDH5+fuaoheopIysfc7dlI7+0QtemUtgjcVgABgcafvwHERGRmBm9CHrGjBn4+OOPwfsnNg4ZWfmIWXdSL/wAQEFpBWLWnURGVr6FKiMiImq8jJ4BOnjwIPbt24edO3eiU6dOsLOz09u+ZcsWkxVHtdNoBczdlg1DUbSqLTH9Z4T5udR4Ouy+2rgH2BIREVkDowOQs7MzRowYYY5ayEjHrhRXm/n5vcKySnSe820DVURERNQ0GB2AUlNTzVEHPYGhRc5F5bWHH2P08G4JBzupyY5HRETUmNX7yZe3bt1CTk4OAKBDhw580roZ1bTI+fWennXaPzW6J0Lataq1j4OdlLc0ICIi0TA6AN27dw9TpkzB2rVrdTdBlEqliIqKwr///W80a9bM5EWKWdUi59+v8ykorcCiPRfg3MwOpfcfGlwHJAHgrrBHH39XXhJPRET0G0ZfBRYfH48DBw5g27ZtKCkpQUlJCb7++mscOHAAM2bMMEeNolWXRc6CINQYfgAgcVgAww8REdHvGD0DtHnzZqSlpaFfv366tqFDh8LBwQGvvfYali1bZsr6RK0ui5xLHzwy2O7O+wARERHVyOgAdP/+fbi5uVVrVyqVuH//vkmKosfqs8jZ16U5/jE8ECHtWnPmh4iIqAZGB6DQ0FAkJiZi7dq1sLe3BwA8ePAAc+fORWhoqMkLFCuNVsDt8so69f3tImcuZiYiInoyowPQxx9/jIiICLRt2xZdunQBAJw5cwb29vbYtWuXyQsUI0NXfRnCRc5ERET1Y3QACgwMxIULF/DFF1/g/PnzAIA33ngDo0ePhoODg8kLFJuarvr
6PS5yJiIiqr963QeoWbNmmDBhgqlrETWNVsCRS79i1uazTww/ABc5ExERPY06BaD09HQMGTIEdnZ2SE9Pr7Xvyy+/bJLCxKSup7yqzIzogIl923Pmh4iIqJ7qFICGDx+OgoICKJVKDB8+vMZ+EokEGg0frmmMup7y+i2Vsz3DDxER0VOoUwCquuPz7/9NT6e2Gx3Wxs2Ja62IiIiehtF3gl67di0qK6tfnq1Wq7F27VqTFCUWdbnR4W9J8PgZYL18a3+uFxEREdXO6AA0btw4lJaWVmsvLy/HuHHjTFKUWBhzo0Ne9UVERGQ6Rl8FJgiCwRvt/fLLL1AoFCYpSiyUjvZ17survoiIiEynzgGoW7dukEgkkEgkGDBgAGxt/7erRqPBlStXMHjwYLMUaa16+baCSmGPgtKKGtcBOTvYYeno7niOj7YgIiIymToHoKqrv06fPo2IiAi0aNFCt00mk8HHxwcjR440eYHWTGojQeKwAMSsO1ltW1XUmTeyM8L8XBq2MCIiIisnEQTBqIuQ1qxZg8jISN1zwKxRWVkZFAoFSktL4eTkZPbPy8jKR2L6zygs+9/ichVPeRERERnFmL/fRgcgMWjoAAQA5RUP0XnOtwAeP9yUz/ciIiIyjjF/v41eBK3RaLBo0SJs3LgReXl5UKvVetuLi4uNPSQBemEnpF0rhh8iIiIzMvoy+Llz52LhwoWIjIxEaWkp4uPj8eqrr8LGxgZz5swxQ4lEREREpmV0APriiy+wcuVKzJgxA7a2tnjjjTfw6aefYvbs2Thy5Ig5aiQiIiIyKaMDUEFBATp37gwAaNGihe6miC+99BK2b99u2uqIiIiIzMDoANS2bVvk5+cDANq3b49vv328cPf48eOQy+WmrY6IiIjIDIwOQCNGjEBmZiYAYMqUKXjvvffwzDPPICoqCuPHjzd5gURERESmZvRVYPPmzdP9OzIyEl5eXjh8+DCeeeYZDBs2zKTFEREREZmD0QHo90JDQxEaGmqKWoiIiIgaRJ0CUHp6ep0P+PLLL9e7GCIiIqKGUKcAVPUcsCoSiQS/v4F01RPiNRqNaSojIiIiMpM6LYLWarW617fffouuXbti586dKCkpQUlJCXbu3Inu3bsjIyPD3PUSERERPTWj1wBNmzYNKSkpeP7553VtERERaNasGSZOnIhz586ZtEAiIiIiUzP6MvhLly7B2dm5WrtCocDVq1dNUBIRERGReRkdgHr27In4+HgUFhbq2goLC/GXv/wFvXr1MmlxREREROZgdABatWoV8vPz4eXlBT8/P/j5+cHLyws3btzAZ599Zo4aiYiIiEzK6DVAfn5++Omnn7B7926cP38eANCxY0eEh4frrgQjIiIiaszqdSNEiUSCQYMGYdCgQaauh4iIiMjs6hSAlixZgokTJ8Le3h5Lliypte/UqVNNUhgRERGRudQpAC1atAijR4+Gvb09Fi1aVGM/iUTCAERERESNXp0C0JUrVwz+m4iIiKgpMvoqMCIiIqKmrk4zQPHx8XU+4MKFC+tdDBEREVFDqNMM0KlTp+r0On36dL2KWLp0KXx8fGBvb4+QkBAcO3as1v4lJSWIjY2FSqWCXC6Hv78/duzYYbDvvHnzIJFIMG3atHrVRkRERNanTjNA+/btM1sBGzZsQHx8PFJSUhASEoLFixcjIiICOTk5UCqV1fqr1WoMHDgQSqUSaWlp8PDwwLVr1ww+nuP48eNYvnw5goKCzFY/ERERNT0WXwO0cOFCTJgwAePGjUNAQABSUlLQrFkzrFq1ymD/VatWobi4GF999RXCwsLg4+ODvn37okuXLnr97t69i9GjR2PlypVo2bJlQwyFiIiImoh63Qjxxx9/xMaNG5GXlwe1Wq23bcuWLXU+jlqtxokTJ5CQkKBrs7GxQXh4OA4fPmxwn/T0dISGhiI2NhZff/01XF1d8cc//hEzZ86EVCrV9YuNjcWLL76I8PBwfPjhh7XWUVlZicr
KSt37srKyOo+BiIiImh6jZ4DWr1+PP/zhDzh37hy2bt2Khw8f4ueff8bevXuhUCiMOtbt27eh0Wjg5uam1+7m5oaCggKD+1y+fBlpaWnQaDTYsWMH3nvvPSxYsEAv5Kxfvx4nT55EcnJynepITk6GQqHQvTw9PY0aBxERETUtRgegpKQkLFq0CNu2bYNMJsPHH3+M8+fP47XXXoOXl5c5atSj1WqhVCqxYsUKBAcHIzIyEu+++y5SUlIAANevX0dcXBy++OIL2Nvb1+mYCQkJKC0t1b2uX79uziEQERGRhRkdgC5duoQXX3wRACCTyXDv3j1IJBJMnz4dK1asMOpYLi4ukEqlKCws1GsvLCyEu7u7wX1UKhX8/f31Tnd17NgRBQUFulNqRUVF6N69O2xtbWFra4sDBw5gyZIlsLW1hUajqXZMuVwOJycnvRcRERFZL6MDUMuWLVFeXg4A8PDwQFZWFoDHl6bfv3/fqGPJZDIEBwcjMzNT16bVapGZmYnQ0FCD+4SFheHixYvQarW6ttzcXKhUKshkMgwYMABnz57F6dOnda8ePXpg9OjROH36tF5wIiIiInEyehF0nz59sHv3bnTu3BmjRo1CXFwc9u7di927d2PAgAFGFxAfH4+xY8eiR48e6NWrFxYvXox79+5h3LhxAICoqCh4eHjo1vPExMTgk08+QVxcHKZMmYILFy4gKSlJ9wwyR0dHBAYG6n1G8+bN0bp162rtDU2jFXDsSjGKyiugdLRHL99WkNpILFoTERGRGNU5AGVlZSEwMBCffPIJKioqAADvvvsu7OzscOjQIYwcORJ///vfjS4gMjISt27dwuzZs1FQUICuXbsiIyNDtzA6Ly8PNjb/m6jy9PTErl27MH36dAQFBcHDwwNxcXGYOXOm0Z/dkDKy8jF3WzbySyt0bSqFPRKHBWBwoMqClREREYmPRBAEoS4dbWxs0LNnT/z5z3/G66+/DkdHR3PXZjFlZWVQKBQoLS01yXqgjKx8xKw7id9/0VVzP8vGdEcff1cEzN4FAMh+PwLNZPW6QwEREZFoGfP3u85rgA4cOIBOnTphxowZUKlUGDt2LL7//vunLtbaabQC5m7LrhZ+AED4v1di+s8or3jUwJURERGJV50DUO/evbFq1Srk5+fj3//+N65evYq+ffvC398f8+fPr/G+PWJ37Eqx3mkvQwrLKhGSlFlrHyIiIjIdo68Ca968OcaNG4cDBw4gNzcXo0aNwtKlS+Hl5YWXX37ZHDU2aUXltYef3+vh3RIOdrxSjYiIyJyeaqGJn58f/va3v8Hb2xsJCQnYvn27qeqyGkrHut2MMTW6J0LatYKDnRQSCa8MIyIiMqd6B6DvvvsOq1atwubNm2FjY4PXXnsNb775pilrswq9fFtBpbBHQWmFwXVAEgDuCnv08XflJfFEREQNxKhTYDdv3kRSUhL8/f3Rr18/XLx4EUuWLMHNmzexcuVKPPfcc+aqs8mS2kiQOCzA4LaquJM4LIDhh4iIqAHVeQZoyJAh2LNnD1xcXBAVFYXx48ejQ4cO5qzNagwOVGHZmO5ITP8ZhWX/e+q8O+8DREREZBF1DkB2dnZIS0vDSy+9xMdJ1MPgQBXC/FzQec63AB6v+eFpLyIiIsuocwBKT083Zx2i8NuwE9KOj8EgIiKyFKMvgyciIiJq6hiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBi
AiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQaRQBaunQpfHx8YG9vj5CQEBw7dqzW/iUlJYiNjYVKpYJcLoe/vz927Nih275s2TIEBQXByckJTk5OCA0Nxc6dO809DCIiImoibC1dwIYNGxAfH4+UlBSEhIRg8eLFiIiIQE5ODpRKZbX+arUaAwcOhFKpRFpaGjw8PHDt2jU4Ozvr+rRt2xbz5s3DM888A0EQsGbNGrzyyis4deoUOnXq1ICjIyIiosZIIgiCYMkCQkJC0LNnT3zyyScAAK1WC09PT0yZMgWzZs2q1j8lJQUfffQRzp8/Dzs7uzp/TqtWrfDRRx/hzTfffGLfsrIyKBQKlJaWwsnJqe6DeYL76kcImL0LAJD9fgSaySyeP4mIiKyGMX+/LXoKTK1W48SJEwgPD9e12djYIDw8HIcPHza4T3p6OkJDQxEbGws3NzcEBgYiKSkJGo3GYH+NRoP169fj3r17CA0NNdinsrISZWVlei8iIiKyXhYNQLdv34ZGo4Gbm5teu5ubGwoKCgzuc/nyZaSlpUGj0WDHjh147733sGDBAnz44Yd6/c6ePYsWLVpALpfj7bffxtatWxEQEGDwmMnJyVAoFLqXp6enaQZIREREjVKjWARtDK1WC6VSiRUrViA4OBiRkZF49913kZKSotevQ4cOOH36NI4ePYqYmBiMHTsW2dnZBo+ZkJCA0tJS3ev69esNMRQiIiKyEIsuQnFxcYFUKkVhYaFee2FhIdzd3Q3uo1KpYGdnB6lUqmvr2LEjCgoKoFarIZPJAAAymQx+fn4AgODgYBw/fhwff/wxli9fXu2YcrkccrncVMMiIiKiRs6iM0AymQzBwcHIzMzUtWm1WmRmZta4XicsLAwXL16EVqvVteXm5kKlUunCjyFarRaVlZWmK56IiIiaLIufAouPj8fKlSuxZs0anDt3DjExMbh37x7GjRsHAIiKikJCQoKuf0xMDIqLixEXF4fc3Fxs374dSUlJiI2N1fVJSEjAd999h6tXr+Ls2bNISEjA/v37MXr06AYfHxERETU+Fr8OOzIyErdu3cLs2bNRUFCArl27IiMjQ7cwOi8vDzY2/8tpnp6e2LVrF6ZPn46goCB4eHggLi4OM2fO1PUpKipCVFQU8vPzoVAoEBQUhF27dmHgwIENPj4iIiJqfCx+H6DGiPcBIiIianqazH2AiIiIiCyBAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAYiIiIhEhwGIiIiIRIcBiIiIiESHAagBabSC7t9HLxfrvSciIqKGwwDUQDKy8hG+8IDu/bjVx/H8/L3IyMq3YFVERETixADUADKy8hGz7iQKyyr12gtKKxCz7iRDEBERUQNjADIzjVbA3G3ZMHSyq6pt7rZsng4jIiJqQAxAZnbsSjHySytq3C4AyC+twLErxQ1XFBERkcgxAJlZUXnN4ac+/YiIiOjpMQCZmdLR3qT9iIiI6OkxAJlZL99WUCnsIalhuwSASmGPXr6tGrIsIiIiUWMAMjOpjQSJwwIAoFoIqnqfOCwAUpuaIhIRERGZGgNQAxgcqMKyMd3hrtA/zeWusMeyMd0xOFBlocqIiIjEydbSBYjF4EAVBga449iVYhSVV0Dp+Pi0F2d+iIiIGh4DUAOS2kg
Q2r61pcsgIiISPZ4CIyIiItFhACIiIiLRYQAiIiIi0WkUAWjp0qXw8fGBvb09QkJCcOzYsVr7l5SUIDY2FiqVCnK5HP7+/tixY4due3JyMnr27AlHR0colUoMHz4cOTk55h4GERERNREWD0AbNmxAfHw8EhMTcfLkSXTp0gUREREoKioy2F+tVmPgwIG4evUq0tLSkJOTg5UrV8LDw0PX58CBA4iNjcWRI0ewe/duPHz4EIMGDcK9e/caalhERETUiEkEQbDoY8hDQkLQs2dPfPLJJwAArVYLT09PTJkyBbNmzarWPyUlBR999BHOnz8POzu7On3GrVu3oFQqceDAAfTp0+eJ/cvKyqBQKFBaWgonJyfjBkREREQWYczfb4vOAKnVapw4cQLh4eG6NhsbG4SHh+Pw4cMG90lPT0doaChiY2Ph5uaGwMBAJCUlQaPR1Pg5paWlAIBWrfi4CSIiIrLwfYBu374NjUYDNzc3vXY3NzecP3/e4D6XL1/G3r17MXr0aOzYsQMXL17EpEmT8PDhQyQmJlbrr9VqMW3aNISFhSEwMNDgMSsrK1FZWal7X1ZW9hSjIiIiosauyd0IUavVQqlUYsWKFZBKpQgODsaNGzfw0UcfGQxAsbGxyMrKwsGDB2s8ZnJyMubOnWvOsomIiKgRsWgAcnFxgVQqRWFhoV57YWEh3N3dDe6jUqlgZ2cHqVSqa+vYsSMKCgqgVqshk8l07ZMnT8Y333yD7777Dm3btq2xjoSEBMTHx+vel5aWwsvLizNBRERETUjV3+26LG+2aACSyWQIDg5GZmYmhg8fDuDxDE9mZiYmT55scJ+wsDB8+eWX0Gq1sLF5vIQpNzcXKpVKF34EQcCUKVOwdetW7N+/H76+vrXWIZfLIZfLde+rvkBPT8+nHSIRERE1sPLycigUilr7WPwqsA0bNmDs2LFYvnw5evXqhcWLF2Pjxo04f/483NzcEBUVBQ8PDyQnJwMArl+/jk6dOmHs2LGYMmUKLly4gPHjx2Pq1Kl49913AQCTJk3Cl19+ia+//hodOnTQfZZCoYCDg8MTa9Jqtbh58yYcHR0hkTz9w0rLysrg6emJ69evi+aqMo6ZY7ZWHDPHbK2sYcyCIKC8vBxt2rTRTZLUxOJrgCIjI3Hr1i3Mnj0bBQUF6Nq1KzIyMnQLo/Py8vQG4enpiV27dmH69OkICgqCh4cH4uLiMHPmTF2fZcuWAQD69eun91mpqamIjo5+Yk02Nja1njKrLycnpyb7H1V9ccziwDGLA8csDk19zE+a+ali8QAEPF6rU9Mpr/3791drCw0NxZEjR2o8noUntYiIiKiRs/idoImIiIgaGgNQA5DL5UhMTNRbaG3tOGZx4JjFgWMWB7GN2eKLoImIiIgaGmeAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgMxs6dKl8PHxgb29PUJCQnDs2DFLl2RS3333HYYNG4Y2bdpAIpHgq6++0tsuCAJmz54NlUoFBwcHhIeH48KFC5Yp1gSSk5PRs2dPODo6QqlUYvjw4cjJydHrU1FRgdjYWLRu3RotWrTAyJEjqz3vrilZtmwZgoKCdDdHCw0Nxc6dO3XbrW28hsybNw8SiQTTpk3TtVnbuOfMmQOJRKL3evbZZ3XbrW28VW7cuIExY8agdevWcHBwQOfOnfHjjz/qtlvb7zAfH59qP2eJRILY2FgA1vtzNoQByIw2bNiA+Ph4JCYm4uTJk+jSpQsiIiJQVFRk6dJM5t69e+jSpQuWLl1qcPs///lPLFmyBCkpKTh69CiaN2+OiIgIVFRUNHClpnHgwAHExsbiyJEj2L17Nx4+fIhBgwbh3r17uj7Tp0/Htm3bsGnTJhw4cAA3b97Eq6++asGqn07btm0xb948nDhxAj/++CNeeOEFvPLKK/j5558BWN94f+/48eNYvnw5goKC9NqtcdydOnVCfn6+7nXw4EHdNmsc7507dxAWFgY7Ozv
s3LkT2dnZWLBgAVq2bKnrY22/w44fP673M969ezcAYNSoUQCs8+dcI4HMplevXkJsbKzuvUajEdq0aSMkJydbsCrzASBs3bpV916r1Qru7u7CRx99pGsrKSkR5HK58N///tcCFZpeUVGRAEA4cOCAIAiPx2dnZyds2rRJ1+fcuXMCAOHw4cOWKtPkWrZsKXz66adWP97y8nLhmWeeEXbv3i307dtXiIuLEwTBOn/OiYmJQpcuXQxus8bxCoIgzJw5U3j++edr3C6G32FxcXFC+/btBa1Wa7U/55pwBshM1Go1Tpw4gfDwcF2bjY0NwsPDcfjwYQtW1nCuXLmCgoICve9AoVAgJCTEar6D0tJSAECrVq0AACdOnMDDhw/1xvzss8/Cy8vLKsas0Wiwfv163Lt3D6GhoVY/3tjYWLz44ot64wOs9+d84cIFtGnTBu3atcPo0aORl5cHwHrHm56ejh49emDUqFFQKpXo1q0bVq5cqdtu7b/D1Go11q1bh/Hjx0MikVjtz7kmDEBmcvv2bWg0Gt1DXau4ubmhoKDAQlU1rKpxWut3oNVqMW3aNISFhSEwMBDA4zHLZDI4Ozvr9W3qYz579ixatGgBuVyOt99+G1u3bkVAQIDVjhcA1q9fj5MnTyI5ObnaNmscd0hICFavXo2MjAwsW7YMV65cQe/evVFeXm6V4wWAy5cvY9myZXjmmWewa9cuxMTEYOrUqVizZg0A6/8d9tVXX6GkpET3kHBr/TnXpFE8DJWoKYqNjUVWVpbeOglr1aFDB5w+fRqlpaVIS0vD2LFjceDAAUuXZTbXr19HXFwcdu/eDXt7e0uX0yCGDBmi+3dQUBBCQkLg7e2NjRs3wsHBwYKVmY9Wq0WPHj2QlJQEAOjWrRuysrKQkpKCsWPHWrg68/vss88wZMgQtGnTxtKlWARngMzExcUFUqm02ur5wsJCuLu7W6iqhlU1Tmv8DiZPnoxvvvkG+/btQ9u2bXXt7u7uUKvVKCkp0evf1Mcsk8ng5+eH4OBgJCcno0uXLvj444+tdrwnTpxAUVERunfvDltbW9ja2uLAgQNYsmQJbG1t4ebmZpXj/i1nZ2f4+/vj4sWLVvtzVqlUCAgI0Gvr2LGj7tSfNf8Ou3btGvbs2YM///nPujZr/TnXhAHITGQyGYKDg5GZmalr02q1yMzMRGhoqAUrazi+vr5wd3fX+w7Kyspw9OjRJvsdCIKAyZMnY+vWrdi7dy98fX31tgcHB8POzk5vzDk5OcjLy2uyYzZEq9WisrLSasc7YMAAnD17FqdPn9a9evTogdGjR+v+bY3j/q27d+/i0qVLUKlUVvtzDgsLq3Ybi9zcXHh7ewOwzt9hVVJTU6FUKvHiiy/q2qz151wjS6/Ctmbr168X5HK5sHr1aiE7O1uYOHGi4OzsLBQUFFi6NJMpLy8XTp06JZw6dUoAICxcuFA4deqUcO3aNUEQBGHevHmCs7Oz8PXXXws//fST8Morrwi+vr7CgwcPLFx5/cTExAgKhULYv3+/kJ+fr3vdv39f1+ftt98WvLy8hL179wo//vijEBoaKoSGhlqw6qcza9Ys4cCBA8KVK1eEn376SZg1a5YgkUiEb7/9VhAE6xtvTX57FZggWN+4Z8yYIezfv1+4cuWK8MMPPwjh4eGCi4uLUFRUJAiC9Y1XEATh2LFjgq2trfCPf/xDuHDhgvDFF18IzZo1E9atW6frY22/wwTh8RXJXl5ewsyZM6tts8afc00YgMzs3//+t+Dl5SXIZDKhV69ewpEjRyxdkknt27dPAFDtNXbsWEEQHl9G+t577wlubm6CXC4XBgwYIOTk5Fi26KdgaKwAhNTUVF2fBw8eCJMmTRJatmwpNGvWTBgxYoSQn59vuaKf0vjx4wVvb29BJpMJrq6uwoABA3ThRxCsb7w1+X0AsrZxR0ZGCiqVSpDJZIKHh4cQGRkpXLx4Ubfd2sZbZdu2bUJgYKAgl8uFZ599Vli
xYoXedmv7HSYIgrBr1y4BgMFxWOvP2RCJIAiCRaaeiIiIiCyEa4CIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIiIiEh0GICIiIhIdBiAiIiISHQYgIjoqe3fvx8SiUT3EMXVq1fD2dn5qY9rquOY63gA0K9fP0ybNs2kxzRGnz598OWXX9ap73PPPYfNmzebuSKipoEBiEhEUlJS4OjoiEePHuna7t69Czs7O/Tr10+vb1WouXTpktnq2bdvH4YOHYrWrVujWbNmCAgIwIwZM3Djxg2zfWZdXb16FRKJpNbX6tWrsWXLFnzwwQcWqTE9PR2FhYV4/fXX69T/73//O2bNmgWtVmvmyogaPwYgIhHp378/7t69ix9//FHX9v3338Pd3R1Hjx5FRUWFrn3fvn3w8vJC+/btzVLL8uXLER4eDnd3d2zevBnZ2dlISUlBaWkpFixYYJbPNIanpyfy8/N1rxkzZqBTp056bZGRkWjVqhUcHR0tUuOSJUswbtw42NjU7Vf5kCFDUF5ejp07d5q5MqLGjwGISEQ6dOgAlUqF/fv369r279+PV155Bb6+vjhy5Ihee//+/QEAn3/+OXr06AFHR0e4u7vjj3/8I4qKiupdxy+//IKpU6di6tSpWLVqFfr16wcfHx/06dMHn376KWbPnl3jvsuWLUP79u0hk8nQoUMHfP7553rbS0pK8NZbb8HNzQ329vYIDAzEN998Y/BYt27dQo8ePTBixAhUVlbqbZNKpXB3d9e9WrRoAVtbW702BweHaqfAfHx88OGHHyIqKgotWrSAt7c30tPTcevWLbzyyito0aIFgoKC9EIoABw8eBC9e/eGg4MDPD09MXXqVNy7d6/G7+HWrVvYu3cvhg0bpmsTBAFz5syBl5cX5HI52rRpg6lTp+qNaejQoVi/fn2NxyUSCwYgIpHp378/9u3bp3u/b98+9OvXD3379tW1P3jwAEePHtUFoIcPH+KDDz7AmTNn8NVXX+Hq1auIjo6udw2bNm2CWq3GX//6V4Pba1qns3XrVsTFxWHGjBnIysrCW2+9hXHjxunq1mq1GDJkCH744QesW7cO2dnZmDdvHqRSabVjXb9+Hb1790ZgYCDS0tIgl8vrPZ7fW7RoEcLCwnDq1Cm8+OKL+NOf/oSoqCiMGTMGJ0+eRPv27REVFYWqZ1FfunQJgwcPxsiRI/HTTz9hw4YNOHjwICZPnlzjZxw8eBDNmjVDx44ddW2bN2/GokWLsHz5cly4cAFfffUVOnfurLdfr1698P3335tsrERNlmUfRk9EDW3lypVC8+bNhYcPHwplZWWCra2tUFRUJHz55ZdCnz59BEEQhMzMTAGAcO3aNYPHOH78uABAKC8vFwRBEPbt2ycAEO7cuSMIgiCkpqYKCoWixhpiYmIEJyenJ9b6++P84Q9/ECZMmKDXZ9SoUcLQoUMFQRCEXbt2CTY2NkJOTk6txzt//rzg6ekpTJ06VdBqtU+sQxAEITExUejSpUu19r59+wpxcXG6997e3sKYMWN07/Pz8wUAwnvvvadrO3z4sABAyM/PFwRBEN58801h4sSJesf9/vvvBRsbG+HBgwcG61m0aJHQrl07vbYFCxYI/v7+glqtrnEcX3/9tWBjYyNoNJoa+xCJAWeAiESmX79+uHfvHo4fP47vv/8e/v7+cHV1Rd++fXXrgPbv34927drBy8sLAHDixAkMGzYMXl5ecHR0RN++fQEAeXl59apBEARIJBKj9zt37hzCwsL02sLCwnDu3DkAwOnTp9G2bVv4+/vXeIwHDx6gd+/eePXVV/Hxxx/Xq44nCQoK0v3bzc0NAPRmYqraqk4jnjlzBqtXr0aLFi10r4iICGi1Wly5cqXGcdjb2+u1jRo1Cg8ePEC7du0wYcIEbN26VW/BOwA4ODhAq9VWO+VHJDYMQEQi4+fnh7Zt22Lfvn3Yt2+fLsy0adMGnp6eOHToEPbt24cXXng
BAHDv3j1ERETAyckJX3zxBY4fP46tW7cCANRqdb1q8Pf3R2lpKfLz800zqP/j4ODwxD5yuRzh4eH45ptvzHa1mZ2dne7fVQHLUFvV1Vh3797FW2+9hdOnT+teZ86cwYULF2pchO7i4oI7d+7otXl6eiInJwf/+c9/4ODggEmTJqFPnz54+PChrk9xcTGaN29ep++KyJoxABGJUP/+/bF//37s379f7/L3Pn36YOfOnTh27Jhu/c/58+fx66+/Yt68eejduzeeffbZp1oADQD/7//9P8hkMvzzn/80uL3qfkK/17FjR/zwww96bT/88AMCAgIAPJ55+eWXX5Cbm1vjZ9vY2ODzzz9HcHAw+vfvj5s3b9ZvECbUvXt3ZGdnw8/Pr9pLJpMZ3Kdbt24oKCioFoIcHBwwbNgwLFmyBPv378fhw4dx9uxZ3fasrCx069bNrOMhagpsLV0AETW8/v37IzY2Fg8fPtTNAAFA3759MXnyZKjVal0A8vLygkwmw7///W+8/fbbyMrKeur73nh6emLRokWYPHkyysrKEBUVBR8fH/zyyy9Yu3YtWrRoYfBS+L/85S947bXX0K1bN4SHh2Pbtm3YsmUL9uzZo6u/T58+GDlyJBYuXAg/Pz+cP38eEokEgwcP1h1HKpXiiy++wBtvvIEXXngB+/fvh7u7+1ON6WnMnDkTzz33HCZPnow///nPaN68ObKzs7F792588sknBvfp1q0bXFxc8MMPP+Cll14C8PhGjxqNBiEhIWjWrBnWrVsHBwcHeHt76/b7/vvvMWjQoAYZF1FjxhkgIhHq378/Hjx4AD8/P916FOBxgCgvL9ddLg8Arq6uWL16NTZt2oSAgADMmzcP//rXv566hkmTJuHbb7/FjRs3MGLECDz77LP485//DCcnJ7zzzjsG9xk+fDg+/vhj/Otf/0KnTp2wfPlypKam6s1ibd68GT179sQbb7yBgIAA/PWvf4VGo6l2LFtbW/z3v/9Fp06d8MILLzz1rNbTCAoKwoEDB5Cbm4vevXujW7dumD17Ntq0aVPjPlKpFOPGjcMXX3yha3N2dsbKlSsRFhaGoKAg7NmzB9u2bUPr1q0BADdu3MChQ4cwbtw4s4+JqLGTCML/XYdJRERNSkFBATp16oSTJ0/qzfLUZObMmbhz5w5WrFjRANURNW6cASIiaqLc3d3x2Wef1flqPKVSabHHdhA1NpwBIiIiItHhDBARERGJDgMQERERiQ4DEBEREYkOAxARERGJDgMQERERiQ4DEBEREYkOAxARERGJDgMQERERiQ4DEBEREYnO/wcZWnMBFsR3JwAAAABJRU5ErkJggg==", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "\n", + "plt.title('Learning Curve')\n", + "plt.xlabel('Wall Clock Time (s)')\n", + "plt.ylabel('Validation Accuracy')\n", + "plt.scatter(time_history, 1 - np.array(valid_loss_history))\n", + "plt.step(time_history, 1 - np.array(best_valid_loss_history), where='post')\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 3. Comparison with alternatives\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Default LightGBM" + ] + }, + { + "cell_type": "code", + "execution_count": 53, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:27.7753221Z", + "execution_start_time": "2023-04-09T03:13:27.4870777Z", + "livy_statement_state": "available", + "parent_msg_id": "249fba84-ec7c-4801-9dac-861ffa0d0290", + "queued_time": "2023-04-09T03:10:35.4112806Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 79 + }, + "text/plain": [ + "StatementMeta(automl, 7, 79, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from lightgbm import LGBMClassifier\n", + "lgbm = LGBMClassifier()" + ] + }, + { + "cell_type": "code", + "execution_count": 54, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:29.4430851Z", + "execution_start_time": "2023-04-09T03:13:28.0142422Z", + "livy_statement_state": "available", + "parent_msg_id": "635ca27a-7ae7-44e9-9d57-f81b36236398", + "queued_time": "2023-04-09T03:10:35.511851Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + 
"statement_id": 80 + }, + "text/plain": [ + "StatementMeta(automl, 7, 80, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
LGBMClassifier()
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
" + ], + "text/plain": [ + "LGBMClassifier()" + ] + }, + "execution_count": 33, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "lgbm.fit(X_train, y_train)" + ] + }, + { + "cell_type": "code", + "execution_count": 55, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:30.0093622Z", + "execution_start_time": "2023-04-09T03:13:29.7202855Z", + "livy_statement_state": "available", + "parent_msg_id": "608a77ce-d7b2-4921-adff-d1618a8316ad", + "queued_time": "2023-04-09T03:10:35.6550041Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 81 + }, + "text/plain": [ + "StatementMeta(automl, 7, 81, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "y_pred_lgbm = lgbm.predict(X_test)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Default XGBoost" + ] + }, + { + "cell_type": "code", + "execution_count": 56, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:30.5721373Z", + "execution_start_time": "2023-04-09T03:13:30.2846919Z", + "livy_statement_state": "available", + "parent_msg_id": "4b08eacb-4745-48d9-b223-ec5fbdab69ab", + "queued_time": "2023-04-09T03:10:35.7535047Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 82 + }, + "text/plain": [ + "StatementMeta(automl, 7, 82, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from xgboost import XGBClassifier\n", + "xgb = XGBClassifier()\n", + "cat_columns = X_train.select_dtypes(include=['category']).columns\n", + "X = X_train.copy()\n", + "X[cat_columns] = X[cat_columns].apply(lambda x: 
x.cat.codes)\n", + "y_train_xgb = y_train.astype(\"int\")" + ] + }, + { + "cell_type": "code", + "execution_count": 57, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:38.5603565Z", + "execution_start_time": "2023-04-09T03:13:30.8138989Z", + "livy_statement_state": "available", + "parent_msg_id": "7536603f-0254-4f00-aac1-73d67d529a05", + "queued_time": "2023-04-09T03:10:35.8542308Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 83 + }, + "text/plain": [ + "StatementMeta(automl, 7, 83, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
XGBClassifier(base_score=0.5, booster='gbtree', callbacks=None,\n",
+              "              colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1,\n",
+              "              early_stopping_rounds=None, enable_categorical=False,\n",
+              "              eval_metric=None, gamma=0, gpu_id=-1, grow_policy='depthwise',\n",
+              "              importance_type=None, interaction_constraints='',\n",
+              "              learning_rate=0.300000012, max_bin=256, max_cat_to_onehot=4,\n",
+              "              max_delta_step=0, max_depth=6, max_leaves=0, min_child_weight=1,\n",
+              "              missing=nan, monotone_constraints='()', n_estimators=100,\n",
+              "              n_jobs=0, num_parallel_tree=1, predictor='auto', random_state=0,\n",
+              "              reg_alpha=0, reg_lambda=1, ...)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
" + ], + "text/plain": [ + "XGBClassifier(base_score=0.5, booster='gbtree', callbacks=None,\n", + " colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1,\n", + " early_stopping_rounds=None, enable_categorical=False,\n", + " eval_metric=None, gamma=0, gpu_id=-1, grow_policy='depthwise',\n", + " importance_type=None, interaction_constraints='',\n", + " learning_rate=0.300000012, max_bin=256, max_cat_to_onehot=4,\n", + " max_delta_step=0, max_depth=6, max_leaves=0, min_child_weight=1,\n", + " missing=nan, monotone_constraints='()', n_estimators=100,\n", + " n_jobs=0, num_parallel_tree=1, predictor='auto', random_state=0,\n", + " reg_alpha=0, reg_lambda=1, ...)" + ] + }, + "execution_count": 39, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "xgb.fit(X, y_train_xgb)" + ] + }, + { + "cell_type": "code", + "execution_count": 58, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:39.158293Z", + "execution_start_time": "2023-04-09T03:13:38.8646861Z", + "livy_statement_state": "available", + "parent_msg_id": "6cc9c9ae-70a1-4233-8d7e-87b0f49cfe84", + "queued_time": "2023-04-09T03:10:35.9526459Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 84 + }, + "text/plain": [ + "StatementMeta(automl, 7, 84, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "X = X_test.copy()\n", + "X[cat_columns] = X[cat_columns].apply(lambda x: x.cat.codes)\n", + "y_pred_xgb = xgb.predict(X)\n", + "y_test_xgb = y_test.astype(\"int\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": 59, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:40.1931477Z", + "execution_start_time": "2023-04-09T03:13:39.4172862Z", + 
"livy_statement_state": "available", + "parent_msg_id": "ce07a96a-a8a2-43f1-b7fc-c76eb204382e", + "queued_time": "2023-04-09T03:10:36.0501561Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 85 + }, + "text/plain": [ + "StatementMeta(automl, 7, 85, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "default xgboost accuracy = 0.6676060098186078\n", + "default lgbm accuracy = 0.6602346380315323\n", + "flaml (10 min) accuracy = 0.6732939797991784\n" + ] + } + ], + "source": [ + "print('default xgboost accuracy', '=', 1 - sklearn_metric_loss_score('accuracy', y_pred_xgb, y_test_xgb))\n", + "print('default lgbm accuracy', '=', 1 - sklearn_metric_loss_score('accuracy', y_pred_lgbm, y_test))\n", + "print('flaml (10 min) accuracy', '=', 1 - sklearn_metric_loss_score('accuracy', y_pred, y_test))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "## 4. Customized Learner" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "Some experienced AutoML users may have a preferred model to tune, or a model they have already hand-tuned reasonably well, before launching the AutoML experiment. They then need to find optimal configurations for the customized model alongside the standard built-in learners.\n", + "\n", + "FLAML can easily incorporate customized/new learners (preferably with a scikit-learn-style API) provided by the user, as demonstrated below." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "### Example of Regularized Greedy Forest\n", + "\n", + "[Regularized Greedy Forest](https://arxiv.org/abs/1109.0887) (RGF) is a machine learning method currently not included in FLAML. RGF has many tuning parameters, the most critical of which are: `[max_leaf, n_iter, n_tree_search, opt_interval, min_samples_leaf]`. To run a customized/new learner, the user needs to provide the following information:\n", + "* an implementation of the customized/new learner\n", + "* a list of hyperparameter names and types\n", + "* rough ranges of hyperparameters (i.e., upper/lower bounds)\n", + "* low-cost initial values for cost-related hyperparameters (e.g., the initial values of `max_leaf` and `n_iter` should be small)\n", + "\n", + "In this example, the above information for RGF is wrapped in a Python class called *MyRegularizedGreedyForest* that exposes the hyperparameters." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 60, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:50.122632Z", + "execution_start_time": "2023-04-09T03:13:40.4359303Z", + "livy_statement_state": "available", + "parent_msg_id": "4855a514-2527-4852-95e2-743f509bf2c7", + "queued_time": "2023-04-09T03:10:36.1656825Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 86 + }, + "text/plain": [ + "StatementMeta(automl, 7, 86, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Collecting rgf-python\n", + " Using cached rgf_python-3.12.0-py3-none-manylinux1_x86_64.whl (757 kB)\n", + "Requirement already satisfied: joblib in /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages (from rgf-python) (1.0.1)\n", + "Requirement already satisfied: scikit-learn>=0.18 in /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages (from rgf-python) (0.23.2)\n", + "Requirement already satisfied: numpy>=1.13.3 in /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages (from scikit-learn>=0.18->rgf-python) (1.19.4)\n", + "Requirement already satisfied: threadpoolctl>=2.0.0 in /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages (from scikit-learn>=0.18->rgf-python) (2.1.0)\n", + "Requirement already satisfied: scipy>=0.19.1 in /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages (from scikit-learn>=0.18->rgf-python) (1.5.3)\n", + "Installing collected packages: rgf-python\n", + "Successfully installed rgf-python-3.12.0\n" + ] + } + ], + "source": [ + "!pip install rgf-python " + ] + }, + { + "cell_type": "code", + "execution_count": 61, + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "outputs": [ + 
{ + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:50.6337005Z", + "execution_start_time": "2023-04-09T03:13:50.3672163Z", + "livy_statement_state": "available", + "parent_msg_id": "6f475eea-c02b-491f-a85e-e696dfdf6882", + "queued_time": "2023-04-09T03:10:36.2639428Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 87 + }, + "text/plain": [ + "StatementMeta(automl, 7, 87, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "''' SKLearnEstimator is the super class for a sklearn learner '''\n", + "from flaml.model import SKLearnEstimator\n", + "from flaml import tune\n", + "from flaml.data import CLASSIFICATION\n", + "\n", + "\n", + "class MyRegularizedGreedyForest(SKLearnEstimator):\n", + " def __init__(self, task='binary', **config):\n", + " '''Constructor\n", + " \n", + " Args:\n", + " task: A string of the task type, one of\n", + " 'binary', 'multiclass', 'regression'\n", + " config: A dictionary containing the hyperparameter names\n", + " and 'n_jobs' as keys. n_jobs is the number of parallel threads.\n", + " '''\n", + "\n", + " super().__init__(task, **config)\n", + "\n", + " '''task=binary or multi for classification task'''\n", + " if task in CLASSIFICATION:\n", + " from rgf.sklearn import RGFClassifier\n", + "\n", + " self.estimator_class = RGFClassifier\n", + " else:\n", + " from rgf.sklearn import RGFRegressor\n", + " \n", + " self.estimator_class = RGFRegressor\n", + "\n", + " @classmethod\n", + " def search_space(cls, data_size, task):\n", + " '''[required method] search space\n", + "\n", + " Returns:\n", + " A dictionary of the search space. 
\n", + " Each key is the name of a hyperparameter, and the value is a dict with\n", + " its domain (required) and low_cost_init_value, init_value,\n", + " cat_hp_cost (if applicable).\n", + " e.g.,\n", + " {'domain': tune.randint(lower=1, upper=10), 'init_value': 1}.\n", + " '''\n", + " space = { \n", + " 'max_leaf': {'domain': tune.lograndint(lower=4, upper=data_size[0]), 'init_value': 4, 'low_cost_init_value': 4},\n", + " 'n_iter': {'domain': tune.lograndint(lower=1, upper=data_size[0]), 'init_value': 1, 'low_cost_init_value': 1},\n", + " 'n_tree_search': {'domain': tune.lograndint(lower=1, upper=32768), 'init_value': 1, 'low_cost_init_value': 1},\n", + " 'opt_interval': {'domain': tune.lograndint(lower=1, upper=10000), 'init_value': 100},\n", + " 'learning_rate': {'domain': tune.loguniform(lower=0.01, upper=20.0)},\n", + " 'min_samples_leaf': {'domain': tune.lograndint(lower=1, upper=20), 'init_value': 20},\n", + " }\n", + " return space\n", + "\n", + " @classmethod\n", + " def size(cls, config):\n", + " '''[optional method] memory size of the estimator in bytes\n", + " \n", + " Args:\n", + " config - the dict of the hyperparameter config\n", + "\n", + " Returns:\n", + " A float of the memory size required by the estimator to train the\n", + " given config\n", + " '''\n", + " max_leaves = int(round(config['max_leaf']))\n", + " n_estimators = int(round(config['n_iter']))\n", + " return (max_leaves * 3 + (max_leaves - 1) * 4 + 1.0) * n_estimators * 8\n", + "\n", + " @classmethod\n", + " def cost_relative2lgbm(cls):\n", + " '''[optional method] relative cost compared to lightgbm\n", + " '''\n", + " return 1.0\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "### Add Customized Learner and Run FLAML AutoML\n", + "\n", + "After adding RGF to the list of learners, we run AutoML by tuning the hyperparameters of RGF as well as those of the default learners. 
" + ] + }, + { + "cell_type": "code", + "execution_count": 62, + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:13:51.1287115Z", + "execution_start_time": "2023-04-09T03:13:50.8741632Z", + "livy_statement_state": "available", + "parent_msg_id": "702a9e5c-a880-483b-985c-4ebbcbde5e07", + "queued_time": "2023-04-09T03:10:36.3578919Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 88 + }, + "text/plain": [ + "StatementMeta(automl, 7, 88, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "automl = AutoML()\n", + "automl.add_learner(learner_name='RGF', learner_class=MyRegularizedGreedyForest)" + ] + }, + { + "cell_type": "code", + "execution_count": 63, + "metadata": { + "slideshow": { + "slide_type": "slide" + }, + "tags": [] + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:14:03.5802415Z", + "execution_start_time": "2023-04-09T03:13:51.3699652Z", + "livy_statement_state": "available", + "parent_msg_id": "2e5e85aa-8e78-4d78-a275-c6a160a7b415", + "queued_time": "2023-04-09T03:10:36.4663752Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 89 + }, + "text/plain": [ + "StatementMeta(automl, 7, 89, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[flaml.automl.automl: 04-09 03:13:51] {2726} INFO - task = classification\n", + "[flaml.automl.automl: 04-09 03:13:51] {2728} INFO - Data split method: stratified\n", + "[flaml.automl.automl: 04-09 03:13:51] {2731} INFO - Evaluation method: holdout\n", + "[flaml.automl.automl: 
04-09 03:13:51] {2858} INFO - Minimizing error metric: 1-accuracy\n", + "[flaml.automl.automl: 04-09 03:13:51] {3004} INFO - List of ML learners in AutoML Run: ['RGF', 'lgbm', 'rf', 'xgboost']\n", + "[flaml.automl.automl: 04-09 03:13:51] {3334} INFO - iteration 0, current learner RGF\n", + "[flaml.automl.automl: 04-09 03:13:52] {3472} INFO - Estimated sufficient time budget=173368s. Estimated necessary time budget=173s.\n", + "[flaml.automl.automl: 04-09 03:13:52] {3519} INFO - at 0.9s,\testimator RGF's best error=0.3840,\tbest estimator RGF's best error=0.3840\n", + "[flaml.automl.automl: 04-09 03:13:52] {3334} INFO - iteration 1, current learner RGF\n", + "[flaml.automl.automl: 04-09 03:13:52] {3519} INFO - at 1.2s,\testimator RGF's best error=0.3840,\tbest estimator RGF's best error=0.3840\n", + "[flaml.automl.automl: 04-09 03:13:52] {3334} INFO - iteration 2, current learner RGF\n", + "[flaml.automl.automl: 04-09 03:13:52] {3519} INFO - at 1.6s,\testimator RGF's best error=0.3840,\tbest estimator RGF's best error=0.3840\n", + "[flaml.automl.automl: 04-09 03:13:52] {3334} INFO - iteration 3, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:52] {3519} INFO - at 1.6s,\testimator lgbm's best error=0.3777,\tbest estimator lgbm's best error=0.3777\n", + "[flaml.automl.automl: 04-09 03:13:52] {3334} INFO - iteration 4, current learner RGF\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.0s,\testimator RGF's best error=0.3840,\tbest estimator lgbm's best error=0.3777\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 5, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.1s,\testimator lgbm's best error=0.3777,\tbest estimator lgbm's best error=0.3777\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 6, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.1s,\testimator lgbm's best error=0.3777,\tbest estimator lgbm's best error=0.3777\n", + 
"[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 7, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.1s,\testimator lgbm's best error=0.3661,\tbest estimator lgbm's best error=0.3661\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 8, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.2s,\testimator lgbm's best error=0.3661,\tbest estimator lgbm's best error=0.3661\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 9, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.2s,\testimator lgbm's best error=0.3633,\tbest estimator lgbm's best error=0.3633\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 10, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.2s,\testimator lgbm's best error=0.3633,\tbest estimator lgbm's best error=0.3633\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 11, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.3s,\testimator lgbm's best error=0.3633,\tbest estimator lgbm's best error=0.3633\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 12, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.3s,\testimator lgbm's best error=0.3613,\tbest estimator lgbm's best error=0.3613\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 13, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.4s,\testimator lgbm's best error=0.3613,\tbest estimator lgbm's best error=0.3613\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 14, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:53] {3519} INFO - at 2.5s,\testimator lgbm's best error=0.3591,\tbest estimator lgbm's best error=0.3591\n", + "[flaml.automl.automl: 04-09 03:13:53] {3334} INFO - iteration 15, current learner 
lgbm\n", + "[flaml.automl.automl: 04-09 03:13:54] {3519} INFO - at 2.7s,\testimator lgbm's best error=0.3591,\tbest estimator lgbm's best error=0.3591\n", + "[flaml.automl.automl: 04-09 03:13:54] {3334} INFO - iteration 16, current learner RGF\n", + "[flaml.automl.automl: 04-09 03:13:54] {3519} INFO - at 3.1s,\testimator RGF's best error=0.3840,\tbest estimator lgbm's best error=0.3591\n", + "[flaml.automl.automl: 04-09 03:13:54] {3334} INFO - iteration 17, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:54] {3519} INFO - at 3.2s,\testimator lgbm's best error=0.3591,\tbest estimator lgbm's best error=0.3591\n", + "[flaml.automl.automl: 04-09 03:13:54] {3334} INFO - iteration 18, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:54] {3519} INFO - at 3.4s,\testimator lgbm's best error=0.3591,\tbest estimator lgbm's best error=0.3591\n", + "[flaml.automl.automl: 04-09 03:13:54] {3334} INFO - iteration 19, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:54] {3519} INFO - at 3.5s,\testimator lgbm's best error=0.3591,\tbest estimator lgbm's best error=0.3591\n", + "[flaml.automl.automl: 04-09 03:13:54] {3334} INFO - iteration 20, current learner RGF\n", + "[flaml.automl.automl: 04-09 03:13:55] {3519} INFO - at 4.0s,\testimator RGF's best error=0.3766,\tbest estimator lgbm's best error=0.3591\n", + "[flaml.automl.automl: 04-09 03:13:55] {3334} INFO - iteration 21, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:55] {3519} INFO - at 4.1s,\testimator lgbm's best error=0.3591,\tbest estimator lgbm's best error=0.3591\n", + "[flaml.automl.automl: 04-09 03:13:55] {3334} INFO - iteration 22, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:55] {3519} INFO - at 4.5s,\testimator lgbm's best error=0.3514,\tbest estimator lgbm's best error=0.3514\n", + "[flaml.automl.automl: 04-09 03:13:55] {3334} INFO - iteration 23, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:56] {3519} INFO - at 
4.7s,\testimator xgboost's best error=0.3787,\tbest estimator lgbm's best error=0.3514\n", + "[flaml.automl.automl: 04-09 03:13:56] {3334} INFO - iteration 24, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:56] {3519} INFO - at 4.8s,\testimator xgboost's best error=0.3765,\tbest estimator lgbm's best error=0.3514\n", + "[flaml.automl.automl: 04-09 03:13:56] {3334} INFO - iteration 25, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:56] {3519} INFO - at 4.8s,\testimator rf's best error=0.3816,\tbest estimator lgbm's best error=0.3514\n", + "[flaml.automl.automl: 04-09 03:13:56] {3334} INFO - iteration 26, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:56] {3519} INFO - at 4.9s,\testimator rf's best error=0.3724,\tbest estimator lgbm's best error=0.3514\n", + "[flaml.automl.automl: 04-09 03:13:56] {3334} INFO - iteration 27, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:56] {3519} INFO - at 4.9s,\testimator rf's best error=0.3724,\tbest estimator lgbm's best error=0.3514\n", + "[flaml.automl.automl: 04-09 03:13:56] {3334} INFO - iteration 28, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:56] {3519} INFO - at 5.0s,\testimator xgboost's best error=0.3765,\tbest estimator lgbm's best error=0.3514\n", + "[flaml.automl.automl: 04-09 03:13:56] {3334} INFO - iteration 29, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:56] {3519} INFO - at 5.0s,\testimator xgboost's best error=0.3765,\tbest estimator lgbm's best error=0.3514\n", + "[flaml.automl.automl: 04-09 03:13:56] {3334} INFO - iteration 30, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:56] {3519} INFO - at 5.4s,\testimator lgbm's best error=0.3511,\tbest estimator lgbm's best error=0.3511\n", + "[flaml.automl.automl: 04-09 03:13:56] {3334} INFO - iteration 31, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:57] {3519} INFO - at 5.7s,\testimator lgbm's best error=0.3497,\tbest estimator lgbm's best 
error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:57] {3334} INFO - iteration 32, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:57] {3519} INFO - at 5.9s,\testimator lgbm's best error=0.3497,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:57] {3334} INFO - iteration 33, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:57] {3519} INFO - at 6.0s,\testimator rf's best error=0.3724,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:57] {3334} INFO - iteration 34, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:57] {3519} INFO - at 6.3s,\testimator lgbm's best error=0.3497,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:57] {3334} INFO - iteration 35, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:57] {3519} INFO - at 6.6s,\testimator lgbm's best error=0.3497,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:57] {3334} INFO - iteration 36, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:57] {3519} INFO - at 6.7s,\testimator xgboost's best error=0.3699,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:57] {3334} INFO - iteration 37, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:58] {3519} INFO - at 6.7s,\testimator rf's best error=0.3724,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:58] {3334} INFO - iteration 38, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:58] {3519} INFO - at 6.8s,\testimator xgboost's best error=0.3699,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:58] {3334} INFO - iteration 39, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:58] {3519} INFO - at 7.1s,\testimator lgbm's best error=0.3497,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:58] {3334} INFO - 
iteration 40, current learner rf\n", + "[flaml.automl.automl: 04-09 03:13:58] {3519} INFO - at 7.3s,\testimator rf's best error=0.3724,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:58] {3334} INFO - iteration 41, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:58] {3519} INFO - at 7.4s,\testimator xgboost's best error=0.3657,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:58] {3334} INFO - iteration 42, current learner RGF\n", + "[flaml.automl.automl: 04-09 03:13:59] {3519} INFO - at 7.7s,\testimator RGF's best error=0.3766,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:59] {3334} INFO - iteration 43, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:59] {3519} INFO - at 7.8s,\testimator xgboost's best error=0.3657,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:59] {3334} INFO - iteration 44, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:59] {3519} INFO - at 7.8s,\testimator xgboost's best error=0.3657,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:59] {3334} INFO - iteration 45, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:59] {3519} INFO - at 7.9s,\testimator xgboost's best error=0.3657,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:59] {3334} INFO - iteration 46, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:59] {3519} INFO - at 8.1s,\testimator lgbm's best error=0.3497,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:59] {3334} INFO - iteration 47, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:13:59] {3519} INFO - at 8.3s,\testimator xgboost's best error=0.3657,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:59] {3334} INFO - iteration 48, current learner lgbm\n", + 
"[flaml.automl.automl: 04-09 03:13:59] {3519} INFO - at 8.4s,\testimator lgbm's best error=0.3497,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:59] {3334} INFO - iteration 49, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:13:59] {3519} INFO - at 8.5s,\testimator lgbm's best error=0.3497,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:13:59] {3334} INFO - iteration 50, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:14:00] {3519} INFO - at 8.7s,\testimator xgboost's best error=0.3657,\tbest estimator lgbm's best error=0.3497\n", + "[flaml.automl.automl: 04-09 03:14:00] {3334} INFO - iteration 51, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:01] {3519} INFO - at 10.5s,\testimator lgbm's best error=0.3448,\tbest estimator lgbm's best error=0.3448\n", + "[flaml.automl.automl: 04-09 03:14:03] {3783} INFO - retrain lgbm for 1.6s\n", + "[flaml.automl.automl: 04-09 03:14:03] {3790} INFO - retrained model: LGBMClassifier(colsample_bytree=0.6649148062238498,\n", + " learning_rate=0.06500463168967066, max_bin=255,\n", + " min_child_samples=5, n_estimators=190, num_leaves=20,\n", + " reg_alpha=0.0017271108100233477, reg_lambda=0.00468154746700776,\n", + " verbose=-1)\n", + "[flaml.automl.automl: 04-09 03:14:03] {3034} INFO - fit succeeded\n", + "[flaml.automl.automl: 04-09 03:14:03] {3035} INFO - Time taken to find the best model: 10.480074405670166\n" + ] + } + ], + "source": [ + "settings = {\n", + " \"time_budget\": 10, # total running time in seconds\n", + " \"metric\": 'accuracy', \n", + " \"estimator_list\": ['RGF', 'lgbm', 'rf', 'xgboost'], # list of ML learners\n", + " \"task\": 'classification', # task type \n", + " \"log_file_name\": 'airlines_experiment_custom_learner.log', # flaml log file \n", + " \"log_training_metric\": True, # whether to log training metric\n", + "}\n", + "\n", + "automl.fit(X_train=X_train, y_train=y_train, **settings)" + ] + }, + { 
+ "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 5. Customized Metric\n", + "\n", + "It's also easy to customize the optimization metric. As an example, we demonstrate with a custom metric function which combines training loss and validation loss as the final loss to minimize." + ] + }, + { + "cell_type": "code", + "execution_count": 64, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:14:04.1303148Z", + "execution_start_time": "2023-04-09T03:14:03.8308127Z", + "livy_statement_state": "available", + "parent_msg_id": "e1ced49a-d49a-4496-8ded-58deb936d247", + "queued_time": "2023-04-09T03:10:36.6448318Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 90 + }, + "text/plain": [ + "StatementMeta(automl, 7, 90, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "def custom_metric(X_val, y_val, estimator, labels, X_train, y_train,\n", + " weight_val=None, weight_train=None, config=None,\n", + " groups_val=None, groups_train=None):\n", + " from sklearn.metrics import log_loss\n", + " import time\n", + " start = time.time()\n", + " y_pred = estimator.predict_proba(X_val)\n", + " pred_time = (time.time() - start) / len(X_val)\n", + " val_loss = log_loss(y_val, y_pred, labels=labels,\n", + " sample_weight=weight_val)\n", + " y_pred = estimator.predict_proba(X_train)\n", + " train_loss = log_loss(y_train, y_pred, labels=labels,\n", + " sample_weight=weight_train)\n", + " alpha = 0.5\n", + " return val_loss * (1 + alpha) - alpha * train_loss, {\n", + " \"val_loss\": val_loss, \"train_loss\": train_loss, \"pred_time\": pred_time\n", + " }\n", + " # two elements are returned:\n", + " # the first element is the metric to minimize as a float number,\n", + " # the second element is a dictionary of the metrics to log" + ] + }, + { + 
"cell_type": "markdown", + "metadata": {}, + "source": [ + "We can then pass this custom metric function to automl's `fit` method." + ] + }, + { + "cell_type": "code", + "execution_count": 65, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": "2023-04-09T03:14:16.3791532Z", + "execution_start_time": "2023-04-09T03:14:04.3643576Z", + "livy_statement_state": "available", + "parent_msg_id": "e472943a-3204-41fc-a723-5f39f302b04c", + "queued_time": "2023-04-09T03:10:36.8448553Z", + "session_id": "7", + "session_start_time": null, + "spark_jobs": null, + "spark_pool": "automl", + "state": "finished", + "statement_id": 91 + }, + "text/plain": [ + "StatementMeta(automl, 7, 91, Finished, Available)" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[flaml.automl.automl: 04-09 03:14:04] {2726} INFO - task = classification\n", + "[flaml.automl.automl: 04-09 03:14:04] {2728} INFO - Data split method: stratified\n", + "[flaml.automl.automl: 04-09 03:14:04] {2731} INFO - Evaluation method: holdout\n", + "[flaml.automl.automl: 04-09 03:14:04] {2858} INFO - Minimizing error metric: customized metric\n", + "[flaml.automl.automl: 04-09 03:14:04] {3004} INFO - List of ML learners in AutoML Run: ['lgbm', 'rf', 'xgboost', 'extra_tree', 'xgb_limitdepth', 'lrl1']\n", + "[flaml.automl.automl: 04-09 03:14:04] {3334} INFO - iteration 0, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:04] {3472} INFO - Estimated sufficient time budget=11191s. 
Estimated necessary time budget=258s.\n", + "[flaml.automl.automl: 04-09 03:14:04] {3519} INFO - at 0.4s,\testimator lgbm's best error=0.6647,\tbest estimator lgbm's best error=0.6647\n", + "[flaml.automl.automl: 04-09 03:14:04] {3334} INFO - iteration 1, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:04] {3519} INFO - at 0.5s,\testimator lgbm's best error=0.6647,\tbest estimator lgbm's best error=0.6647\n", + "[flaml.automl.automl: 04-09 03:14:04] {3334} INFO - iteration 2, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:04] {3519} INFO - at 0.5s,\testimator lgbm's best error=0.6491,\tbest estimator lgbm's best error=0.6491\n", + "[flaml.automl.automl: 04-09 03:14:04] {3334} INFO - iteration 3, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 0.7s,\testimator xgboost's best error=0.6845,\tbest estimator lgbm's best error=0.6491\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 4, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 0.8s,\testimator extra_tree's best error=0.6678,\tbest estimator lgbm's best error=0.6491\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 5, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 0.8s,\testimator lgbm's best error=0.6423,\tbest estimator lgbm's best error=0.6423\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 6, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 0.9s,\testimator lgbm's best error=0.6423,\tbest estimator lgbm's best error=0.6423\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 7, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 0.9s,\testimator lgbm's best error=0.6423,\tbest estimator lgbm's best error=0.6423\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 8, current learner lgbm\n", + "[flaml.automl.automl: 
04-09 03:14:05] {3519} INFO - at 0.9s,\testimator lgbm's best error=0.6400,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 9, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 1.0s,\testimator lgbm's best error=0.6400,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 10, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 1.2s,\testimator xgboost's best error=0.6845,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 11, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 1.3s,\testimator extra_tree's best error=0.6576,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 12, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 1.3s,\testimator rf's best error=0.6614,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 13, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 1.4s,\testimator rf's best error=0.6523,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 14, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 1.4s,\testimator rf's best error=0.6523,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 15, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 1.5s,\testimator xgboost's best error=0.6503,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 16, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:05] {3519} INFO - at 1.6s,\testimator lgbm's best 
error=0.6400,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:05] {3334} INFO - iteration 17, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 1.8s,\testimator extra_tree's best error=0.6576,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 18, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 1.8s,\testimator lgbm's best error=0.6400,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 19, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 2.0s,\testimator xgboost's best error=0.6486,\tbest estimator lgbm's best error=0.6400\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 20, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 2.1s,\testimator lgbm's best error=0.6335,\tbest estimator lgbm's best error=0.6335\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 21, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 2.1s,\testimator lgbm's best error=0.6335,\tbest estimator lgbm's best error=0.6335\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 22, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 2.2s,\testimator lgbm's best error=0.6335,\tbest estimator lgbm's best error=0.6335\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 23, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 2.3s,\testimator lgbm's best error=0.6335,\tbest estimator lgbm's best error=0.6335\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 24, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 2.4s,\testimator rf's best error=0.6523,\tbest estimator lgbm's best error=0.6335\n", + 
"[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 25, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 2.5s,\testimator extra_tree's best error=0.6576,\tbest estimator lgbm's best error=0.6335\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 26, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:06] {3519} INFO - at 2.6s,\testimator lgbm's best error=0.6335,\tbest estimator lgbm's best error=0.6335\n", + "[flaml.automl.automl: 04-09 03:14:06] {3334} INFO - iteration 27, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:07] {3519} INFO - at 2.9s,\testimator lgbm's best error=0.6328,\tbest estimator lgbm's best error=0.6328\n", + "[flaml.automl.automl: 04-09 03:14:07] {3334} INFO - iteration 28, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:07] {3519} INFO - at 3.0s,\testimator extra_tree's best error=0.6576,\tbest estimator lgbm's best error=0.6328\n", + "[flaml.automl.automl: 04-09 03:14:07] {3334} INFO - iteration 29, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:07] {3519} INFO - at 3.1s,\testimator extra_tree's best error=0.6443,\tbest estimator lgbm's best error=0.6328\n", + "[flaml.automl.automl: 04-09 03:14:07] {3334} INFO - iteration 30, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:07] {3519} INFO - at 3.4s,\testimator lgbm's best error=0.6241,\tbest estimator lgbm's best error=0.6241\n", + "[flaml.automl.automl: 04-09 03:14:07] {3334} INFO - iteration 31, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:07] {3519} INFO - at 3.7s,\testimator lgbm's best error=0.6241,\tbest estimator lgbm's best error=0.6241\n", + "[flaml.automl.automl: 04-09 03:14:07] {3334} INFO - iteration 32, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:08] {3519} INFO - at 4.0s,\testimator lgbm's best error=0.6206,\tbest estimator lgbm's best error=0.6206\n", + "[flaml.automl.automl: 04-09 03:14:08] 
{3334} INFO - iteration 33, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:08] {3519} INFO - at 4.1s,\testimator extra_tree's best error=0.6443,\tbest estimator lgbm's best error=0.6206\n", + "[flaml.automl.automl: 04-09 03:14:08] {3334} INFO - iteration 34, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:08] {3519} INFO - at 4.4s,\testimator lgbm's best error=0.6206,\tbest estimator lgbm's best error=0.6206\n", + "[flaml.automl.automl: 04-09 03:14:08] {3334} INFO - iteration 35, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:09] {3519} INFO - at 4.7s,\testimator lgbm's best error=0.6206,\tbest estimator lgbm's best error=0.6206\n", + "[flaml.automl.automl: 04-09 03:14:09] {3334} INFO - iteration 36, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:09] {3519} INFO - at 4.8s,\testimator extra_tree's best error=0.6416,\tbest estimator lgbm's best error=0.6206\n", + "[flaml.automl.automl: 04-09 03:14:09] {3334} INFO - iteration 37, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:09] {3519} INFO - at 5.3s,\testimator lgbm's best error=0.6185,\tbest estimator lgbm's best error=0.6185\n", + "[flaml.automl.automl: 04-09 03:14:09] {3334} INFO - iteration 38, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:09] {3519} INFO - at 5.4s,\testimator rf's best error=0.6458,\tbest estimator lgbm's best error=0.6185\n", + "[flaml.automl.automl: 04-09 03:14:09] {3334} INFO - iteration 39, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:10] {3519} INFO - at 6.0s,\testimator lgbm's best error=0.6156,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:10] {3334} INFO - iteration 40, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:10] {3519} INFO - at 6.4s,\testimator lgbm's best error=0.6156,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:10] {3334} INFO - iteration 41, current learner rf\n", + 
"[flaml.automl.automl: 04-09 03:14:10] {3519} INFO - at 6.6s,\testimator rf's best error=0.6458,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:10] {3334} INFO - iteration 42, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:11] {3519} INFO - at 7.1s,\testimator lgbm's best error=0.6156,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:11] {3334} INFO - iteration 43, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:11] {3519} INFO - at 7.3s,\testimator rf's best error=0.6425,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:11] {3334} INFO - iteration 44, current learner extra_tree\n", + "[flaml.automl.automl: 04-09 03:14:11] {3519} INFO - at 7.4s,\testimator extra_tree's best error=0.6416,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:11] {3334} INFO - iteration 45, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:11] {3519} INFO - at 7.6s,\testimator rf's best error=0.6384,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:11] {3334} INFO - iteration 46, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:12] {3519} INFO - at 8.1s,\testimator lgbm's best error=0.6156,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:12] {3334} INFO - iteration 47, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:12] {3519} INFO - at 8.3s,\testimator rf's best error=0.6384,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:12] {3334} INFO - iteration 48, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:13] {3519} INFO - at 9.0s,\testimator lgbm's best error=0.6156,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:13] {3334} INFO - iteration 49, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:14:13] {3519} INFO - at 
9.1s,\testimator xgb_limitdepth's best error=0.6682,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:13] {3334} INFO - iteration 50, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:14:13] {3519} INFO - at 9.2s,\testimator xgb_limitdepth's best error=0.6682,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:13] {3334} INFO - iteration 51, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:14:13] {3519} INFO - at 9.3s,\testimator xgb_limitdepth's best error=0.6542,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:13] {3334} INFO - iteration 52, current learner xgboost\n", + "[flaml.automl.automl: 04-09 03:14:13] {3519} INFO - at 9.3s,\testimator xgboost's best error=0.6486,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:13] {3334} INFO - iteration 53, current learner rf\n", + "[flaml.automl.automl: 04-09 03:14:13] {3519} INFO - at 9.4s,\testimator rf's best error=0.6384,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:13] {3334} INFO - iteration 54, current learner lgbm\n", + "[flaml.automl.automl: 04-09 03:14:14] {3519} INFO - at 9.8s,\testimator lgbm's best error=0.6156,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:14] {3334} INFO - iteration 55, current learner xgb_limitdepth\n", + "[flaml.automl.automl: 04-09 03:14:14] {3519} INFO - at 10.0s,\testimator xgb_limitdepth's best error=0.6496,\tbest estimator lgbm's best error=0.6156\n", + "[flaml.automl.automl: 04-09 03:14:14] {3783} INFO - retrain lgbm for 0.3s\n", + "[flaml.automl.automl: 04-09 03:14:14] {3790} INFO - retrained model: LGBMClassifier(colsample_bytree=0.9031374907114736,\n", + " learning_rate=0.3525398690474661, max_bin=1023,\n", + " min_child_samples=4, n_estimators=22, num_leaves=69,\n", + " reg_alpha=0.0060777294606297145, reg_lambda=37.65858370595088,\n", + 
" verbose=-1)\n", + "[flaml.automl.automl: 04-09 03:14:14] {3034} INFO - fit succeeded\n", + "[flaml.automl.automl: 04-09 03:14:14] {3035} INFO - Time taken to find the best model: 5.982900142669678\n" + ] + } + ], + "source": [ + "automl = AutoML()\n", + "settings = {\n", + " \"time_budget\": 10, # total running time in seconds\n", + " \"metric\": custom_metric, # pass the custom metric funtion here\n", + " \"task\": 'classification', # task type\n", + " \"log_file_name\": 'airlines_experiment_custom_metric.log', # flaml log file\n", + "}\n", + "\n", + "automl.fit(X_train=X_train, y_train=y_train, **settings)" + ] + } + ], + "metadata": { + "description": null, + "kernelspec": { + "display_name": "Synapse PySpark", + "name": "synapse_pyspark" + }, + "language_info": { + "name": "python" + }, + "save_output": true, + "synapse_widget": { + "state": {}, + "version": "0.1" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/notebook/tune_synapseml.ipynb b/notebook/tune_synapseml.ipynb new file mode 100644 index 00000000000..c0f8523fee1 --- /dev/null +++ b/notebook/tune_synapseml.ipynb @@ -0,0 +1,1109 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "# Hyperparameter Tuning with FLAML\n", + "\n", + "| | | | |\n", + "|-----|--------|--------|--------|\n", + "|![synapse](https://microsoft.github.io/SynapseML/img/logo.svg)| \"drawing\" | \n", + "\n", + "\n", + "\n", + "In this notebook, we use FLAML to finetune a SynapseML LightGBM regression model for predicting house price. We use [*california_housing* dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_california_housing.html#sklearn.datasets.fetch_california_housing). 
The data consists of 20640 entries with 8 features.\n", + "\n", + "The result shows that with **2 mins** of tuning, FLAML **improved** the metric R^2 **from 0.71 to 0.81**.\n", + "\n", + "We will perform the task in the following steps:\n", + "- **Setup** environment\n", + "- **Prepare** train and test datasets\n", + "- **Train** with initial parameters\n", + "- **Finetune** with FLAML\n", + "- **Check** results\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## 1. Setup environment\n", + "\n", + "In this step, we first install FLAML and MLflow, then set up MLflow autologging to make sure we have the proper environment for the task. " + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "jupyter": { + "outputs_hidden": true + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "d48224ad-8201-4266-b8e0-8e9c198e9dd0", + "queued_time": "2023-04-09T13:53:09.4702521Z", + "session_id": null, + "session_start_time": "2023-04-09T13:53:09.5127728Z", + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": {}, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Collecting flaml[synapse]==1.1.3\n", + " Downloading FLAML-1.1.3-py3-none-any.whl (224 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m224.2/224.2 KB\u001b[0m \u001b[31m10.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting xgboost==1.6.1\n", + " Downloading xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl (192.9 MB)\n", + "\u001b[2K 
\u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m192.9/192.9 MB\u001b[0m \u001b[31m34.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m00:01\u001b[0m00:01\u001b[0m\n", + "\u001b[?25hCollecting pandas==1.5.1\n", + " Downloading pandas-1.5.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m12.2/12.2 MB\u001b[0m \u001b[31m8.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m:00:01\u001b[0m00:01\u001b[0m\n", + "\u001b[?25hCollecting numpy==1.23.4\n", + " Downloading numpy-1.23.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m17.1/17.1 MB\u001b[0m \u001b[31m135.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m00:01\u001b[0m00:01\u001b[0m\n", + "\u001b[?25hCollecting openml\n", + " Downloading openml-0.13.1.tar.gz (127 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m127.6/127.6 KB\u001b[0m \u001b[31m70.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25h Preparing metadata (setup.py) ... 
\u001b[?25l-\b \bdone\n", + "\u001b[?25hCollecting scipy>=1.4.1\n", + " Downloading scipy-1.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.5 MB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m34.5/34.5 MB\u001b[0m \u001b[31m120.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m00:01\u001b[0m00:01\u001b[0m\n", + "\u001b[?25hCollecting lightgbm>=2.3.1\n", + " Downloading lightgbm-3.3.5-py3-none-manylinux1_x86_64.whl (2.0 MB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.0/2.0 MB\u001b[0m \u001b[31m170.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting scikit-learn>=0.24\n", + " Downloading scikit_learn-1.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (9.8 MB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m9.8/9.8 MB\u001b[0m \u001b[31m186.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m00:01\u001b[0m\n", + "\u001b[?25hCollecting pyspark>=3.0.0\n", + " Downloading pyspark-3.3.2.tar.gz (281.4 MB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m281.4/281.4 MB\u001b[0m \u001b[31m26.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m00:01\u001b[0m00:01\u001b[0m\n", + "\u001b[?25h Preparing metadata (setup.py) ... 
\u001b[?25l-\b \bdone\n", + "\u001b[?25hCollecting joblibspark>=0.5.0\n", + " Downloading joblibspark-0.5.1-py3-none-any.whl (15 kB)\n", + "Collecting optuna==2.8.0\n", + " Downloading optuna-2.8.0-py3-none-any.whl (301 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m302.0/302.0 KB\u001b[0m \u001b[31m104.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting python-dateutil>=2.8.1\n", + " Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m247.7/247.7 KB\u001b[0m \u001b[31m98.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting pytz>=2020.1\n", + " Downloading pytz-2023.3-py2.py3-none-any.whl (502 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m502.3/502.3 KB\u001b[0m \u001b[31m126.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting alembic\n", + " Downloading alembic-1.10.3-py3-none-any.whl (212 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m212.3/212.3 KB\u001b[0m \u001b[31m88.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting colorlog\n", + " Downloading colorlog-6.7.0-py2.py3-none-any.whl (11 kB)\n", + "Collecting tqdm\n", + " Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.1/77.1 KB\u001b[0m \u001b[31m39.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting cliff\n", + " Downloading cliff-4.2.0-py3-none-any.whl (81 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m81.0/81.0 KB\u001b[0m \u001b[31m37.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting sqlalchemy>=1.1.0\n", + " Downloading SQLAlchemy-2.0.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.8 MB)\n", + 
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.8/2.8 MB\u001b[0m \u001b[31m190.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting cmaes>=0.8.2\n", + " Downloading cmaes-0.9.1-py3-none-any.whl (21 kB)\n", + "Collecting packaging>=20.0\n", + " Downloading packaging-23.0-py3-none-any.whl (42 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m42.7/42.7 KB\u001b[0m \u001b[31m25.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting liac-arff>=2.4.0\n", + " Downloading liac-arff-2.5.0.tar.gz (13 kB)\n", + " Preparing metadata (setup.py) ... \u001b[?25l-\b \bdone\n", + "\u001b[?25hCollecting xmltodict\n", + " Downloading xmltodict-0.13.0-py2.py3-none-any.whl (10.0 kB)\n", + "Collecting requests\n", + " Downloading requests-2.28.2-py3-none-any.whl (62 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.8/62.8 KB\u001b[0m \u001b[31m25.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting minio\n", + " Downloading minio-7.1.14-py3-none-any.whl (77 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.2/77.2 KB\u001b[0m \u001b[31m40.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting pyarrow\n", + " Downloading pyarrow-11.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (35.0 MB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m35.0/35.0 MB\u001b[0m \u001b[31m119.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m00:01\u001b[0m00:01\u001b[0m\n", + "\u001b[?25hCollecting joblib>=0.14\n", + " Downloading joblib-1.2.0-py3-none-any.whl (297 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m298.0/298.0 KB\u001b[0m \u001b[31m104.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting wheel\n", + " Downloading 
wheel-0.40.0-py3-none-any.whl (64 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m64.5/64.5 KB\u001b[0m \u001b[31m35.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting py4j==0.10.9.5\n", + " Downloading py4j-0.10.9.5-py2.py3-none-any.whl (199 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m199.7/199.7 KB\u001b[0m \u001b[31m88.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting six>=1.5\n", + " Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)\n", + "Collecting threadpoolctl>=2.0.0\n", + " Downloading threadpoolctl-3.1.0-py3-none-any.whl (14 kB)\n", + "Collecting urllib3\n", + " Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m140.9/140.9 KB\u001b[0m \u001b[31m70.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting certifi\n", + " Downloading certifi-2022.12.7-py3-none-any.whl (155 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m155.3/155.3 KB\u001b[0m \u001b[31m78.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting charset-normalizer<4,>=2\n", + " Downloading charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (195 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m195.9/195.9 KB\u001b[0m \u001b[31m86.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting idna<4,>=2.5\n", + " Downloading idna-3.4-py3-none-any.whl (61 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m61.5/61.5 KB\u001b[0m \u001b[31m34.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting greenlet!=0.4.17\n", + " Downloading greenlet-2.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (618 kB)\n", + "\u001b[2K 
\u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m618.5/618.5 KB\u001b[0m \u001b[31m137.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting typing-extensions>=4.2.0\n", + " Downloading typing_extensions-4.5.0-py3-none-any.whl (27 kB)\n", + "Collecting Mako\n", + " Downloading Mako-1.2.4-py3-none-any.whl (78 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m78.7/78.7 KB\u001b[0m \u001b[31m44.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting importlib-resources\n", + " Downloading importlib_resources-5.12.0-py3-none-any.whl (36 kB)\n", + "Collecting importlib-metadata\n", + " Downloading importlib_metadata-6.2.0-py3-none-any.whl (21 kB)\n", + "Collecting stevedore>=2.0.1\n", + " Downloading stevedore-5.0.0-py3-none-any.whl (49 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.6/49.6 KB\u001b[0m \u001b[31m27.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting PyYAML>=3.12\n", + " Downloading PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (701 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m701.2/701.2 KB\u001b[0m \u001b[31m136.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting autopage>=0.4.0\n", + " Downloading autopage-0.5.1-py3-none-any.whl (29 kB)\n", + "Collecting cmd2>=1.0.0\n", + " Downloading cmd2-2.4.3-py3-none-any.whl (147 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m147.2/147.2 KB\u001b[0m \u001b[31m71.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting PrettyTable>=0.7.2\n", + " Downloading prettytable-3.6.0-py3-none-any.whl (27 kB)\n", + "Collecting attrs>=16.3.0\n", + " Downloading attrs-22.2.0-py3-none-any.whl (60 kB)\n", + "\u001b[2K 
\u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m60.0/60.0 KB\u001b[0m \u001b[31m38.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting pyperclip>=1.6\n", + " Downloading pyperclip-1.8.2.tar.gz (20 kB)\n", + " Preparing metadata (setup.py) ... \u001b[?25l-\b \bdone\n", + "\u001b[?25hCollecting wcwidth>=0.1.7\n", + " Downloading wcwidth-0.2.6-py2.py3-none-any.whl (29 kB)\n", + "Collecting zipp>=0.5\n", + " Downloading zipp-3.15.0-py3-none-any.whl (6.8 kB)\n", + "Collecting pbr!=2.1.0,>=2.0.0\n", + " Downloading pbr-5.11.1-py2.py3-none-any.whl (112 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m112.7/112.7 KB\u001b[0m \u001b[31m59.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hCollecting MarkupSafe>=0.9.2\n", + " Downloading MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)\n", + "Building wheels for collected packages: openml, liac-arff, pyspark, pyperclip\n", + " Building wheel for openml (setup.py) ... \u001b[?25l-\b \b\\\b \bdone\n", + "\u001b[?25h Created wheel for openml: filename=openml-0.13.1-py3-none-any.whl size=142787 sha256=a8434d2ac76ac96031814803c3e41204c26927e9f4429117e59a494e4b592adb\n", + " Stored in directory: /home/trusted-service-user/.cache/pip/wheels/c4/1c/5e/5775d391b42f19ce45a465873d8ce87da9ea56f0cd3af920c4\n", + " Building wheel for liac-arff (setup.py) ... \u001b[?25l-\b \bdone\n", + "\u001b[?25h Created wheel for liac-arff: filename=liac_arff-2.5.0-py3-none-any.whl size=11731 sha256=07dd6471e0004d4f00aec033896502af0b23e073f0c43e95afa97db2b545ce83\n", + " Stored in directory: /home/trusted-service-user/.cache/pip/wheels/a2/de/68/bf3972de3ecb31e32bef59a7f4c75f0687a3674c476b347c14\n", + " Building wheel for pyspark (setup.py) ... 
\u001b[?25l-\b \b\\\b \b|\b \b/\b \b-\b \b\\\b \b|\b \b/\b \b-\b \b\\\b \b|\b \b/\b \b-\b \b\\\b \b|\b \b/\b \b-\b \b\\\b \b|\b \b/\b \b-\b \b\\\b \b|\b \b/\b \b-\b \b\\\b \b|\b \b/\b \b-\b \b\\\b \b|\b \b/\b \b-\b \b\\\b \b|\b \b/\b \b-\b \bdone\n", + "\u001b[?25h Created wheel for pyspark: filename=pyspark-3.3.2-py2.py3-none-any.whl size=281824026 sha256=a0064b8d2ed7587f48ff6c4bc6afd36c683af7c568084f16ebd143aa6955a0a8\n", + " Stored in directory: /home/trusted-service-user/.cache/pip/wheels/b1/59/a0/a1a0624b5e865fd389919c1a10f53aec9b12195d6747710baf\n", + " Building wheel for pyperclip (setup.py) ... \u001b[?25l-\b \b\\\b \bdone\n", + "\u001b[?25h Created wheel for pyperclip: filename=pyperclip-1.8.2-py3-none-any.whl size=11107 sha256=b3ad4639c1af2d7f2e4c5c8c0e40b4ff849b5c5b26730285f3d7ad320badd2c3\n", + " Stored in directory: /home/trusted-service-user/.cache/pip/wheels/7f/1a/65/84ff8c386bec21fca6d220ea1f5498a0367883a78dd5ba6122\n", + "Successfully built openml liac-arff pyspark pyperclip\n", + "Installing collected packages: wcwidth, pytz, pyperclip, py4j, zipp, xmltodict, wheel, urllib3, typing-extensions, tqdm, threadpoolctl, six, PyYAML, pyspark, PrettyTable, pbr, packaging, numpy, MarkupSafe, liac-arff, joblib, idna, greenlet, colorlog, charset-normalizer, certifi, autopage, attrs, stevedore, sqlalchemy, scipy, requests, python-dateutil, pyarrow, minio, Mako, joblibspark, importlib-resources, importlib-metadata, cmd2, cmaes, xgboost, scikit-learn, pandas, cliff, alembic, optuna, openml, lightgbm, flaml\n", + " Attempting uninstall: wcwidth\n", + " Found existing installation: wcwidth 0.2.5\n", + " Not uninstalling wcwidth at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'wcwidth'. 
No files were found to uninstall.\n", + " Attempting uninstall: pytz\n", + " Found existing installation: pytz 2021.1\n", + " Not uninstalling pytz at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'pytz'. No files were found to uninstall.\n", + " Attempting uninstall: pyperclip\n", + " Found existing installation: pyperclip 1.8.2\n", + " Not uninstalling pyperclip at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'pyperclip'. No files were found to uninstall.\n", + " Attempting uninstall: py4j\n", + " Found existing installation: py4j 0.10.9.3\n", + " Not uninstalling py4j at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'py4j'. No files were found to uninstall.\n", + " Attempting uninstall: zipp\n", + " Found existing installation: zipp 3.5.0\n", + " Not uninstalling zipp at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'zipp'. No files were found to uninstall.\n", + " Attempting uninstall: wheel\n", + " Found existing installation: wheel 0.36.2\n", + " Not uninstalling wheel at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'wheel'. No files were found to uninstall.\n", + " Attempting uninstall: urllib3\n", + " Found existing installation: urllib3 1.26.4\n", + " Not uninstalling urllib3 at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'urllib3'. 
No files were found to uninstall.\n", + " Attempting uninstall: typing-extensions\n", + " Found existing installation: typing-extensions 3.10.0.0\n", + " Not uninstalling typing-extensions at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'typing-extensions'. No files were found to uninstall.\n", + " Attempting uninstall: tqdm\n", + " Found existing installation: tqdm 4.61.2\n", + " Not uninstalling tqdm at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'tqdm'. No files were found to uninstall.\n", + " Attempting uninstall: threadpoolctl\n", + " Found existing installation: threadpoolctl 2.1.0\n", + " Not uninstalling threadpoolctl at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'threadpoolctl'. No files were found to uninstall.\n", + " Attempting uninstall: six\n", + " Found existing installation: six 1.16.0\n", + " Not uninstalling six at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'six'. No files were found to uninstall.\n", + " Attempting uninstall: PyYAML\n", + " Found existing installation: PyYAML 5.4.1\n", + " Not uninstalling pyyaml at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'PyYAML'. 
No files were found to uninstall.\n", + " Attempting uninstall: pyspark\n", + " Found existing installation: pyspark 3.2.1\n", + " Not uninstalling pyspark at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'pyspark'. No files were found to uninstall.\n", + " Attempting uninstall: PrettyTable\n", + " Found existing installation: prettytable 2.4.0\n", + " Not uninstalling prettytable at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'prettytable'. No files were found to uninstall.\n", + " Attempting uninstall: packaging\n", + " Found existing installation: packaging 21.0\n", + " Not uninstalling packaging at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'packaging'. No files were found to uninstall.\n", + " Attempting uninstall: numpy\n", + " Found existing installation: numpy 1.19.4\n", + " Not uninstalling numpy at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'numpy'. No files were found to uninstall.\n", + " Attempting uninstall: MarkupSafe\n", + " Found existing installation: MarkupSafe 2.0.1\n", + " Not uninstalling markupsafe at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'MarkupSafe'. 
No files were found to uninstall.\n", + " Attempting uninstall: liac-arff\n", + " Found existing installation: liac-arff 2.5.0\n", + " Not uninstalling liac-arff at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'liac-arff'. No files were found to uninstall.\n", + " Attempting uninstall: joblib\n", + " Found existing installation: joblib 1.0.1\n", + " Not uninstalling joblib at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'joblib'. No files were found to uninstall.\n", + " Attempting uninstall: idna\n", + " Found existing installation: idna 2.10\n", + " Not uninstalling idna at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'idna'. No files were found to uninstall.\n", + " Attempting uninstall: greenlet\n", + " Found existing installation: greenlet 1.1.0\n", + " Not uninstalling greenlet at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'greenlet'. No files were found to uninstall.\n", + " Attempting uninstall: certifi\n", + " Found existing installation: certifi 2021.5.30\n", + " Not uninstalling certifi at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'certifi'. 
No files were found to uninstall.\n", + " Attempting uninstall: attrs\n", + " Found existing installation: attrs 21.2.0\n", + " Not uninstalling attrs at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'attrs'. No files were found to uninstall.\n", + " Attempting uninstall: sqlalchemy\n", + " Found existing installation: SQLAlchemy 1.4.20\n", + " Not uninstalling sqlalchemy at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'SQLAlchemy'. No files were found to uninstall.\n", + " Attempting uninstall: scipy\n", + " Found existing installation: scipy 1.5.3\n", + " Not uninstalling scipy at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'scipy'. No files were found to uninstall.\n", + " Attempting uninstall: requests\n", + " Found existing installation: requests 2.25.1\n", + " Not uninstalling requests at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'requests'. No files were found to uninstall.\n", + " Attempting uninstall: python-dateutil\n", + " Found existing installation: python-dateutil 2.8.1\n", + " Not uninstalling python-dateutil at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'python-dateutil'. 
No files were found to uninstall.\n", + " Attempting uninstall: pyarrow\n", + " Found existing installation: pyarrow 3.0.0\n", + " Not uninstalling pyarrow at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'pyarrow'. No files were found to uninstall.\n", + " Attempting uninstall: importlib-resources\n", + " Found existing installation: importlib-resources 5.10.0\n", + " Not uninstalling importlib-resources at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'importlib-resources'. No files were found to uninstall.\n", + " Attempting uninstall: importlib-metadata\n", + " Found existing installation: importlib-metadata 4.6.1\n", + " Not uninstalling importlib-metadata at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'importlib-metadata'. No files were found to uninstall.\n", + " Attempting uninstall: xgboost\n", + " Found existing installation: xgboost 1.4.0\n", + " Not uninstalling xgboost at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'xgboost'. No files were found to uninstall.\n", + " Attempting uninstall: scikit-learn\n", + " Found existing installation: scikit-learn 0.23.2\n", + " Not uninstalling scikit-learn at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'scikit-learn'. 
No files were found to uninstall.\n", + " Attempting uninstall: pandas\n", + " Found existing installation: pandas 1.2.3\n", + " Not uninstalling pandas at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'pandas'. No files were found to uninstall.\n", + " Attempting uninstall: lightgbm\n", + " Found existing installation: lightgbm 3.2.1\n", + " Not uninstalling lightgbm at /home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages, outside environment /nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39\n", + " Can't uninstall 'lightgbm'. No files were found to uninstall.\n", + "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n", + "tensorflow 2.4.1 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.\n", + "tensorflow 2.4.1 requires typing-extensions~=3.7.4, but you have typing-extensions 4.5.0 which is incompatible.\n", + "pmdarima 1.8.2 requires numpy~=1.19.0, but you have numpy 1.23.4 which is incompatible.\n", + "koalas 1.8.0 requires numpy<1.20.0,>=1.14, but you have numpy 1.23.4 which is incompatible.\n", + "gevent 21.1.2 requires greenlet<2.0,>=0.4.17; platform_python_implementation == \"CPython\", but you have greenlet 2.0.2 which is incompatible.\n", + "azureml-dataset-runtime 1.34.0 requires pyarrow<4.0.0,>=0.17.0, but you have pyarrow 11.0.0 which is incompatible.\n", + "azureml-core 1.34.0 requires urllib3<=1.26.6,>=1.23, but you have urllib3 1.26.15 which is incompatible.\u001b[0m\u001b[31m\n", + "\u001b[0mSuccessfully installed Mako-1.2.4 MarkupSafe-2.1.2 PrettyTable-3.6.0 PyYAML-6.0 alembic-1.10.3 attrs-22.2.0 autopage-0.5.1 certifi-2022.12.7 charset-normalizer-3.1.0 cliff-4.2.0 cmaes-0.9.1 cmd2-2.4.3 colorlog-6.7.0 flaml-1.1.3 greenlet-2.0.2 idna-3.4 importlib-metadata-6.2.0 
importlib-resources-5.12.0 joblib-1.2.0 joblibspark-0.5.1 liac-arff-2.5.0 lightgbm-3.3.5 minio-7.1.14 numpy-1.23.4 openml-0.13.1 optuna-2.8.0 packaging-23.0 pandas-1.5.1 pbr-5.11.1 py4j-0.10.9.5 pyarrow-11.0.0 pyperclip-1.8.2 pyspark-3.3.2 python-dateutil-2.8.2 pytz-2023.3 requests-2.28.2 scikit-learn-1.2.2 scipy-1.10.1 six-1.16.0 sqlalchemy-2.0.9 stevedore-5.0.0 threadpoolctl-3.1.0 tqdm-4.65.0 typing-extensions-4.5.0 urllib3-1.26.15 wcwidth-0.2.6 wheel-0.40.0 xgboost-1.6.1 xmltodict-0.13.0 zipp-3.15.0\n", + "\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.0.1 is available.\n", + "You should consider upgrading via the '/nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n", + "\u001b[0mNote: you may need to restart the kernel to use updated packages.\n" + ] + }, + { + "data": {}, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Warning: PySpark kernel has been restarted to use updated packages.\n", + "\n" + ] + } + ], + "source": [ + "%pip install flaml[synapse]==1.1.3 xgboost==1.6.1 pandas==1.5.1 numpy==1.23.4 openml --force-reinstall" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Uncomment `spark = _init_spark()` below if running in a local Spark environment."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def _init_spark():\n", + " import pyspark\n", + "\n", + " spark = (\n", + " pyspark.sql.SparkSession.builder.appName(\"MyApp\")\n", + " .master(\"local[2]\")\n", + " .config(\n", + " \"spark.jars.packages\",\n", + " (\n", + " \"com.microsoft.azure:synapseml_2.12:0.10.2,\"\n", + " \"org.apache.hadoop:hadoop-azure:3.3.5,\"\n", + " \"com.microsoft.azure:azure-storage:8.6.6\"\n", + " ),\n", + " )\n", + " .config(\"spark.jars.repositories\", \"https://mmlspark.azureedge.net/maven\")\n", + " .config(\"spark.sql.debug.maxToStringFields\", \"100\")\n", + " .getOrCreate()\n", + " )\n", + " return spark\n", + "\n", + "# spark = _init_spark()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## 2. Prepare train and test datasets\n", + "In this step, we first download the dataset with sklearn.datasets, then convert it into a spark dataframe. After that, we split the dataset into train, validation and test datasets." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "b48443c1-a512-4624-b047-1a04eeba9a9d", + "queued_time": "2023-04-09T13:53:09.3733824Z", + "session_id": null, + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/opt/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py:471: FutureWarning: iteritems is deprecated and will be removed in a future version. Use .items instead.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Dataframe has 20640 rows\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "import pandas as pd\n", + "from sklearn.datasets import fetch_california_housing\n", + "\n", + "data = fetch_california_housing()\n", + "\n", + "feature_cols = [\"f\" + str(i) for i in range(data.data.shape[1])]\n", + "header = [\"target\"] + feature_cols\n", + "df = spark.createDataFrame(\n", + " pd.DataFrame(data=np.column_stack((data.target, data.data)), columns=header)\n", + ").repartition(1)\n", + "\n", + "print(\"Dataframe has {} rows\".format(df.count()))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "Here, we split the datasets randomly." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "0600f529-d1d0-4132-a55c-24464a10a9c3", + "queued_time": "2023-04-09T13:53:09.3762563Z", + "session_id": null, + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [ + "Row(target=0.14999, features=DenseVector([2.1, 19.0, 3.7744, 1.4573, 490.0, 2.9878, 36.4, -117.02]))" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from pyspark.ml.feature import VectorAssembler\n", + "\n", + "# Convert features into a single vector column\n", + "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", + "data = featurizer.transform(df)[\"target\", \"features\"]\n", + "\n", + "train_data, test_data = data.randomSplit([0.85, 0.15], seed=41)\n", + "train_data_sub, val_data_sub = train_data.randomSplit([0.85, 0.15], seed=41)\n", + "\n", + "train_data.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## 3. Train with initial parameters\n", + "In this step, we prepare a train function which can accept different config of parameters. And we train a model with initial parameters." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "3c41f117-9de6-4f81-b9fe-697842cb7d87", + "queued_time": "2023-04-09T13:53:09.377987Z", + "session_id": null, + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from synapse.ml.lightgbm import LightGBMRegressor\n", + "from pyspark.ml.evaluation import RegressionEvaluator\n", + "\n", + "def train(alpha, learningRate, numLeaves, numIterations, train_data=train_data_sub, val_data=val_data_sub):\n", + " \"\"\"\n", + " This train() function:\n", + " - takes hyperparameters as inputs (for tuning later)\n", + " - returns the R2 score on the validation dataset\n", + "\n", + " Wrapping code as a function makes it easier to reuse the code later for tuning.\n", + " \"\"\"\n", + "\n", + " lgr = LightGBMRegressor(\n", + " objective=\"quantile\",\n", + " alpha=alpha,\n", + " learningRate=learningRate,\n", + " numLeaves=numLeaves,\n", + " labelCol=\"target\",\n", + " numIterations=numIterations,\n", + " )\n", + "\n", + " model = lgr.fit(train_data)\n", + "\n", + " # Define an evaluation metric and evaluate the model on the validation dataset.\n", + " predictions = model.transform(val_data)\n", + " evaluator = RegressionEvaluator(predictionCol=\"prediction\", labelCol=\"target\", metricName=\"r2\")\n", + " eval_metric = evaluator.evaluate(predictions)\n", + "\n", + " return model, eval_metric" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + 
"transient": { + "deleting": false + } + } + }, + "source": [ + "Here, we train a model with default parameters." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "b936d629-6efc-4582-a4cc-24b55a8f1260", + "queued_time": "2023-04-09T13:53:09.3794418Z", + "session_id": null, + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "R2 of initial model on test dataset is: 0.7086364659469071\n" + ] + } + ], + "source": [ + "init_model, init_eval_metric = train(alpha=0.2, learningRate=0.3, numLeaves=31, numIterations=100, train_data=train_data, val_data=test_data)\n", + "print(\"R2 of initial model on test dataset is: \", init_eval_metric)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## 4. Tune with FLAML\n", + "\n", + "In this step, we configure the search space for hyperparameters, and use FLAML to tune the model over the parameters." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "5785d2f4-5945-45ec-865d-1cf62f1365f2", + "queued_time": "2023-04-09T13:53:09.3808794Z", + "session_id": null, + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/dask/dataframe/backends.py:187: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.\n", + " _numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)\n", + "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/dask/dataframe/backends.py:187: FutureWarning: pandas.Float64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.\n", + " _numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)\n", + "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/dask/dataframe/backends.py:187: FutureWarning: pandas.UInt64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.\n", + " _numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Failure while loading azureml_run_type_providers. 
Failed to load entrypoint azureml.scriptrun = azureml.core.script_run:ScriptRun._from_run_dto with exception (urllib3 1.26.15 (/nfs4/pyenv-78360147-4170-4df6-b8c9-313b8eb68e39/lib/python3.8/site-packages), Requirement.parse('urllib3<=1.26.6,>=1.23')).\n" + ] + } + ], + "source": [ + "import flaml\n", + "import time\n", + "\n", + "# define the search space\n", + "params = {\n", + " \"alpha\": flaml.tune.uniform(0, 1),\n", + " \"learningRate\": flaml.tune.uniform(0.001, 1),\n", + " \"numLeaves\": flaml.tune.randint(30, 100),\n", + " \"numIterations\": flaml.tune.randint(100, 300),\n", + "}\n", + "\n", + "# define the tune function\n", + "def flaml_tune(config):\n", + " _, metric = train(**config)\n", + " return {\"r2\": metric}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "Here, we optimize the hyperparameters with FLAML. We set the total tuning time to 120 seconds." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "7f984630-2cd4-46f6-a029-df857503ac59", + "queued_time": "2023-04-09T13:53:09.3823941Z", + "session_id": null, + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[flaml.tune.tune: 04-09 13:58:26] {523} INFO - Using search algorithm BlendSearch.\n", + "No low-cost partial config given to the search algorithm. 
For cost-frugal search, consider providing low-cost values for cost-related hps via 'low_cost_partial_config'. More info can be found at https://microsoft.github.io/FLAML/docs/FAQ#about-low_cost_partial_config-in-tune\n", + "You passed a `space` parameter to OptunaSearch that contained unresolved search space definitions. OptunaSearch should however be instantiated with fully configured search spaces only. To use Ray Tune's automatic search space conversion, pass the space definition as part of the `config` argument to `tune.run()` instead.\n", + "[flaml.tune.tune: 04-09 13:58:26] {811} INFO - trial 1 config: {'alpha': 0.09743207287894917, 'learningRate': 0.64761881525086, 'numLeaves': 30, 'numIterations': 172}\n", + "[flaml.tune.tune: 04-09 13:58:29] {215} INFO - result: {'r2': 0.687704619858422, 'training_iteration': 0, 'config': {'alpha': 0.09743207287894917, 'learningRate': 0.64761881525086, 'numLeaves': 30, 'numIterations': 172}, 'config/alpha': 0.09743207287894917, 'config/learningRate': 0.64761881525086, 'config/numLeaves': 30, 'config/numIterations': 172, 'experiment_tag': 'exp', 'time_total_s': 2.9537112712860107}\n", + "[flaml.tune.tune: 04-09 13:58:29] {811} INFO - trial 2 config: {'alpha': 0.771320643266746, 'learningRate': 0.021731197410042098, 'numLeaves': 74, 'numIterations': 249}\n", + "[flaml.tune.tune: 04-09 13:58:34] {215} INFO - result: {'r2': 0.8122065159182567, 'training_iteration': 0, 'config': {'alpha': 0.771320643266746, 'learningRate': 0.021731197410042098, 'numLeaves': 74, 'numIterations': 249}, 'config/alpha': 0.771320643266746, 'config/learningRate': 0.021731197410042098, 'config/numLeaves': 74, 'config/numIterations': 249, 'experiment_tag': 'exp', 'time_total_s': 5.294095993041992}\n", + "[flaml.tune.tune: 04-09 13:58:34] {811} INFO - trial 3 config: {'alpha': 0.4985070123025904, 'learningRate': 0.2255718488853168, 'numLeaves': 43, 'numIterations': 252}\n", + "[flaml.tune.tune: 04-09 13:58:38] {215} INFO - result: {'r2': 
0.8601164308675, 'training_iteration': 0, 'config': {'alpha': 0.4985070123025904, 'learningRate': 0.2255718488853168, 'numLeaves': 43, 'numIterations': 252}, 'config/alpha': 0.4985070123025904, 'config/learningRate': 0.2255718488853168, 'config/numLeaves': 43, 'config/numIterations': 252, 'experiment_tag': 'exp', 'time_total_s': 3.6809208393096924}\n", + "[flaml.tune.tune: 04-09 13:58:38] {811} INFO - trial 4 config: {'alpha': 0.5940316589938806, 'learningRate': 0.22926504794631342, 'numLeaves': 35, 'numIterations': 279}\n", + "[flaml.tune.tune: 04-09 13:58:41] {215} INFO - result: {'r2': 0.8645092967530056, 'training_iteration': 0, 'config': {'alpha': 0.5940316589938806, 'learningRate': 0.22926504794631342, 'numLeaves': 35, 'numIterations': 279}, 'config/alpha': 0.5940316589938806, 'config/learningRate': 0.22926504794631342, 'config/numLeaves': 35, 'config/numIterations': 279, 'experiment_tag': 'exp', 'time_total_s': 3.345020294189453}\n", + "[flaml.tune.tune: 04-09 13:58:41] {811} INFO - trial 5 config: {'alpha': 0.16911083656253545, 'learningRate': 0.08925147435983626, 'numLeaves': 77, 'numIterations': 290}\n", + "[flaml.tune.tune: 04-09 13:58:47] {215} INFO - result: {'r2': 0.7628328927228814, 'training_iteration': 0, 'config': {'alpha': 0.16911083656253545, 'learningRate': 0.08925147435983626, 'numLeaves': 77, 'numIterations': 290}, 'config/alpha': 0.16911083656253545, 'config/learningRate': 0.08925147435983626, 'config/numLeaves': 77, 'config/numIterations': 290, 'experiment_tag': 'exp', 'time_total_s': 5.498648643493652}\n", + "[flaml.tune.tune: 04-09 13:58:47] {811} INFO - trial 6 config: {'alpha': 0.7613139607545752, 'learningRate': 0.001, 'numLeaves': 82, 'numIterations': 244}\n", + "[flaml.tune.tune: 04-09 13:58:52] {215} INFO - result: {'r2': 0.05495941941983151, 'training_iteration': 0, 'config': {'alpha': 0.7613139607545752, 'learningRate': 0.001, 'numLeaves': 82, 'numIterations': 244}, 'config/alpha': 0.7613139607545752, 'config/learningRate': 0.001, 
'config/numLeaves': 82, 'config/numIterations': 244, 'experiment_tag': 'exp', 'time_total_s': 5.299764394760132}\n", + "[flaml.tune.tune: 04-09 13:58:52] {811} INFO - trial 7 config: {'alpha': 0.003948266327914451, 'learningRate': 0.5126800711223909, 'numLeaves': 86, 'numIterations': 222}\n", + "[flaml.tune.tune: 04-09 13:58:57] {215} INFO - result: {'r2': -0.13472888652710457, 'training_iteration': 0, 'config': {'alpha': 0.003948266327914451, 'learningRate': 0.5126800711223909, 'numLeaves': 86, 'numIterations': 222}, 'config/alpha': 0.003948266327914451, 'config/learningRate': 0.5126800711223909, 'config/numLeaves': 86, 'config/numIterations': 222, 'experiment_tag': 'exp', 'time_total_s': 4.852660417556763}\n", + "[flaml.tune.tune: 04-09 13:58:57] {811} INFO - trial 8 config: {'alpha': 0.7217553174317995, 'learningRate': 0.2925841921024625, 'numLeaves': 94, 'numIterations': 242}\n", + "[flaml.tune.tune: 04-09 13:59:02] {215} INFO - result: {'r2': 0.841125964017654, 'training_iteration': 0, 'config': {'alpha': 0.7217553174317995, 'learningRate': 0.2925841921024625, 'numLeaves': 94, 'numIterations': 242}, 'config/alpha': 0.7217553174317995, 'config/learningRate': 0.2925841921024625, 'config/numLeaves': 94, 'config/numIterations': 242, 'experiment_tag': 'exp', 'time_total_s': 5.44955039024353}\n", + "[flaml.tune.tune: 04-09 13:59:02] {811} INFO - trial 9 config: {'alpha': 0.8650568165408982, 'learningRate': 0.20965040368499302, 'numLeaves': 92, 'numIterations': 221}\n", + "[flaml.tune.tune: 04-09 13:59:07] {215} INFO - result: {'r2': 0.764342272362222, 'training_iteration': 0, 'config': {'alpha': 0.8650568165408982, 'learningRate': 0.20965040368499302, 'numLeaves': 92, 'numIterations': 221}, 'config/alpha': 0.8650568165408982, 'config/learningRate': 0.20965040368499302, 'config/numLeaves': 92, 'config/numIterations': 221, 'experiment_tag': 'exp', 'time_total_s': 4.9519362449646}\n", + "[flaml.tune.tune: 04-09 13:59:07] {811} INFO - trial 10 config: {'alpha': 
0.5425443680112613, 'learningRate': 0.14302787755392543, 'numLeaves': 56, 'numIterations': 234}\n", + "[flaml.tune.tune: 04-09 13:59:11] {215} INFO - result: {'r2': 0.8624550670698988, 'training_iteration': 0, 'config': {'alpha': 0.5425443680112613, 'learningRate': 0.14302787755392543, 'numLeaves': 56, 'numIterations': 234}, 'config/alpha': 0.5425443680112613, 'config/learningRate': 0.14302787755392543, 'config/numLeaves': 56, 'config/numIterations': 234, 'experiment_tag': 'exp', 'time_total_s': 3.658425807952881}\n", + "[flaml.tune.tune: 04-09 13:59:11] {811} INFO - trial 11 config: {'alpha': 0.5736011364335467, 'learningRate': 0.28259755916943197, 'numLeaves': 48, 'numIterations': 218}\n", + "[flaml.tune.tune: 04-09 13:59:14] {215} INFO - result: {'r2': 0.8605136490358005, 'training_iteration': 0, 'config': {'alpha': 0.5736011364335467, 'learningRate': 0.28259755916943197, 'numLeaves': 48, 'numIterations': 218}, 'config/alpha': 0.5736011364335467, 'config/learningRate': 0.28259755916943197, 'config/numLeaves': 48, 'config/numIterations': 218, 'experiment_tag': 'exp', 'time_total_s': 3.052793502807617}\n", + "[flaml.tune.tune: 04-09 13:59:14] {811} INFO - trial 12 config: {'alpha': 0.5114875995889758, 'learningRate': 0.003458195938418919, 'numLeaves': 64, 'numIterations': 250}\n", + "[flaml.tune.tune: 04-09 13:59:18] {215} INFO - result: {'r2': 0.570491367756149, 'training_iteration': 0, 'config': {'alpha': 0.5114875995889758, 'learningRate': 0.003458195938418919, 'numLeaves': 64, 'numIterations': 250}, 'config/alpha': 0.5114875995889758, 'config/learningRate': 0.003458195938418919, 'config/numLeaves': 64, 'config/numIterations': 250, 'experiment_tag': 'exp', 'time_total_s': 4.374900579452515}\n", + "[flaml.tune.tune: 04-09 13:59:18] {811} INFO - trial 13 config: {'alpha': 0.4545232529799527, 'learningRate': 0.12259729414043312, 'numLeaves': 52, 'numIterations': 268}\n", + "[flaml.tune.tune: 04-09 13:59:22] {215} INFO - result: {'r2': 0.8548999617455493, 
'training_iteration': 0, 'config': {'alpha': 0.4545232529799527, 'learningRate': 0.12259729414043312, 'numLeaves': 52, 'numIterations': 268}, 'config/alpha': 0.4545232529799527, 'config/learningRate': 0.12259729414043312, 'config/numLeaves': 52, 'config/numIterations': 268, 'experiment_tag': 'exp', 'time_total_s': 4.0238401889801025}\n", + "[flaml.tune.tune: 04-09 13:59:22] {811} INFO - trial 14 config: {'alpha': 0.6305654830425699, 'learningRate': 0.16345846096741776, 'numLeaves': 60, 'numIterations': 200}\n", + "[flaml.tune.tune: 04-09 13:59:26] {215} INFO - result: {'r2': 0.8601984046769122, 'training_iteration': 0, 'config': {'alpha': 0.6305654830425699, 'learningRate': 0.16345846096741776, 'numLeaves': 60, 'numIterations': 200}, 'config/alpha': 0.6305654830425699, 'config/learningRate': 0.16345846096741776, 'config/numLeaves': 60, 'config/numIterations': 200, 'experiment_tag': 'exp', 'time_total_s': 3.4227209091186523}\n", + "[flaml.tune.tune: 04-09 13:59:26] {811} INFO - trial 15 config: {'alpha': 0.37308018496384865, 'learningRate': 0.2146450219293334, 'numLeaves': 51, 'numIterations': 230}\n", + "[flaml.tune.tune: 04-09 13:59:29] {215} INFO - result: {'r2': 0.8447822051728697, 'training_iteration': 0, 'config': {'alpha': 0.37308018496384865, 'learningRate': 0.2146450219293334, 'numLeaves': 51, 'numIterations': 230}, 'config/alpha': 0.37308018496384865, 'config/learningRate': 0.2146450219293334, 'config/numLeaves': 51, 'config/numIterations': 230, 'experiment_tag': 'exp', 'time_total_s': 3.3695919513702393}\n", + "[flaml.tune.tune: 04-09 13:59:29] {811} INFO - trial 16 config: {'alpha': 0.7120085510586739, 'learningRate': 0.07141073317851748, 'numLeaves': 61, 'numIterations': 238}\n", + "[flaml.tune.tune: 04-09 13:59:33] {215} INFO - result: {'r2': 0.8502914796218052, 'training_iteration': 0, 'config': {'alpha': 0.7120085510586739, 'learningRate': 0.07141073317851748, 'numLeaves': 61, 'numIterations': 238}, 'config/alpha': 0.7120085510586739, 
'config/learningRate': 0.07141073317851748, 'config/numLeaves': 61, 'config/numIterations': 238, 'experiment_tag': 'exp', 'time_total_s': 3.8938868045806885}\n", + "[flaml.tune.tune: 04-09 13:59:33] {811} INFO - trial 17 config: {'alpha': 0.6950187212596339, 'learningRate': 0.04860046789642168, 'numLeaves': 56, 'numIterations': 216}\n", + "[flaml.tune.tune: 04-09 13:59:36] {215} INFO - result: {'r2': 0.8507495957886304, 'training_iteration': 0, 'config': {'alpha': 0.6950187212596339, 'learningRate': 0.04860046789642168, 'numLeaves': 56, 'numIterations': 216}, 'config/alpha': 0.6950187212596339, 'config/learningRate': 0.04860046789642168, 'config/numLeaves': 56, 'config/numIterations': 216, 'experiment_tag': 'exp', 'time_total_s': 3.4858739376068115}\n", + "[flaml.tune.tune: 04-09 13:59:36] {811} INFO - trial 18 config: {'alpha': 0.3900700147628886, 'learningRate': 0.23745528721142917, 'numLeaves': 56, 'numIterations': 252}\n", + "[flaml.tune.tune: 04-09 13:59:40] {215} INFO - result: {'r2': 0.8448561963142436, 'training_iteration': 0, 'config': {'alpha': 0.3900700147628886, 'learningRate': 0.23745528721142917, 'numLeaves': 56, 'numIterations': 252}, 'config/alpha': 0.3900700147628886, 'config/learningRate': 0.23745528721142917, 'config/numLeaves': 56, 'config/numIterations': 252, 'experiment_tag': 'exp', 'time_total_s': 3.8567142486572266}\n", + "[flaml.tune.tune: 04-09 13:59:40] {811} INFO - trial 19 config: {'alpha': 0.6652445360947545, 'learningRate': 0.035981262663243294, 'numLeaves': 63, 'numIterations': 225}\n", + "[flaml.tune.tune: 04-09 13:59:44] {215} INFO - result: {'r2': 0.8513605547375983, 'training_iteration': 0, 'config': {'alpha': 0.6652445360947545, 'learningRate': 0.035981262663243294, 'numLeaves': 63, 'numIterations': 225}, 'config/alpha': 0.6652445360947545, 'config/learningRate': 0.035981262663243294, 'config/numLeaves': 63, 'config/numIterations': 225, 'experiment_tag': 'exp', 'time_total_s': 3.984147071838379}\n", + "[flaml.tune.tune: 04-09 
13:59:44] {811} INFO - trial 20 config: {'alpha': 0.419844199927768, 'learningRate': 0.25007449244460755, 'numLeaves': 49, 'numIterations': 243}\n", + "[flaml.tune.tune: 04-09 13:59:48] {215} INFO - result: {'r2': 0.8489881682927205, 'training_iteration': 0, 'config': {'alpha': 0.419844199927768, 'learningRate': 0.25007449244460755, 'numLeaves': 49, 'numIterations': 243}, 'config/alpha': 0.419844199927768, 'config/learningRate': 0.25007449244460755, 'config/numLeaves': 49, 'config/numIterations': 243, 'experiment_tag': 'exp', 'time_total_s': 3.3616762161254883}\n", + "[flaml.tune.tune: 04-09 13:59:48] {811} INFO - trial 21 config: {'alpha': 0.6440889733602198, 'learningRate': 0.028339066191258172, 'numLeaves': 65, 'numIterations': 240}\n", + "[flaml.tune.tune: 04-09 13:59:52] {215} INFO - result: {'r2': 0.8495512334801718, 'training_iteration': 0, 'config': {'alpha': 0.6440889733602198, 'learningRate': 0.028339066191258172, 'numLeaves': 65, 'numIterations': 240}, 'config/alpha': 0.6440889733602198, 'config/learningRate': 0.028339066191258172, 'config/numLeaves': 65, 'config/numIterations': 240, 'experiment_tag': 'exp', 'time_total_s': 4.202790021896362}\n", + "[flaml.tune.tune: 04-09 13:59:52] {811} INFO - trial 22 config: {'alpha': 0.44099976266230273, 'learningRate': 0.2577166889165927, 'numLeaves': 47, 'numIterations': 228}\n", + "[flaml.tune.tune: 04-09 13:59:55] {215} INFO - result: {'r2': 0.8488734669877886, 'training_iteration': 0, 'config': {'alpha': 0.44099976266230273, 'learningRate': 0.2577166889165927, 'numLeaves': 47, 'numIterations': 228}, 'config/alpha': 0.44099976266230273, 'config/learningRate': 0.2577166889165927, 'config/numLeaves': 47, 'config/numIterations': 228, 'experiment_tag': 'exp', 'time_total_s': 3.127204656600952}\n", + "[flaml.tune.tune: 04-09 13:59:55] {811} INFO - trial 23 config: {'alpha': 0.42121699403087287, 'learningRate': 0.001, 'numLeaves': 59, 'numIterations': 230}\n", + "[flaml.tune.tune: 04-09 13:59:59] {215} INFO - result: 
{'r2': 0.06286187614238248, 'training_iteration': 0, 'config': {'alpha': 0.42121699403087287, 'learningRate': 0.001, 'numLeaves': 59, 'numIterations': 230}, 'config/alpha': 0.42121699403087287, 'config/learningRate': 0.001, 'config/numLeaves': 59, 'config/numIterations': 230, 'experiment_tag': 'exp', 'time_total_s': 4.033763885498047}\n", + "[flaml.tune.tune: 04-09 13:59:59] {811} INFO - trial 24 config: {'alpha': 0.6638717419916497, 'learningRate': 0.2948532436523798, 'numLeaves': 53, 'numIterations': 238}\n", + "[flaml.tune.tune: 04-09 14:00:02] {215} INFO - result: {'r2': 0.8498368376396829, 'training_iteration': 0, 'config': {'alpha': 0.6638717419916497, 'learningRate': 0.2948532436523798, 'numLeaves': 53, 'numIterations': 238}, 'config/alpha': 0.6638717419916497, 'config/learningRate': 0.2948532436523798, 'config/numLeaves': 53, 'config/numIterations': 238, 'experiment_tag': 'exp', 'time_total_s': 3.476837396621704}\n", + "[flaml.tune.tune: 04-09 14:00:02] {811} INFO - trial 25 config: {'alpha': 0.5053650827127543, 'learningRate': 0.2864282425481766, 'numLeaves': 57, 'numIterations': 207}\n", + "[flaml.tune.tune: 04-09 14:00:06] {215} INFO - result: {'r2': 0.8638166525272971, 'training_iteration': 0, 'config': {'alpha': 0.5053650827127543, 'learningRate': 0.2864282425481766, 'numLeaves': 57, 'numIterations': 207}, 'config/alpha': 0.5053650827127543, 'config/learningRate': 0.2864282425481766, 'config/numLeaves': 57, 'config/numIterations': 207, 'experiment_tag': 'exp', 'time_total_s': 3.355837106704712}\n", + "[flaml.tune.tune: 04-09 14:00:06] {811} INFO - trial 26 config: {'alpha': 0.6747046166960979, 'learningRate': 0.10854042236738932, 'numLeaves': 32, 'numIterations': 253}\n", + "[flaml.tune.tune: 04-09 14:00:09] {215} INFO - result: {'r2': 0.8547648297991456, 'training_iteration': 0, 'config': {'alpha': 0.6747046166960979, 'learningRate': 0.10854042236738932, 'numLeaves': 32, 'numIterations': 253}, 'config/alpha': 0.6747046166960979, 'config/learningRate': 
0.10854042236738932, 'config/numLeaves': 32, 'config/numIterations': 253, 'experiment_tag': 'exp', 'time_total_s': 2.7572436332702637}\n", + "[flaml.tune.tune: 04-09 14:00:09] {811} INFO - trial 27 config: {'alpha': 0.5784538183227009, 'learningRate': 0.375517980519932, 'numLeaves': 96, 'numIterations': 263}\n", + "[flaml.tune.tune: 04-09 14:00:14] {215} INFO - result: {'r2': 0.8512614628125035, 'training_iteration': 0, 'config': {'alpha': 0.5784538183227009, 'learningRate': 0.375517980519932, 'numLeaves': 96, 'numIterations': 263}, 'config/alpha': 0.5784538183227009, 'config/learningRate': 0.375517980519932, 'config/numLeaves': 96, 'config/numIterations': 263, 'experiment_tag': 'exp', 'time_total_s': 5.738212823867798}\n", + "[flaml.tune.tune: 04-09 14:00:14] {811} INFO - trial 28 config: {'alpha': 0.46593191048243093, 'learningRate': 0.2244884500377041, 'numLeaves': 99, 'numIterations': 269}\n", + "[flaml.tune.tune: 04-09 14:00:20] {215} INFO - result: {'r2': 0.86197268492276, 'training_iteration': 0, 'config': {'alpha': 0.46593191048243093, 'learningRate': 0.2244884500377041, 'numLeaves': 99, 'numIterations': 269}, 'config/alpha': 0.46593191048243093, 'config/learningRate': 0.2244884500377041, 'config/numLeaves': 99, 'config/numIterations': 269, 'experiment_tag': 'exp', 'time_total_s': 5.934798240661621}\n", + "[flaml.tune.tune: 04-09 14:00:20] {811} INFO - trial 29 config: {'alpha': 0.5784538183227009, 'learningRate': 0.375517980519932, 'numLeaves': 95, 'numIterations': 263}\n", + "[flaml.tune.tune: 04-09 14:00:26] {215} INFO - result: {'r2': 0.8524397365306237, 'training_iteration': 0, 'config': {'alpha': 0.5784538183227009, 'learningRate': 0.375517980519932, 'numLeaves': 95, 'numIterations': 263}, 'config/alpha': 0.5784538183227009, 'config/learningRate': 0.375517980519932, 'config/numLeaves': 95, 'config/numIterations': 263, 'experiment_tag': 'exp', 'time_total_s': 5.699255704879761}\n" + ] + } + ], + "source": [ + "analysis = flaml.tune.run(\n", + " 
flaml_tune,\n", + "    params,\n", + "    time_budget_s=120,  # tuning in 120 seconds\n", + "    num_samples=100,\n", + "    metric=\"r2\",\n", + "    mode=\"max\",\n", + "    verbose=5,\n", + "    )" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "a17d5766-6cd3-4428-a1b2-7a3694ea5116", + "queued_time": "2023-04-09T13:53:09.3839884Z", + "session_id": null, + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Best config: {'alpha': 0.5940316589938806, 'learningRate': 0.22926504794631342, 'numLeaves': 35, 'numIterations': 279}\n" + ] + } + ], + "source": [ + "flaml_config = analysis.best_config\n", + "print(\"Best config: \", flaml_config)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## 5. Check results\n", + "In this step, we retrain the model using the \"best\" hyperparameters on the full training dataset, and use the test dataset to compare evaluation metrics for the initial and \"best\" model."
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [ + { + "data": { + "application/vnd.livy.statement-meta+json": { + "execution_finish_time": null, + "execution_start_time": null, + "livy_statement_state": null, + "parent_msg_id": "8f4ef6a0-e516-449f-b4e4-59bb9dcffe09", + "queued_time": "2023-04-09T13:53:09.3856221Z", + "session_id": null, + "session_start_time": null, + "spark_jobs": null, + "spark_pool": null, + "state": "waiting", + "statement_id": null + }, + "text/plain": [ + "StatementMeta(, , , Waiting, )" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "On the test dataset, the initial (untuned) model achieved R^2: 0.7086364659469071\n", + "On the test dataset, the final flaml (tuned) model achieved R^2: 0.8094330941991653\n" + ] + } + ], + "source": [ + "flaml_model, flaml_metric = train(train_data=train_data, val_data=test_data, **flaml_config)\n", + "\n", + "print(\"On the test dataset, the initial (untuned) model achieved R^2: \", init_eval_metric)\n", + "print(\"On the test dataset, the final flaml (tuned) model achieved R^2: \", flaml_metric)" + ] + } + ], + "metadata": { + "description": null, + "kernelspec": { + "display_name": "Synapse PySpark", + "name": "synapse_pyspark" + }, + "language_info": { + "name": "python" + }, + "save_output": true, + "synapse_widget": { + "state": {}, + "version": "0.1" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/tutorials/README.md b/tutorials/README.md new file mode 100644 index 00000000000..8fe8d8ff7a2 --- /dev/null +++ b/tutorials/README.md @@ -0,0 +1,4 @@ +Please find tutorials on FLAML below: +- [PyData Seattle 2023](flaml-tutorial-pydata-23.md) +- [A hands-on tutorial on FLAML presented at KDD 2022](flaml-tutorial-kdd-22.md) +- [A lab 
forum on FLAML at AAAI 2023](flaml-tutorial-aaai-23.md) diff --git a/tutorials/flaml-tutorial-aaai-23.md b/tutorials/flaml-tutorial-aaai-23.md new file mode 100644 index 00000000000..038fcd2839a --- /dev/null +++ b/tutorials/flaml-tutorial-aaai-23.md @@ -0,0 +1,67 @@ +# AAAI 2023 Lab Forum - LSHP2: Automated Machine Learning & Tuning with FLAML + +## Session Information + +**Date and Time**: February 8, 2023 at 2-6pm ET. + +Location: Walter E. Washington Convention Center, Washington DC, USA + +Duration: 4 hours (3.5 hours + 0.5 hour break) + +For the most up-to-date information, see the [AAAI'23 Program Agenda](https://aaai.org/Conferences/AAAI-23/aaai23tutorials/) + +## [Lab Forum Slides](https://1drv.ms/b/s!Ao3suATqM7n7iokCQbF7jUUYwOqGqQ?e=cMnilV) + +## What Will You Learn? + +- What FLAML is and how to use FLAML to + - find accurate ML models with low computational resources for common ML tasks + - tune hyperparameters generically +- How to leverage the flexible and rich customization choices + - finish the last mile for deployment + - create new applications +- Code examples, demos, use cases +- Research & development opportunities + +## Session Agenda + +### **Part 1. 
Overview of FLAML** + +- Overview of AutoML and FLAML +- Basic usage of FLAML + - Task-oriented AutoML + - [Documentation](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML) + - [Notebook: A classification task with AutoML](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/automl_classification.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/automl_classification.ipynb) + - Tune user-defined functions with FLAML + - [Documentation](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function) + - [Notebook: Tune user-defined function](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/tune_demo.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/tune_demo.ipynb) + - Zero-shot AutoML + - [Documentation](https://microsoft.github.io/FLAML/docs/Use-Cases/Zero-Shot-AutoML) + - [Notebook: Zeroshot AutoML](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/zeroshot_lightgbm.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/zeroshot_lightgbm.ipynb) +- [ML.NET demo](https://learn.microsoft.com/dotnet/machine-learning/tutorials/predict-prices-with-model-builder) + +Break (15m) + +### **Part 2.
Deep Dive into FLAML** +- The Science Behind FLAML’s Success + - [Economical hyperparameter optimization methods in FLAML](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function/#hyperparameter-optimization-algorithm) + - [Other research in FLAML](https://microsoft.github.io/FLAML/docs/Research) + +- Maximize the Power of FLAML through Customization and Advanced Functionalities + - [Notebook: Customize your AutoML with FLAML](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/customize_your_automl_with_flaml.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/customize_your_automl_with_flaml.ipynb) + - [Notebook: Further acceleration of AutoML with FLAML](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/further_acceleration_of_automl_with_flaml.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/further_acceleration_of_automl_with_flaml.ipynb) + - [Notebook: Neural network model tuning with FLAML ](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/tune_pytorch.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/tune_pytorch.ipynb) + + +### **Part 3. 
New features in FLAML** +- Natural language processing + - [Notebook: AutoML for NLP tasks](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/automl_nlp.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/automl_nlp.ipynb) +- Time Series Forecasting + - [Notebook: AutoML for Time Series Forecast tasks](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/automl_time_series_forecast.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/automl_time_series_forecast.ipynb) +- Targeted Hyperparameter Optimization With Lexicographic Objectives + - [Documentation](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function/#lexicographic-objectives) + - [Notebook: Find accurate and fast neural networks with lexicographic objectives](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/tune_lexicographic.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/tune_lexicographic.ipynb) +- Online AutoML + - [Notebook: Online AutoML with Vowpal Wabbit](https://github.com/microsoft/FLAML/blob/tutorial-aaai23/notebook/autovw.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial-aaai23/notebook/autovw.ipynb) +- Fair AutoML +### Challenges and open problems diff --git a/tutorials/flaml-tutorial-kdd-22.md b/tutorials/flaml-tutorial-kdd-22.md new file mode 100644 index 00000000000..c2502471cd0 --- /dev/null +++ b/tutorials/flaml-tutorial-kdd-22.md @@ -0,0 +1,48 @@ +# KDD 2022 Hands-on Tutorial - Automated Machine Learning & Tuning with FLAML + +## Session Information + +Date: August 16, 2022 +Time: 9:30 AM ET +Location: 101 +Duration: 3 hours + +For the most up-to-date information, see the [SIGKDD'22 Program Agenda](https://kdd.org/kdd2022/handsOnTutorial.html) + +## [Tutorial 
Slides](https://1drv.ms/b/s!Ao3suATqM7n7ioQF8xT8BbRdyIf_Ww?e=qQysIf) + +## What Will You Learn? + +- What FLAML is and how to use it to find accurate ML models with low computational resources for common machine learning tasks +- How to leverage the flexible and rich customization choices to: + - Finish the last mile for deployment + - Create new applications +- Code examples, demos, and use cases +- Research & development opportunities + +## Session Agenda + +### Part 1 + +- Overview of AutoML and FLAML +- Task-oriented AutoML with FLAML + - [Notebook: A classification task with AutoML](https://github.com/microsoft/FLAML/blob/tutorial/notebook/automl_classification.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial/notebook/automl_classification.ipynb) + - [Notebook: A regression task with AutoML using LightGBM as the learner](https://github.com/microsoft/FLAML/blob/tutorial/notebook/automl_lightgbm.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial/notebook/automl_lightgbm.ipynb) +- [ML.NET demo](https://docs.microsoft.com/dotnet/machine-learning/tutorials/predict-prices-with-model-builder) +- Tune user-defined functions with FLAML + - [Notebook: Basic tuning procedures and advanced tuning options](https://github.com/microsoft/FLAML/blob/tutorial/notebook/tune_demo.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial/notebook/tune_demo.ipynb) + - [Notebook: Tune pytorch](https://github.com/microsoft/FLAML/blob/tutorial/notebook/tune_pytorch.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial/notebook/tune_pytorch.ipynb) +- Q & A + +### Part 2 + +- Zero-shot AutoML + - [Notebook: Zeroshot AutoML](https://github.com/microsoft/FLAML/blob/tutorial/notebook/zeroshot_lightgbm.ipynb); [Open In
Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial/notebook/zeroshot_lightgbm.ipynb) +- Time series forecasting + - [Notebook: AutoML for Time Series Forecast tasks](https://github.com/microsoft/FLAML/blob/tutorial/notebook/automl_time_series_forecast.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial/notebook/automl_time_series_forecast.ipynb) +- Natural language processing + - [Notebook: AutoML for NLP tasks](https://github.com/microsoft/FLAML/blob/tutorial/notebook/automl_nlp.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial/notebook/automl_nlp.ipynb) +- Online AutoML + - [Notebook: Online AutoML with Vowpal Wabbit](https://github.com/microsoft/FLAML/blob/tutorial/notebook/autovw.ipynb); [Open In Colab](https://colab.research.google.com/github/microsoft/FLAML/blob/tutorial/notebook/autovw.ipynb) +- Fair AutoML +- Challenges and open problems diff --git a/tutorials/flaml-tutorial-pydata-23.md b/tutorials/flaml-tutorial-pydata-23.md new file mode 100644 index 00000000000..96c0374a0d5 --- /dev/null +++ b/tutorials/flaml-tutorial-pydata-23.md @@ -0,0 +1,40 @@ +# PyData Seattle 2023 - Automated Machine Learning & Tuning with FLAML + +## Session Information + +**Date and Time**: April 26, 2023, 09:00–10:30 PT. + +Location: Microsoft Conference Center, Seattle, WA. + +Duration: 1.5 hours + +For the most up-to-date information, see the [PyData Seattle 2023 Agenda](https://seattle2023.pydata.org/cfp/talk/BYRA8H/) + +## [Lab Forum Slides](https://drive.google.com/file/d/14uG0N7jnf18-wizeWWfmXcBUARTQn61w/view?usp=share_link) + +## What Will You Learn? + +In this session, we will provide an in-depth and hands-on tutorial on Automated Machine Learning & Tuning with a fast Python library named FLAML. We will start with an overview of the AutoML problem and the FLAML library.
We will then introduce the hyperparameter optimization methods that power the strong performance of FLAML. We will also demonstrate how to make the best use of FLAML to perform automated machine learning and hyperparameter tuning in various applications with the help of rich customization choices and advanced functionalities provided by FLAML. Finally, we will share several new features of the library based on our latest research and development work around FLAML and close the tutorial with open problems and challenges learned from AutoML practice. + +## Tutorial Outline + +### **Part 1. Overview** +- Overview of AutoML & Hyperparameter Tuning + +### **Part 2. Introduction to FLAML** +- Introduction to FLAML +- AutoML and Hyperparameter Tuning with FLAML + - [Notebook: AutoML with FLAML Library](https://github.com/microsoft/FLAML/blob/d047c79352a2b5d32b72f4323dadfa2be0db8a45/notebook/automl_flight_delays.ipynb) + - [Notebook: Hyperparameter Tuning with FLAML](https://github.com/microsoft/FLAML/blob/d047c79352a2b5d32b72f4323dadfa2be0db8a45/notebook/tune_synapseml.ipynb) + +### **Part 3. Deep Dive into FLAML** +- Advanced Functionalities +- Parallelization with Apache Spark + - [Notebook: FLAML AutoML on Apache Spark](https://github.com/microsoft/FLAML/blob/d047c79352a2b5d32b72f4323dadfa2be0db8a45/notebook/automl_bankrupt_synapseml.ipynb) + +### **Part 4.
New features in FLAML** +- Targeted Hyperparameter Optimization With Lexicographic Objectives + - [Notebook: Tune models with lexicographic preference across objectives](https://github.com/microsoft/FLAML/blob/7ae410c8eb967e2084b2e7dbe7d5fa2145a44b79/notebook/tune_lexicographic.ipynb) +- OpenAI GPT-3, GPT-4 and ChatGPT tuning + - [Notebook: Use FLAML to Tune OpenAI Models](https://github.com/microsoft/FLAML/blob/a0b318b12ee8288db54b674904655307f9e201c2/notebook/autogen_openai_completion.ipynb) + - [Notebook: Use FLAML to Tune ChatGPT](https://github.com/microsoft/FLAML/blob/a0b318b12ee8288db54b674904655307f9e201c2/notebook/autogen_chatgpt_gpt4.ipynb)
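The tuning notebooks in this PR all follow the same contract: `flaml.tune.run` takes an evaluation function that accepts a config dict and returns a dict containing the metric being optimized (the `tune_synapseml.ipynb` log above optimizes `"r2"`). A minimal stdlib-only sketch of that contract, using plain random search in place of FLAML's cost-frugal searcher; the response surface and search ranges below are illustrative assumptions, not the notebook's real LightGBM-on-Spark model:

```python
import random

# Toy stand-in for the notebook's train/evaluate step. The only contract the
# tuner relies on: accept a config dict, return a dict with the target metric.
def flaml_tune(config):
    # Hypothetical smooth response surface, for illustration only;
    # NOT the notebook's actual LightGBM regressor.
    r2 = 1.0 - (config["alpha"] - 0.5) ** 2 - (config["learningRate"] - 0.2) ** 2
    return {"r2": r2}

# Search ranges loosely mirroring the trial configs in the log above.
def sample_config(rng):
    return {
        "alpha": rng.uniform(0.3, 0.8),
        "learningRate": rng.uniform(0.001, 0.4),
        "numLeaves": rng.randint(30, 100),
        "numIterations": rng.randint(200, 280),
    }

def random_search(num_samples=100, seed=0):
    """Plain random search standing in for FLAML's cost-frugal searcher."""
    rng = random.Random(seed)
    best_config, best_r2 = None, float("-inf")
    for _ in range(num_samples):
        config = sample_config(rng)
        result = flaml_tune(config)  # same call flaml.tune.run makes per trial
        if result["r2"] > best_r2:
            best_config, best_r2 = config, result["r2"]
    return best_config, best_r2

best_config, best_r2 = random_search()
print("Best config:", best_config)
```

With flaml installed, the same `flaml_tune` function plugs directly into `flaml.tune.run(flaml_tune, config=search_space, metric="r2", mode="max", time_budget_s=120, num_samples=100)`, with `search_space` expressed via `flaml.tune.uniform`/`flaml.tune.randint` instead of the manual sampler, and the best trial retrieved from `analysis.best_config` as in the notebook.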