diff --git a/0.4/.buildinfo b/0.4/.buildinfo new file mode 100644 index 00000000..d064bf89 --- /dev/null +++ b/0.4/.buildinfo @@ -0,0 +1,4 @@ +# Sphinx build info version 1 +# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. +config: 5d2632d2fbe792265ca773e6a51b93f0 +tags: d77d1c0d9ca2f4c8421862c7c5a0d620 diff --git a/0.4/_sources/api/collectors.rst.txt b/0.4/_sources/api/collectors.rst.txt new file mode 100644 index 00000000..ca2072fc --- /dev/null +++ b/0.4/_sources/api/collectors.rst.txt @@ -0,0 +1,42 @@ +Collectors & Extractors +======================= + +miplearn.classifiers.minprob +---------------------------- + +.. automodule:: miplearn.classifiers.minprob + :members: + :undoc-members: + :show-inheritance: + +miplearn.classifiers.singleclass +-------------------------------- + +.. automodule:: miplearn.classifiers.singleclass + :members: + :undoc-members: + :show-inheritance: + +miplearn.collectors.basic +------------------------- + +.. automodule:: miplearn.collectors.basic + :members: + :undoc-members: + :show-inheritance: + +miplearn.extractors.fields +-------------------------- + +.. automodule:: miplearn.extractors.fields + :members: + :undoc-members: + :show-inheritance: + +miplearn.extractors.AlvLouWeh2017 +--------------------------------- + +.. automodule:: miplearn.extractors.AlvLouWeh2017 + :members: + :undoc-members: + :show-inheritance: diff --git a/0.4/_sources/api/components.rst.txt b/0.4/_sources/api/components.rst.txt new file mode 100644 index 00000000..64b6c5bd --- /dev/null +++ b/0.4/_sources/api/components.rst.txt @@ -0,0 +1,44 @@ +Components +========== + +miplearn.components.primal.actions +---------------------------------- + +.. automodule:: miplearn.components.primal.actions + :members: + :undoc-members: + :show-inheritance: + +miplearn.components.primal.expert +---------------------------------- + +.. automodule:: miplearn.components.primal.expert + :members: + :undoc-members: + :show-inheritance: + +miplearn.components.primal.indep +---------------------------------- + +.. automodule:: miplearn.components.primal.indep + :members: + :undoc-members: + :show-inheritance: + +miplearn.components.primal.joint +---------------------------------- + +.. automodule:: miplearn.components.primal.joint + :members: + :undoc-members: + :show-inheritance: + +miplearn.components.primal.mem +---------------------------------- + +.. automodule:: miplearn.components.primal.mem + :members: + :undoc-members: + :show-inheritance: + + \ No newline at end of file diff --git a/0.4/_sources/api/helpers.rst.txt b/0.4/_sources/api/helpers.rst.txt new file mode 100644 index 00000000..d83450ff --- /dev/null +++ b/0.4/_sources/api/helpers.rst.txt @@ -0,0 +1,18 @@ +Helpers +======= + +miplearn.io +----------- + +.. automodule:: miplearn.io + :members: + :undoc-members: + :show-inheritance: + +miplearn.h5 +----------- + +.. automodule:: miplearn.h5 + :members: + :undoc-members: + :show-inheritance: diff --git a/0.4/_sources/api/problems.rst.txt b/0.4/_sources/api/problems.rst.txt new file mode 100644 index 00000000..a60a9689 --- /dev/null +++ b/0.4/_sources/api/problems.rst.txt @@ -0,0 +1,57 @@ +Benchmark Problems +================== + +miplearn.problems.binpack +------------------------- + +.. automodule:: miplearn.problems.binpack + :members: + +miplearn.problems.multiknapsack +------------------------------- + +.. 
automodule:: miplearn.problems.multiknapsack + :members: + +miplearn.problems.pmedian +------------------------- + +.. automodule:: miplearn.problems.pmedian + :members: + +miplearn.problems.setcover +-------------------------- + +.. automodule:: miplearn.problems.setcover + :members: + +miplearn.problems.setpack +------------------------- + +.. automodule:: miplearn.problems.setpack + :members: + +miplearn.problems.stab +---------------------- + +.. automodule:: miplearn.problems.stab + :members: + +miplearn.problems.tsp +--------------------- + +.. automodule:: miplearn.problems.tsp + :members: + +miplearn.problems.uc +-------------------- + +.. automodule:: miplearn.problems.uc + :members: + +miplearn.problems.vertexcover +----------------------------- + +.. automodule:: miplearn.problems.vertexcover + :members: + diff --git a/0.4/_sources/api/solvers.rst.txt b/0.4/_sources/api/solvers.rst.txt new file mode 100644 index 00000000..2337d924 --- /dev/null +++ b/0.4/_sources/api/solvers.rst.txt @@ -0,0 +1,26 @@ +Solvers +======= + +miplearn.solvers.abstract +------------------------- + +.. automodule:: miplearn.solvers.abstract + :members: + :undoc-members: + :show-inheritance: + +miplearn.solvers.gurobi +------------------------- + +.. automodule:: miplearn.solvers.gurobi + :members: + :undoc-members: + :show-inheritance: + +miplearn.solvers.learning +------------------------- + +.. automodule:: miplearn.solvers.learning + :members: + :undoc-members: + :show-inheritance: diff --git a/0.4/_sources/guide/collectors.ipynb.txt b/0.4/_sources/guide/collectors.ipynb.txt new file mode 100644 index 00000000..443802ed --- /dev/null +++ b/0.4/_sources/guide/collectors.ipynb.txt @@ -0,0 +1,288 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "505cea0b-5f5d-478a-9107-42bb5515937d", + "metadata": {}, + "source": [ + "# Training Data Collectors\n", + "The first step in solving mixed-integer optimization problems with the assistance of supervised machine learning methods is solving a large set of training instances and collecting the raw training data. In this section, we describe the various training data collectors included in MIPLearn. Additionally, the framework follows the convention of storing all training data in files with a specific data format (namely, HDF5). In this section, we briefly describe this format and the rationale for choosing it.\n", + "\n", + "## Overview\n", + "\n", + "In MIPLearn, a **collector** is a class that solves or analyzes the problem and collects raw data which may be later useful for machine learning methods. Collectors, by convention, take as input: (i) a list of problem data filenames, in gzipped pickle format, ending with `.pkl.gz`; (ii) a function that builds the optimization model, such as `build_tsp_model`. After processing is done, collectors store the training data in a HDF5 file located alongside with the problem data. For example, if the problem data is stored in file `problem.pkl.gz`, then the collector writes to `problem.h5`. Collectors are, in general, very time consuming, as they may need to solve the problem to optimality, potentially multiple times.\n", + "\n", + "## HDF5 Format\n", + "\n", + "MIPLearn stores all training data in [HDF5](HDF5) (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the [National Center for Supercomputing Applications][NCSA] (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. 
Compared to other formats, such as CSV, JSON or SQLite, the HDF5 format provides several advantages for MIPLearn, including:\n", + "\n", + "- *Storage of multiple scalars, vectors and matrices in a single file* --- This allows MIPLearn to store all training data related to a given problem instance in a single file, which makes training data easier to store, organize and transfer.\n", + "- *High-performance partial I/O* --- Partial I/O allows MIPLearn to read a single element from the training data (e.g. value of the optimal solution) without loading the entire file to memory or reading it from beginning to end, which dramatically improves performance and reduces memory requirements. This is especially important when processing a large number of training data files.\n", + "- *On-the-fly compression* --- HDF5 files can be transparently compressed, using the gzip method, which reduces storage requirements and accelerates network transfers.\n", + "- *Stable, portable and well-supported data format* --- Training data files are typically expensive to generate. Having a stable and well supported data format ensures that these files remain usable in the future, potentially even by other non-Python MIP/ML frameworks.\n", + "\n", + "MIPLearn currently uses HDF5 as simple key-value storage for numerical data; more advanced features of the format, such as metadata, are not currently used. Although files generated by MIPLearn can be read with any HDF5 library, such as [h5py][h5py], some convenience functions are provided to make the access more simple and less error-prone. Specifically, the class [H5File][H5File], which is built on top of h5py, provides the methods [put_scalar][put_scalar], [put_array][put_array], [put_sparse][put_sparse], [put_bytes][put_bytes] to store, respectively, scalar values, dense multi-dimensional arrays, sparse multi-dimensional arrays and arbitrary binary data. The corresponding *get* methods are also provided. Compared to pure h5py methods, these methods automatically perform type-checking and gzip compression. 
The example below shows their usage.\n", + "\n", + "[HDF5]: https://en.wikipedia.org/wiki/Hierarchical_Data_Format\n", + "[NCSA]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications\n", + "[h5py]: https://www.h5py.org/\n", + "[H5File]: ../../api/helpers/#miplearn.h5.H5File\n", + "[put_scalar]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n", + "[put_array]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n", + "[put_sparse]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n", + "[put_bytes]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n", + "\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "f906fe9c", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-30T22:19:30.826123021Z", + "start_time": "2024-01-30T22:19:30.766066926Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "x1 = 1\n", + "x2 = hello world\n", + "x3 = [1 2 3]\n", + "x4 = [[0.37454012 0.9507143 0.7319939 ]\n", + " [0.5986585 0.15601864 0.15599452]\n", + " [0.05808361 0.8661761 0.601115 ]]\n", + "x5 = (3, 2)\t0.6803075671195984\n", + " (2, 3)\t0.4504992663860321\n", + " (0, 4)\t0.013264961540699005\n", + " (2, 0)\t0.9422017335891724\n", + " (2, 4)\t0.5632882118225098\n", + " (1, 2)\t0.38541650772094727\n", + " (1, 1)\t0.015966251492500305\n", + " (0, 3)\t0.2308938205242157\n", + " (4, 4)\t0.24102546274662018\n", + " (3, 1)\t0.6832635402679443\n", + " (1, 3)\t0.6099966764450073\n", + " (3, 0)\t0.83319491147995\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "import scipy.sparse\n", + "\n", + "from miplearn.h5 import H5File\n", + "\n", + "# Set random seed to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Create a new empty HDF5 file\n", + "with H5File(\"test.h5\", \"w\") as h5:\n", + " # Store a scalar\n", + " h5.put_scalar(\"x1\", 1)\n", + " h5.put_scalar(\"x2\", \"hello world\")\n", + "\n", + " # Store a dense array and a dense matrix\n", + " h5.put_array(\"x3\", np.array([1, 2, 3]))\n", + " h5.put_array(\"x4\", np.random.rand(3, 3))\n", + "\n", + " # Store a sparse matrix\n", + " h5.put_sparse(\"x5\", scipy.sparse.random(5, 5, 0.5))\n", + "\n", + "# Re-open the file we just created and print\n", + "# previously-stored data\n", + "with H5File(\"test.h5\", \"r\") as h5:\n", + " print(\"x1 =\", h5.get_scalar(\"x1\"))\n", + " print(\"x2 =\", h5.get_scalar(\"x2\"))\n", + " print(\"x3 =\", h5.get_array(\"x3\"))\n", + " print(\"x4 =\", h5.get_array(\"x4\"))\n", + " print(\"x5 =\", h5.get_sparse(\"x5\"))" + ] + }, + { + "cell_type": "markdown", + "id": "50441907", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "d0000c8d", + "metadata": {}, + "source": [ + "## Basic collector\n", + "\n", + "[BasicCollector][BasicCollector] is the most fundamental collector, and performs the following steps:\n", + "\n", + "1. Extracts all model data, such as objective function and constraint right-hand sides into numpy arrays, which can later be easily and efficiently accessed without rebuilding the model or invoking the solver;\n", + "2. Solves the linear relaxation of the problem and stores its optimal solution, basis status and sensitivity information, among other information;\n", + "3. 
Solves the original mixed-integer optimization problem to optimality and stores its optimal solution, along with solve statistics, such as number of explored nodes and wallclock time.\n", + "\n", + "Data extracted in Phases 1, 2 and 3 above are prefixed, respectively as `static_`, `lp_` and `mip_`. The entire set of fields is shown in the table below.\n", + "\n", + "[BasicCollector]: ../../api/collectors/#miplearn.collectors.basic.BasicCollector\n" + ] + }, + { + "cell_type": "markdown", + "id": "6529f667", + "metadata": {}, + "source": [ + "### Data fields\n", + "\n", + "| Field | Type | Description |\n", + "|-----------------------------------|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------|\n", + "| `static_constr_lhs` | `(nconstrs, nvars)` | Constraint left-hand sides, in sparse matrix format |\n", + "| `static_constr_names` | `(nconstrs,)` | Constraint names |\n", + "| `static_constr_rhs` | `(nconstrs,)` | Constraint right-hand sides |\n", + "| `static_constr_sense` | `(nconstrs,)` | Constraint senses (`\"<\"`, `\">\"` or `\"=\"`) |\n", + "| `static_obj_offset` | `float` | Constant value added to the objective function |\n", + "| `static_sense` | `str` | `\"min\"` if minimization problem or `\"max\"` otherwise |\n", + "| `static_var_lower_bounds` | `(nvars,)` | Variable lower bounds |\n", + "| `static_var_names` | `(nvars,)` | Variable names |\n", + "| `static_var_obj_coeffs` | `(nvars,)` | Objective coefficients |\n", + "| `static_var_types` | `(nvars,)` | Types of the decision variables (`\"C\"`, `\"B\"` and `\"I\"` for continuous, binary and integer, respectively) |\n", + "| `static_var_upper_bounds` | `(nvars,)` | Variable upper bounds |\n", + "| `lp_constr_basis_status` | `(nconstr,)` | Constraint basis status (`0` for basic, `-1` for non-basic) |\n", + "| `lp_constr_dual_values` | `(nconstr,)` | Constraint dual value (or shadow price) |\n", + "| `lp_constr_sa_rhs_{up,down}` | `(nconstr,)` | Sensitivity information for the constraint RHS |\n", + "| `lp_constr_slacks` | `(nconstr,)` | Constraint slack in the solution to the LP relaxation |\n", + "| `lp_obj_value` | `float` | Optimal value of the LP relaxation |\n", + "| `lp_var_basis_status` | `(nvars,)` | Variable basis status (`0`, `-1`, `-2` or `-3` for basic, non-basic at lower bound, non-basic at upper bound, and superbasic, respectively) |\n", + "| `lp_var_reduced_costs` | `(nvars,)` | Variable reduced costs |\n", + "| `lp_var_sa_{obj,ub,lb}_{up,down}` | `(nvars,)` | Sensitivity information for the variable objective coefficient, lower and upper bound. 
|\n", + "| `lp_var_values` | `(nvars,)` | Optimal solution to the LP relaxation |\n", + "| `lp_wallclock_time` | `float` | Time taken to solve the LP relaxation (in seconds) |\n", + "| `mip_constr_slacks` | `(nconstrs,)` | Constraint slacks in the best MIP solution |\n", + "| `mip_gap` | `float` | Relative MIP optimality gap |\n", + "| `mip_node_count` | `float` | Number of explored branch-and-bound nodes |\n", + "| `mip_obj_bound` | `float` | Dual bound |\n", + "| `mip_obj_value` | `float` | Value of the best MIP solution |\n", + "| `mip_var_values` | `(nvars,)` | Best MIP solution |\n", + "| `mip_wallclock_time` | `float` | Time taken to solve the MIP (in seconds) |" + ] + }, + { + "cell_type": "markdown", + "id": "f2894594", + "metadata": {}, + "source": [ + "### Example\n", + "\n", + "The example below shows how to generate a few random instances of the traveling salesman problem, store its problem data, run the collector and print some of the training data to screen." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "ac6f8c6f", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-30T22:19:30.826707866Z", + "start_time": "2024-01-30T22:19:30.825940503Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "lp_obj_value = 2909.0\n", + "mip_obj_value = 2921.0\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from glob import glob\n", + "\n", + "from miplearn.problems.tsp import (\n", + " TravelingSalesmanGenerator,\n", + " build_tsp_model_gurobipy,\n", + ")\n", + "from miplearn.io import write_pkl_gz\n", + "from miplearn.h5 import H5File\n", + "from miplearn.collectors.basic import BasicCollector\n", + "\n", + "# Set random seed to make example reproducible.\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate a few instances of the traveling salesman problem.\n", + "data = TravelingSalesmanGenerator(\n", + " n=randint(low=10, high=11),\n", + " x=uniform(loc=0.0, scale=1000.0),\n", + " y=uniform(loc=0.0, scale=1000.0),\n", + " gamma=uniform(loc=0.90, scale=0.20),\n", + " fix_cities=True,\n", + " round=True,\n", + ").generate(10)\n", + "\n", + "# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n", + "write_pkl_gz(data, \"data/tsp\")\n", + "\n", + "# Solve all instances and collect basic solution information.\n", + "# Process at most four instances in parallel.\n", + "bc = BasicCollector()\n", + "bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model_gurobipy, n_jobs=4)\n", + "\n", + "# Read and print some training data for the first instance.\n", + "with H5File(\"data/tsp/00000.h5\", \"r\") as h5:\n", + " print(\"lp_obj_value = \", h5.get_scalar(\"lp_obj_value\"))\n", + " print(\"mip_obj_value = \", h5.get_scalar(\"mip_obj_value\"))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "78f0b07a", + "metadata": { + "ExecuteTime": { + "start_time": "2024-01-30T22:19:30.826179789Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + 
"pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_sources/guide/features.ipynb.txt b/0.4/_sources/guide/features.ipynb.txt new file mode 100644 index 00000000..495e8eaf --- /dev/null +++ b/0.4/_sources/guide/features.ipynb.txt @@ -0,0 +1,334 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "cdc6ebe9-d1d4-4de1-9b5a-4fc8ef57b11b", + "metadata": {}, + "source": [ + "# Feature Extractors\n", + "\n", + "In the previous page, we introduced *training data collectors*, which solve the optimization problem and collect raw training data, such as the optimal solution. In this page, we introduce **feature extractors**, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model." + ] + }, + { + "cell_type": "markdown", + "id": "b4026de5", + "metadata": {}, + "source": [ + "\n", + "## Overview\n", + "\n", + "Feature extraction is an important step of the process of building a machine learning model because it helps to reduce the complexity of the data and convert it into a format that is more easily processed. Previous research has proposed converting absolute variable coefficients, for example, into relative values which are invariant to various transformations, such as problem scaling, making them more amenable to learning. Various other transformations have also been described.\n", + "\n", + "In the framework, we treat data collection and feature extraction as two separate steps to accelerate the model development cycle. Specifically, collectors are typically time-consuming, as they often need to solve the problem to optimality, and therefore focus on collecting and storing all data that may or may not be relevant, in its raw format. Feature extractors, on the other hand, focus entirely on filtering the data and improving its representation, and are therefore much faster to run. Experimenting with new data representations, therefore, can be done without resolving the instances.\n", + "\n", + "In MIPLearn, extractors implement the abstract class [FeatureExtractor][FeatureExtractor], which has methods that take as input an [H5File][H5File] and produce either: (i) instance features, which describe the entire instances; (ii) variable features, which describe a particular decision variables; or (iii) constraint features, which describe a particular constraint. The extractor is free to implement only a subset of these methods, if it is known that it will not be used with a machine learning component that requires the other types of features.\n", + "\n", + "[FeatureExtractor]: ../../api/collectors/#miplearn.features.fields.FeaturesExtractor\n", + "[H5File]: ../../api/helpers/#miplearn.h5.H5File" + ] + }, + { + "cell_type": "markdown", + "id": "b2d9736c", + "metadata": {}, + "source": [ + "\n", + "## H5FieldsExtractor\n", + "\n", + "[H5FieldsExtractor][H5FieldsExtractor], the most simple extractor in MIPLearn, simple extracts data that is already available in the HDF5 file, assembles it into a matrix and returns it as-is. The fields used to build instance, variable and constraint features are user-specified. The class also performs checks to ensure that the shapes of the returned matrices make sense." + ] + }, + { + "cell_type": "markdown", + "id": "e8184dff", + "metadata": {}, + "source": [ + "### Example\n", + "\n", + "The example below demonstrates the usage of H5FieldsExtractor in a randomly generated instance of the multi-dimensional knapsack problem." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "ed9a18c8", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "instance features (11,) \n", + " [-1531.24308771 -350. -692. -454.\n", + " -709. -605. -543. -321.\n", + " -674. -571. -341. ]\n", + "variable features (10, 4) \n", + " [[-1.53124309e+03 -3.50000000e+02 0.00000000e+00 9.43468018e+01]\n", + " [-1.53124309e+03 -6.92000000e+02 2.51703322e-01 0.00000000e+00]\n", + " [-1.53124309e+03 -4.54000000e+02 0.00000000e+00 8.25504150e+01]\n", + " [-1.53124309e+03 -7.09000000e+02 1.11373022e-01 0.00000000e+00]\n", + " [-1.53124309e+03 -6.05000000e+02 1.00000000e+00 -1.26055283e+02]\n", + " [-1.53124309e+03 -5.43000000e+02 0.00000000e+00 1.68693771e+02]\n", + " [-1.53124309e+03 -3.21000000e+02 1.07488781e-01 0.00000000e+00]\n", + " [-1.53124309e+03 -6.74000000e+02 8.82293701e-01 0.00000000e+00]\n", + " [-1.53124309e+03 -5.71000000e+02 0.00000000e+00 1.41129074e+02]\n", + " [-1.53124309e+03 -3.41000000e+02 1.28830120e-01 0.00000000e+00]]\n", + "constraint features (5, 3) \n", + " [[ 1.3100000e+03 -1.5978307e-01 0.0000000e+00]\n", + " [ 9.8800000e+02 -3.2881632e-01 0.0000000e+00]\n", + " [ 1.0040000e+03 -4.0601316e-01 0.0000000e+00]\n", + " [ 1.2690000e+03 -1.3659772e-01 0.0000000e+00]\n", + " [ 1.0070000e+03 -2.8800571e-01 0.0000000e+00]]\n" + ] + } + ], + "source": [ + "from glob import glob\n", + "from shutil import rmtree\n", + "\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "\n", + "from miplearn.collectors.basic import BasicCollector\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "from miplearn.h5 import H5File\n", + "from miplearn.io import write_pkl_gz\n", + "from miplearn.problems.multiknapsack import (\n", + " MultiKnapsackGenerator,\n", + " build_multiknapsack_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Generate some random multiknapsack instances\n", + "rmtree(\"data/multiknapsack/\", ignore_errors=True)\n", + "write_pkl_gz(\n", + " MultiKnapsackGenerator(\n", + " n=randint(low=10, high=11),\n", + " m=randint(low=5, high=6),\n", + " w=uniform(loc=0, scale=1000),\n", + " K=uniform(loc=100, scale=0),\n", + " u=uniform(loc=1, scale=0),\n", + " alpha=uniform(loc=0.25, scale=0),\n", + " w_jitter=uniform(loc=0.95, scale=0.1),\n", + " p_jitter=uniform(loc=0.75, scale=0.5),\n", + " fix_w=True,\n", + " ).generate(10),\n", + " \"data/multiknapsack\",\n", + ")\n", + "\n", + "# Run the basic collector\n", + "BasicCollector().collect(\n", + " glob(\"data/multiknapsack/*\"),\n", + " build_multiknapsack_model_gurobipy,\n", + " n_jobs=4,\n", + ")\n", + "\n", + "ext = H5FieldsExtractor(\n", + " # Use as instance features the value of the LP relaxation and the\n", + " # vector of objective coefficients.\n", + " instance_fields=[\n", + " \"lp_obj_value\",\n", + " \"static_var_obj_coeffs\",\n", + " ],\n", + " # For each variable, use as features the optimal value of the LP\n", + " # relaxation, the variable objective coefficient, the variable's\n", + " # value its reduced cost.\n", + " var_fields=[\n", + " \"lp_obj_value\",\n", + " \"static_var_obj_coeffs\",\n", + " \"lp_var_values\",\n", + " \"lp_var_reduced_costs\",\n", + " ],\n", + " # For each constraint, use as features the RHS, dual value and slack.\n", + " constr_fields=[\n", + " \"static_constr_rhs\",\n", + " 
\"lp_constr_dual_values\",\n", + " \"lp_constr_slacks\",\n", + " ],\n", + ")\n", + "\n", + "with H5File(\"data/multiknapsack/00000.h5\") as h5:\n", + " # Extract and print instance features\n", + " x1 = ext.get_instance_features(h5)\n", + " print(\"instance features\", x1.shape, \"\\n\", x1)\n", + "\n", + " # Extract and print variable features\n", + " x2 = ext.get_var_features(h5)\n", + " print(\"variable features\", x2.shape, \"\\n\", x2)\n", + "\n", + " # Extract and print constraint features\n", + " x3 = ext.get_constr_features(h5)\n", + " print(\"constraint features\", x3.shape, \"\\n\", x3)" + ] + }, + { + "cell_type": "markdown", + "id": "2da2e74e", + "metadata": {}, + "source": [ + "\n", + "[H5FieldsExtractor]: ../../api/collectors/#miplearn.features.fields.H5FieldsExtractor" + ] + }, + { + "cell_type": "markdown", + "id": "d879c0d3", + "metadata": {}, + "source": [ + "
\n", + "Warning\n", + "\n", + "You should ensure that the number of features remains the same for all relevant HDF5 files. In the previous example, to illustrate this issue, we used variable objective coefficients as instance features. While this is allowed, note that this requires all problem instances to have the same number of variables; otherwise the number of features would vary from instance to instance and MIPLearn would be unable to concatenate the matrices.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "cd0ba071", + "metadata": {}, + "source": [ + "## AlvLouWeh2017Extractor\n", + "\n", + "Alvarez, Louveaux and Wehenkel (2017) proposed a set features to describe a particular decision variable in a given node of the branch-and-bound tree, and applied it to the problem of mimicking strong branching decisions. The class [AlvLouWeh2017Extractor][] implements a subset of these features (40 out of 64), which are available outside of the branch-and-bound tree. Some features are derived from the static defintion of the problem (i.e. from objective function and constraint data), while some features are derived from the solution to the LP relaxation. The features have been designed to be: (i) independent of the size of the problem; (ii) invariant with respect to irrelevant problem transformations, such as row and column permutation; and (iii) independent of the scale of the problem. We refer to the paper for a more complete description.\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "a1bc38fe", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "x1 (10, 40) \n", + " [[-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 6.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 6.00e-01 1.00e+00 1.75e+01 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 1.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 7.00e-01 1.00e+00 5.10e+00 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 3.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 9.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 5.00e-01 1.00e+00 1.30e+01 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 2.00e-01 1.00e+00 9.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 8.00e-01 1.00e+00 3.40e+00 1.00e+00 2.00e-01\n", + " 1.00e+00 1.00e-01 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 7.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 6.00e-01 1.00e+00 3.80e+00 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 8.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 7.00e-01 1.00e+00 3.30e+00 1.00e+00 
2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 3.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 1.00e+00 1.00e+00 5.70e+00 1.00e+00 1.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 6.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 8.00e-01 1.00e+00 6.80e+00 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 4.00e-01 1.00e+00 6.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 8.00e-01 1.00e+00 1.40e+00 1.00e+00 1.00e-01\n", + " 1.00e+00 1.00e-01 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 5.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 5.00e-01 1.00e+00 7.60e+00 1.00e+00 1.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]]\n" + ] + } + ], + "source": [ + "from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n", + "from miplearn.h5 import H5File\n", + "\n", + "# Build the extractor\n", + "ext = AlvLouWeh2017Extractor()\n", + "\n", + "# Open previously-created multiknapsack training data\n", + "with H5File(\"data/multiknapsack/00000.h5\") as h5:\n", + " # Extract and print variable features\n", + " x1 = ext.get_var_features(h5)\n", + " print(\"x1\", x1.shape, \"\\n\", x1.round(1))" + ] + }, + { + "cell_type": "markdown", + "id": "286c9927", + "metadata": {}, + "source": [ + "
\n", + "References\n", + "\n", + "* **Alvarez, Alejandro Marcos.** *Computational and theoretical synergies between linear optimization and supervised machine learning.* (2016). University of Liège.\n", + "* **Alvarez, Alejandro Marcos, Quentin Louveaux, and Louis Wehenkel.** *A machine learning-based approximation of strong branching.* INFORMS Journal on Computing 29.1 (2017): 185-195.\n", + "\n", + "
" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_sources/guide/primal.ipynb.txt b/0.4/_sources/guide/primal.ipynb.txt new file mode 100644 index 00000000..26464ce6 --- /dev/null +++ b/0.4/_sources/guide/primal.ipynb.txt @@ -0,0 +1,291 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "880cf4c7-d3c4-4b92-85c7-04a32264cdae", + "metadata": {}, + "source": [ + "# Primal Components\n", + "\n", + "In MIPLearn, a **primal component** is class that uses machine learning to predict a (potentially partial) assignment of values to the decision variables of the problem. Predicting high-quality primal solutions may be beneficial, as they allow the MIP solver to prune potentially large portions of the search space. Alternatively, if proof of optimality is not required, the MIP solver can be used to complete the partial solution generated by the machine learning model and and double-check its feasibility. MIPLearn allows both of these usage patterns.\n", + "\n", + "In this page, we describe the four primal components currently included in MIPLearn, which employ machine learning in different ways. Each component is highly configurable, and accepts an user-provided machine learning model, which it uses for all predictions. Each component can also be configured to provide the solution to the solver in multiple ways, depending on whether proof of optimality is required.\n", + "\n", + "## Primal component actions\n", + "\n", + "Before presenting the primal components themselves, we briefly discuss the three ways a solution may be provided to the solver. Each approach has benefits and limitations, which we also discuss in this section. All primal components can be configured to use any of the following approaches.\n", + "\n", + "The first approach is to provide the solution to the solver as a **warm start**. This is implemented by the class [SetWarmStart](SetWarmStart). The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.\n", + "\n", + "[SetWarmStart]: ../../api/components/#miplearn.components.primal.actions.SetWarmStart\n", + "\n", + "The second approach is to **fix the decision variables** to their predicted values, then solve a restricted optimization problem on the remaining variables. This approach is implemented by the class `FixVariables`. 
+ "\n", + "## Memorizing primal component\n", + "\n", + "A simple machine learning strategy for the prediction of primal solutions is to memorize all distinct solutions seen during training, then try to predict, during inference time, which of those memorized solutions are most likely to be feasible and to provide a good objective value for the current instance. The most promising solutions may alternatively be combined into a single partial solution, which is then provided to the MIP solver. Both variations of this strategy are implemented by the `MemorizingPrimalComponent` class. Note that it is only applicable if the problem size, and in fact the meaning of the decision variables, remains the same across problem instances.\n", + "\n", + "More precisely, let $I_1,\\ldots,I_n$ be the training instances, and let $\\bar{x}^1,\\ldots,\\bar{x}^n$ be their respective optimal solutions. Given a new instance $I_{n+1}$, `MemorizingPrimalComponent` expects a user-provided binary classifier that assigns (through the `predict_proba` method, following scikit-learn's conventions) a score $\\delta_i$ to each solution $\\bar{x}^i$, such that solutions with higher scores are more likely to be good solutions for $I_{n+1}$. The features provided to the classifier are the instance features computed by a user-provided extractor. Given these scores, the component then performs one of the following two actions, as decided by the user:\n", + "\n", + "1. Selects the top $k$ solutions with the highest scores and provides them to the solver; this is implemented by `SelectTopSolutions`, and it is typically used with the `SetWarmStart` action.\n", + "\n", + "2. Merges the top $k$ solutions into a single partial solution, then provides it to the solver. This is implemented by `MergeTopSolutions`. More precisely, suppose that the machine learning classifier ordered the solutions in the sequence $\\bar{x}^{i_1},\\ldots,\\bar{x}^{i_n}$, with the most promising solutions appearing first, and with ties being broken arbitrarily. The component starts by keeping only the $k$ most promising solutions $\\bar{x}^{i_1},\\ldots,\\bar{x}^{i_k}$. Then it computes, for each binary decision variable $x_l$, its average assigned value $\\tilde{x}_l$:\n", + "$$\n", + " \\tilde{x}_l = \\frac{1}{k} \\sum_{j=1}^k \\bar{x}^{i_j}_l.\n", + "$$\n", + " Finally, the component constructs a merged solution $y$, defined as:\n", + "$$\n", + " y_l = \\begin{cases}\n", + " 0 & \\text{ if } \\tilde{x}_l \\le \\theta_0 \\\\\\\\\n", + " 1 & \\text{ if } \\tilde{x}_l \\ge \\theta_1 \\\\\\\\\n", + " \\square & \\text{otherwise,}\n", + " \\end{cases}\n", + "$$\n", + " where $\\theta_0$ and $\\theta_1$ are user-specified parameters, and where $\\square$ indicates that the variable is left undefined. The solution $y$ is then provided to the solver using any of the three approaches defined in the previous section.\n",
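+ "\n", + " For a quick worked example (an illustration only, using the threshold values that appear in the examples below): with $k=3$, $\\theta_0 = 0.25$ and $\\theta_1 = 0.75$, a variable equal to one in all three selected solutions has $\\tilde{x}_l = 1 \\ge \\theta_1$ and is therefore set to one; a variable equal to one in none of them has $\\tilde{x}_l = 0 \\le \\theta_0$ and is set to zero; and a variable equal to one in exactly one of them has $\\tilde{x}_l = 1/3$, which lies strictly between the two thresholds, so it is left undefined and decided by the MIP solver.\n",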
+ "\n", + "The above specification of `MemorizingPrimalComponent` is meant to be as general as possible. Simpler strategies can be implemented by configuring this component in specific ways. For example, a simpler approach employed in the literature is to collect all optimal solutions, then provide the entire list of solutions to the solver as warm starts, without any filtering or post-processing. This strategy can be implemented with `MemorizingPrimalComponent` by using a model that returns a constant value for all solutions (e.g. [scikit-learn's DummyClassifier][DummyClassifier]), then selecting the top $n$ (instead of $k$) solutions. See example below. Another simple approach is taking the solution to the most similar instance, and using it, by itself, as a warm start. This can be implemented by using a model that computes distances between the current instance and the training ones (e.g. [scikit-learn's KNeighborsClassifier][KNeighborsClassifier]), then selecting the solution to the nearest one. See also example below.
More complex strategies, of course, can also be configured.\n", + "\n", + "[DummyClassifier]: https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html\n", + "[KNeighborsClassifier]: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html\n", + "\n", + "### Examples" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "253adbf4", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [ + "from sklearn.dummy import DummyClassifier\n", + "from sklearn.neighbors import KNeighborsClassifier\n", + "\n", + "from miplearn.components.primal.actions import (\n", + " SetWarmStart,\n", + " FixVariables,\n", + " EnforceProximity,\n", + ")\n", + "from miplearn.components.primal.mem import (\n", + " MemorizingPrimalComponent,\n", + " SelectTopSolutions,\n", + " MergeTopSolutions,\n", + ")\n", + "from miplearn.extractors.dummy import DummyExtractor\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "\n", + "# Configures a memorizing primal component that collects\n", + "# all distinct solutions seen during training and provides\n", + "# them to the solver without any filtering or post-processing.\n", + "comp1 = MemorizingPrimalComponent(\n", + " clf=DummyClassifier(),\n", + " extractor=DummyExtractor(),\n", + " constructor=SelectTopSolutions(1_000_000),\n", + " action=SetWarmStart(),\n", + ")\n", + "\n", + "# Configures a memorizing primal component that finds the\n", + "# training instance with the closest objective function, then\n", + "# fixes the decision variables to the values they assumed\n", + "# at the optimal solution for that instance.\n", + "comp2 = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=1),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_var_obj_coeffs\"],\n", + " ),\n", + " constructor=SelectTopSolutions(1),\n", + " action=FixVariables(),\n", + ")\n", + "\n", + "# Configures a memorizing primal component that finds the distinct\n", + "# solutions to the 10 most similar training problem instances,\n", + "# selects the 3 solutions that were most often optimal to these\n", + "# training instances, combines them into a single partial solution,\n", + "# then enforces proximity, allowing at most 3 variables to deviate\n", + "# from the machine learning suggestion.\n", + "comp3 = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=10),\n", + " extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n", + " constructor=MergeTopSolutions(k=3, thresholds=[0.25, 0.75]),\n", + " action=EnforceProximity(3),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "f194a793", + "metadata": {}, + "source": [ + "## Independent vars primal component\n", + "\n", + "Instead of memorizing previously-seen primal solutions, it is also natural to use machine learning models to directly predict the values of the decision variables, constructing a solution from scratch. This approach has the benefit of potentially constructing novel high-quality solutions, never observed in the training data. Two variations of this strategy are supported by MIPLearn: (i) predicting the values of the decision variables independently, using multiple ML models; or (ii) predicting the values jointly, with a single model. 
We describe the first variation in this section, and the second variation in the next section.\n", + "\n", + "Let $I_1,\\ldots,I_n$ be the training instances, and let $\\bar{x}^1,\\ldots,\\bar{x}^n$ be their respective optimal solutions. For each binary decision variable $x_j$, the component `IndependentVarsPrimalComponent` creates a copy of a user-provided binary classifier and trains it to predict the optimal value of $x_j$, given $\\bar{x}^1_j,\\ldots,\\bar{x}^n_j$ as training labels. The features provided to the model are the variable features computed by a user-provided extractor. During inference time, the component uses these binary classifiers (one per binary decision variable) to construct a solution and provides it to the solver using one of the available actions.\n", + "\n", + "Three issues often arise in practice when using this approach:\n", + "\n", + "1. For certain binary variables $x_j$, the optimal value is frequently either always zero or always one in the training dataset, which poses problems for some standard scikit-learn classifiers, since they do not expect training data containing a single class. The wrapper `SingleClassFix` can be used to fix this issue (see example below).\n", + "2. It is also frequently the case that the machine learning classifiers can only predict the values of some variables with high accuracy, not all of them. In this situation, instead of computing a complete primal solution, it may be more beneficial to construct a partial solution containing values only for the variables for which the ML made a high-confidence prediction. The meta-classifier `MinProbabilityClassifier` can be used for this purpose. It asks the base classifier for the probability of the value being zero or one (using the `predict_proba` method) and erases from the primal solution all values whose probabilities are below a given threshold.\n", + "3. To make multiple copies of the provided ML classifier, MIPLearn uses the standard `sklearn.base.clone` method, which may not be suitable for classifiers from other frameworks. To handle this, it is possible to override the clone function using the `clone_fn` constructor argument, as sketched below.\n",
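+ "\n", + "A minimal sketch of the third point above (this assumes that `clone_fn` accepts any callable that takes a classifier and returns a copy of it; `copy.deepcopy` stands in for whatever copying mechanism a non-scikit-learn framework would require):\n", + "\n", + "```python\n", + "import copy\n", + "\n", + "from sklearn.linear_model import LogisticRegression\n", + "from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n", + "\n", + "# Override scikit-learn's clone with a plain deep copy. This is only needed for\n", + "# classifiers that sklearn.base.clone cannot handle; LogisticRegression is used\n", + "# here only to keep the snippet self-contained.\n", + "comp = IndependentVarsPrimalComponent(\n", + "    base_clf=LogisticRegression(),\n", + "    extractor=AlvLouWeh2017Extractor(),\n", + "    action=SetWarmStart(),\n", + "    clone_fn=copy.deepcopy,\n", + ")\n", + "```\n",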
+ "\n", + "### Examples" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "3fc0b5d1", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [ + "from sklearn.linear_model import LogisticRegression\n", + "from miplearn.classifiers.minprob import MinProbabilityClassifier\n", + "from miplearn.classifiers.singleclass import SingleClassFix\n", + "from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n", + "from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "\n", + "# Configures a primal component that independently predicts the value of each\n", + "# binary variable using logistic regression and provides it to the solver as\n", + "# warm start. Erases predictions with probability less than 99%; applies\n", + "# single-class fix; and uses AlvLouWeh2017 features.\n", + "comp = IndependentVarsPrimalComponent(\n", + "    base_clf=SingleClassFix(\n", + "        MinProbabilityClassifier(\n", + "            base_clf=LogisticRegression(),\n", + "            thresholds=[0.99, 0.99],\n", + "        ),\n", + "    ),\n", + "    extractor=AlvLouWeh2017Extractor(),\n", + "    action=SetWarmStart(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "45107a0c", + "metadata": {}, + "source": [ + "## Joint vars primal component\n", + "In the previous subsection, we used multiple machine learning models to independently predict the values of the binary decision variables. When these values are correlated, an alternative approach is to jointly predict the values of all binary variables using a single machine learning model. This strategy is implemented by `JointVarsPrimalComponent`. Compared to the previous ones, this component is much more straightforward. It simply extracts instance features, using the user-provided feature extractor, then directly trains the user-provided binary classifier (using the `fit` method), without making any copies. The trained classifier is then used to predict entire solutions (using the `predict` method), which are given to the solver using one of the previously discussed methods. In the example below, we illustrate the usage of this component with a simple feed-forward neural network.\n", + "\n", + "`JointVarsPrimalComponent` can also be used to implement strategies that use multiple machine learning models, but not independently. For example, a common strategy in multioutput prediction is building a *classifier chain*. In this approach, the first decision variable is predicted using the instance features alone, while the $n$-th decision variable is predicted using the instance features plus the predicted values of the $n-1$ previous variables.
This can be easily implemented using scikit-learn's `ClassifierChain` estimator, as shown in the example below.\n", + "\n", + "### Examples" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "cf9b52dd", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [ + "from sklearn.multioutput import ClassifierChain\n", + "from sklearn.neural_network import MLPClassifier\n", + "from miplearn.components.primal.joint import JointVarsPrimalComponent\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "\n", + "# Configures a primal component that uses a feedforward neural network\n", + "# to jointly predict the values of the binary variables, based on the\n", + "# objective cost function, and provides the solution to the solver as\n", + "# a warm start.\n", + "comp = JointVarsPrimalComponent(\n", + " clf=MLPClassifier(),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_var_obj_coeffs\"],\n", + " ),\n", + " action=SetWarmStart(),\n", + ")\n", + "\n", + "# Configures a primal component that uses a chain of logistic regression\n", + "# models to jointly predict the values of the binary variables, based on\n", + "# the objective function.\n", + "comp = JointVarsPrimalComponent(\n", + " clf=ClassifierChain(SingleClassFix(LogisticRegression())),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_var_obj_coeffs\"],\n", + " ),\n", + " action=SetWarmStart(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "dddf7be4", + "metadata": {}, + "source": [ + "## Expert primal component\n", + "\n", + "Before spending time and effort choosing a machine learning strategy and tweaking its parameters, it is usually a good idea to evaluate what would be the performance impact of the model if its predictions were 100% accurate. This is especially important for the prediction of warm starts, since they are not always very beneficial. To simplify this task, MIPLearn provides `ExpertPrimalComponent`, a component which simply loads the optimal solution from the HDF5 file, assuming that it has already been computed, then directly provides it to the solver using one of the available methods. 
This component is useful in benchmarks, to evaluate how close to the best theoretical performance the machine learning components are.\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "9e2e81b9", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [ + "from miplearn.components.primal.expert import ExpertPrimalComponent\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "\n", + "# Configures an expert primal component, which reads a pre-computed\n", + "# optimal solution from the HDF5 file and provides it to the solver\n", + "# as warm start.\n", + "comp = ExpertPrimalComponent(action=SetWarmStart())" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_sources/guide/problems.ipynb.txt b/0.4/_sources/guide/problems.ipynb.txt new file mode 100644 index 00000000..acc35fb2 --- /dev/null +++ b/0.4/_sources/guide/problems.ipynb.txt @@ -0,0 +1,1607 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "f89436b4-5bc5-4ae3-a20a-522a2cd65274", + "metadata": {}, + "source": [ + "# Benchmark Problems\n", + "\n", + "## Overview\n", + "\n", + "Benchmark sets such as [MIPLIB](https://miplib.zib.de/) or [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of magnitude more instances available for training; (ii) current machine learning methods typically perform best on sets of homogeneous instances, but general-purpose benchmark sets contain relatively few examples of each problem type.\n", + "\n", + "To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. As of MIPLearn 0.3, nine problem generators are available, each customizable with user-provided probability distributions and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of the same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n", + "\n", + "In the following, we describe the problems included in the library, their MIP formulation and the generation algorithm." + ] + }, + { + "cell_type": "markdown", + "id": "bd99c51f", + "metadata": {}, + "source": [ + "
\n", + "Warning\n", + "\n", + "The random instance generators and formulations shown below are subject to change. If you use them in your research, for reproducibility, you should specify the MIPLearn version and all parameters.\n", + "
\n", + "\n", + "
\n", + "Note\n", + "\n", + "- To make the instances easier to process, all formulations are written as a minimization problem.\n", + "- Some problem formulations, such as the one for the *traveling salesman problem*, contain an exponential number of constraints, which are enforced through constraint generation. The MPS files for these problems contain only the constraints that were generated during a trial run, not the entire set of constraints. Resolving the MPS file, therefore, may not generate a feasible primal solution for the problem.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "830f3784-a3fc-4e2f-a484-e7808841ffe8", + "metadata": { + "tags": [] + }, + "source": [ + "## Bin Packing\n", + "\n", + "**Bin packing** is a combinatorial optimization problem that asks for the optimal way to pack a given set of items into a finite number of containers (or bins) of fixed capacity. More specifically, the problem is to assign indivisible items of different sizes to identical bins, while minimizing the number of bins used. The problem is NP-hard and has many practical applications, including logistics and warehouse management, where it is used to determine how to best store and transport goods using a limited amount of space." + ] + }, + { + "cell_type": "markdown", + "id": "af933298-92a9-4c5d-8d07-0d4918dedbb8", + "metadata": { + "tags": [] + }, + "source": [ + "### Formulation\n", + "\n", + "Let $n$ be the number of items, and $s_i$ the size of the $i$-th item. Also let $B$ be the size of the bins. For each bin $j$, let $y_j$ be a binary decision variable which equals one if the bin is used. For every item-bin pair $(i,j)$, let $x_{ij}$ be a binary decision variable which equals one if item $i$ is assigned to bin $j$. The bin packing problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "5e502345", + "metadata": {}, + "source": [ + "\n", + "$$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\;\n", + " & \\sum_{j=1}^n y_j \\\\\n", + "\\text{subject to} \\;\\;\\;\n", + " & \\sum_{i=1}^n s_i x_{ij} \\leq B y_j & \\forall j=1,\\ldots,n \\\\\n", + " & \\sum_{j=1}^n x_{ij} = 1 & \\forall i=1,\\ldots,n \\\\\n", + " & y_i \\in \\{0,1\\} & \\forall i=1,\\ldots,n \\\\\n", + " & x_{ij} \\in \\{0,1\\} & \\forall i,j=1,\\ldots,n \\\\\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "9cba2077", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "Random instances of the bin packing problem can be generated using the class [BinPackGenerator][BinPackGenerator].\n", + "\n", + "If `fix_items=False`, the class samples the user-provided probability distributions `n`, `sizes` and `capacity` to decide, respectively, the number of items, the sizes of the items and capacity of the bin. All values are sampled independently.\n", + "\n", + "If `fix_items=True`, the class creates a reference instance, using the method previously described, then generates additional instances by perturbing its item sizes and bin capacity. More specifically, the sizes of the items are set to $s_i \\gamma_i$, where $s_i$ is the size of the $i$-th item in the reference instance and $\\gamma_i$ is sampled from `sizes_jitter`. Similarly, the bin size is set to $B \\beta$, where $B$ is the reference bin size and $\\beta$ is sampled from `capacity_jitter`. The number of items remains the same across all generated instances.\n", + "\n", + "[BinPackGenerator]: ../../api/problems/#miplearn.problems.binpack.BinPackGenerator" + ] + }, + { + "cell_type": "markdown", + "id": "2bc62803", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "f14e560c-ef9f-4c48-8467-72d6acce5f9f", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.409419720Z", + "start_time": "2023-11-07T16:29:47.824353556Z" + }, + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "0 [ 8.47 26. 
19.52 14.11 3.65 3.65 1.4 21.76 14.82 16.96] 102.24\n", + "1 [ 8.69 22.78 17.81 14.83 4.12 3.67 1.46 22.05 13.66 18.08] 93.41\n", + "2 [ 8.55 25.9 20. 15.89 3.75 3.59 1.51 21.4 13.89 17.68] 90.69\n", + "3 [10.13 22.62 18.89 14.4 3.92 3.94 1.36 23.69 15.85 19.26] 107.9\n", + "4 [ 9.55 25.77 16.79 14.06 3.55 3.76 1.42 20.66 16.02 17.19] 95.62\n", + "5 [ 9.44 22.06 19.41 13.69 4.28 4.11 1.36 19.51 15.98 18.43] 104.58\n", + "6 [ 9.87 21.74 17.78 13.82 4.18 4. 1.4 19.76 14.46 17.08] 104.59\n", + "7 [ 9.62 25.61 18.2 13.83 4.07 4.1 1.47 22.83 15.01 17.78] 98.55\n", + "8 [ 8.47 21.9 16.58 15.37 3.76 3.91 1.57 20.57 14.76 18.61] 94.58\n", + "9 [ 8.57 22.77 17.06 16.25 4.14 4. 1.56 22.97 14.09 19.09] 100.79\n", + "\n", + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 20 rows, 110 columns and 210 nonzeros\n", + "Model fingerprint: 0x1ff9913f\n", + "Variable types: 0 continuous, 110 integer (110 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+02]\n", + " Objective range [1e+00, 1e+00]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective 5.0000000\n", + "Presolve time: 0.00s\n", + "Presolved: 20 rows, 110 columns, 210 nonzeros\n", + "Variable types: 0 continuous, 110 integer (110 binary)\n", + "\n", + "Root relaxation: objective 1.274844e+00, 38 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1.27484 0 4 5.00000 1.27484 74.5% - 0s\n", + "H 0 0 4.0000000 1.27484 68.1% - 0s\n", + "H 0 0 2.0000000 1.27484 36.3% - 0s\n", + " 0 0 1.27484 0 4 2.00000 1.27484 36.3% - 0s\n", + "\n", + "Explored 1 nodes (38 simplex iterations) in 0.03 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 3: 2 4 5 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 2.000000000000e+00, best bound 2.000000000000e+00, gap 0.0000%\n", + "\n", + "User-callback calls 143, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.binpack import BinPackGenerator, build_binpack_model_gurobipy\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances of the binpack problem with ten items\n", + "data = BinPackGenerator(\n", + " n=randint(low=10, high=11),\n", + " sizes=uniform(loc=0, scale=25),\n", + " capacity=uniform(loc=100, scale=0),\n", + " sizes_jitter=uniform(loc=0.9, scale=0.2),\n", + " capacity_jitter=uniform(loc=0.9, scale=0.2),\n", + " fix_items=True,\n", + ").generate(10)\n", + "\n", + "# Print sizes and capacities\n", + "for i in range(10):\n", + " print(i, data[i].sizes, data[i].capacity)\n", + "print()\n", + "\n", + "# Optimize first instance\n", + "model = build_binpack_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "9a3df608-4faf-444b-b5c2-18d3e90cbb5a", + "metadata": { + "tags": [] + }, + "source": [ + "## Multi-Dimensional Knapsack\n", + "\n", + "The **multi-dimensional 
knapsack problem** is a generalization of the classic knapsack problem, which involves selecting a subset of items to be placed in a knapsack such that the total value of the items is maximized without exceeding a maximum weight. In this generalization, items have multiple weights (representing multiple resources), and multiple weight constraints must be satisfied." ] }, { "cell_type": "markdown", "id": "8d989002-d837-4ccf-a224-0504a6d66473", "metadata": { "tags": [] }, "source": [ "### Formulation\n", + "\n", + "Let $n$ be the number of items and $m$ be the number of resources. For each item $j$ and resource $i$, let $p_j$ be the price of the item, let $w_{ij}$ be the amount of resource $i$ that item $j$ consumes (i.e. the $i$-th weight of item $j$), and let $b_i$ be the total amount of resource $i$ available (or the size of the $i$-th knapsack). The formulation is given by:" ] }, { "cell_type": "markdown", "id": "d0d3ea42", "metadata": {}, "source": [ "\n", + "$$\n", + "\\begin{align*}\n", + " \\text{minimize}\\;\\;\\;\n", + " & - \\sum_{j=1}^n p_j x_j\n", + " \\\\\n", + " \\text{subject to}\\;\\;\\;\n", + " & \\sum_{j=1}^n w_{ij} x_j \\leq b_i\n", + " & \\forall i=1,\\ldots,m \\\\\n", + " & x_j \\in \\{0,1\\}\n", + " & \\forall j=1,\\ldots,n\n", + "\\end{align*}\n", + "$$" ] }, { "cell_type": "markdown", "id": "81b5b085-cfa9-45ce-9682-3aeb9be96cba", "metadata": {}, "source": [ "### Random instance generator\n", + "\n", + "The class [MultiKnapsackGenerator][MultiKnapsackGenerator] can be used to generate random instances of this problem. The number of items $n$ and knapsacks $m$ are sampled from the user-provided probability distributions `n` and `m`. The weights $w_{ij}$ are sampled independently from the provided distribution `w`. The capacity of knapsack $i$ is set to\n", + "\n", + "[MultiKnapsackGenerator]: ../../api/problems/#miplearn.problems.multiknapsack.MultiKnapsackGenerator\n", + "\n", + "$$\n", + " b_i = \\alpha_i \\sum_{j=1}^n w_{ij}\n", + "$$\n", + "\n", + "where $\\alpha_i$, the tightness ratio, is sampled from the provided probability\n", + "distribution `alpha`. To make the instances more challenging, the costs of the items\n", + "are linearly correlated to their average weights. More specifically, the price of each\n", + "item $j$ is set to:\n", + "\n", + "$$\n", + " p_j = \\sum_{i=1}^m \\frac{w_{ij}}{m} + K u_j,\n", + "$$\n", + "\n", + "where $K$, the correlation coefficient, and $u_j$, the correlation multiplier, are sampled\n", + "from the provided probability distributions `K` and `u`.\n", + "\n", + "If `fix_w=True` is provided, then $w_{ij}$ are kept the same in all generated instances. This also implies that $n$ and $m$ are kept fixed. Although the prices and capacities are derived from $w_{ij}$, as long as `u` and `K` are not constants, the generated instances will still not be completely identical.\n", + "\n", + "\n", + "If a probability distribution `w_jitter` is provided, then item weights will be set to $w_{ij} \\gamma_{ij}$ where $\\gamma_{ij}$ is sampled from `w_jitter`. When combined with `fix_w=True`, this argument may be used to generate instances where the weight of each item is roughly the same, but not exactly identical, across all instances. The prices of the items and the capacities of the knapsacks will be calculated as above, but using these perturbed weights instead.\n", + "\n", + "By default, all generated prices, weights and capacities are rounded to the nearest integer. 
If `round=False` is provided, this rounding will be disabled." + ] + }, + { + "cell_type": "markdown", + "id": "f92135b8-67e7-4ec5-aeff-2fc17ad5e46d", + "metadata": {}, + "source": [ + "
\n", + "References\n", + "\n", + "* **Freville, Arnaud, and Gérard Plateau.** *An efficient preprocessing procedure for the multidimensional 0–1 knapsack problem.* Discrete applied mathematics 49.1-3 (1994): 189-212.\n", + "* **Fréville, Arnaud.** *The multidimensional 0–1 knapsack problem: An overview.* European Journal of Operational Research 155.1 (2004): 1-21.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "f12a066f", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "1ce5f8fb-2769-4fbd-a40c-fd62b897690a", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.485068449Z", + "start_time": "2023-11-07T16:29:48.406139946Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "prices\n", + " [350. 692. 454. 709. 605. 543. 321. 674. 571. 341.]\n", + "weights\n", + " [[392. 977. 764. 622. 158. 163. 56. 840. 574. 696.]\n", + " [ 20. 948. 860. 209. 178. 184. 293. 541. 414. 305.]\n", + " [629. 135. 278. 378. 466. 803. 205. 492. 584. 45.]\n", + " [630. 173. 64. 907. 947. 794. 312. 99. 711. 439.]\n", + " [117. 506. 35. 915. 266. 662. 312. 516. 521. 178.]]\n", + "capacities\n", + " [1310. 988. 1004. 1269. 1007.]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 5 rows, 10 columns and 50 nonzeros\n", + "Model fingerprint: 0xaf3ac15e\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [2e+01, 1e+03]\n", + " Objective range [3e+02, 7e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+03, 1e+03]\n", + "Found heuristic solution: objective -804.0000000\n", + "Presolve removed 0 rows and 3 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 5 rows, 7 columns, 34 nonzeros\n", + "Variable types: 0 continuous, 7 integer (7 binary)\n", + "\n", + "Root relaxation: objective -1.428726e+03, 4 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 -1428.7265 0 4 -804.00000 -1428.7265 77.7% - 0s\n", + "H 0 0 -1279.000000 -1428.7265 11.7% - 0s\n", + "\n", + "Cutting planes:\n", + " Cover: 1\n", + "\n", + "Explored 1 nodes (4 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 2: -1279 -804 \n", + "No other solutions better than -1279\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective -1.279000000000e+03, best bound -1.279000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 490, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.multiknapsack import (\n", + " MultiKnapsackGenerator,\n", + " build_multiknapsack_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Generate ten similar random instances of the multiknapsack problem with\n", + "# ten items, five resources and weights around [0, 1000].\n", + "data = MultiKnapsackGenerator(\n", + " n=randint(low=10, high=11),\n", + " m=randint(low=5, high=6),\n", + " w=uniform(loc=0, scale=1000),\n", + " K=uniform(loc=100, scale=0),\n", + " u=uniform(loc=1, scale=0),\n", + " alpha=uniform(loc=0.25, scale=0),\n", + " w_jitter=uniform(loc=0.95, scale=0.1),\n", + " p_jitter=uniform(loc=0.75, scale=0.5),\n", + " fix_w=True,\n", + ").generate(10)\n", + "\n", + "# Print data for one of the instances\n", + "print(\"prices\\n\", data[0].prices)\n", + 
"print(\"weights\\n\", data[0].weights)\n", + "print(\"capacities\\n\", data[0].capacities)\n", + "print()\n", + "\n", + "# Build model and optimize\n", + "model = build_multiknapsack_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "e20376b0-0781-4bfa-968f-ded5fa47e176", + "metadata": { + "tags": [] + }, + "source": [ + "## Capacitated P-Median\n", + "\n", + "The **capacitated p-median** problem is a variation of the classic $p$-median problem, in which a set of customers must be served by a set of facilities. In the capacitated $p$-Median problem, each facility has a fixed capacity, and the goal is to minimize the total cost of serving the customers while ensuring that the capacity of each facility is not exceeded. Variations of problem are often used in logistics and supply chain management to determine the most efficient locations for warehouses or distribution centers." + ] + }, + { + "cell_type": "markdown", + "id": "2af65137-109e-4ca0-8753-bd999825204f", + "metadata": { + "tags": [] + }, + "source": [ + "### Formulation\n", + "\n", + "Let $I=\\{1,\\ldots,n\\}$ be the set of customers. For each customer $i \\in I$, let $d_i$ be its demand and let $y_i$ be a binary decision variable that equals one if we decide to open a facility at that customer's location. For each pair $(i,j) \\in I \\times I$, let $x_{ij}$ be a binary decision variable that equals one if customer $i$ is assigned to facility $j$. Furthermore, let $w_{ij}$ be the cost of serving customer $i$ from facility $j$, let $p$ be the number of facilities we must open, and let $c_j$ be the capacity of facility $j$. The problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "a2494ab1-d306-4db7-a100-8f1dfd4a55d7", + "metadata": { + "tags": [] + }, + "source": [ + "$$\n", + "\\begin{align*}\n", + " \\text{minimize}\\;\\;\\;\n", + " & \\sum_{i \\in I} \\sum_{j \\in I} w_{ij} x_{ij}\n", + " \\\\\n", + " \\text{subject to}\\;\\;\\;\n", + " & \\sum_{j \\in I} x_{ij} = 1 & \\forall i \\in I \\\\\n", + " & \\sum_{j \\in I} y_j = p \\\\\n", + " & \\sum_{i \\in I} d_i x_{ij} \\leq c_j y_j & \\forall j \\in I \\\\\n", + " & x_{ij} \\in \\{0, 1\\} & \\forall i, j \\in I \\\\\n", + " & y_j \\in \\{0, 1\\} & \\forall j \\in I\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "9dddf0d6-1f86-40d4-93a8-ccfe93d38e0d", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [PMedianGenerator][PMedianGenerator] can be used to generate random instances of this problem. First, it decides the number of customers and the parameter $p$ by sampling the provided `n` and `p` distributions, respectively. Then, for each customer $i$, the class builds its geographical location $(x_i, y_i)$ by sampling the provided `x` and `y` distributions. For each $i$, the demand for customer $i$ and the capacity of facility $i$ are decided by sampling the provided distributions `demands` and `capacities`, respectively. Finally, the costs $w_{ij}$ are set to the Euclidean distance between the locations of customers $i$ and $j$.\n", + "\n", + "If `fixed=True`, then the number of customers, their locations, the parameter $p$, the demands and the capacities are only sampled from their respective distributions exactly once, to build a reference instance which is then randomly perturbed. 
Specifically, in each perturbation, the distances, demands and capacities are multiplied by random scaling factors sampled from the distributions `distances_jitter`, `demands_jitter` and `capacities_jitter`, respectively. The result is a list of instances that have the same set of customers, but slightly different demands, capacities and distances.\n", + "\n", + "[PMedianGenerator]: ../../api/problems/#miplearn.problems.pmedian.PMedianGenerator" + ] + }, + { + "cell_type": "markdown", + "id": "4e701397", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "4e0e4223-b4e0-4962-a157-82a23a86e37d", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.575025403Z", + "start_time": "2023-11-07T16:29:48.453962705Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "p = 5\n", + "distances =\n", + " [[ 0. 50.17 82.42 32.76 33.2 35.45 86.88 79.11 43.17 66.2 ]\n", + " [ 50.17 0. 72.64 72.51 17.06 80.25 39.92 68.93 43.41 42.96]\n", + " [ 82.42 72.64 0. 71.69 70.92 82.51 67.88 3.76 39.74 30.73]\n", + " [ 32.76 72.51 71.69 0. 56.56 11.03 101.35 69.39 42.09 68.58]\n", + " [ 33.2 17.06 70.92 56.56 0. 63.68 54.71 67.16 34.89 44.99]\n", + " [ 35.45 80.25 82.51 11.03 63.68 0. 111.04 80.29 52.78 79.36]\n", + " [ 86.88 39.92 67.88 101.35 54.71 111.04 0. 65.13 61.37 40.82]\n", + " [ 79.11 68.93 3.76 69.39 67.16 80.29 65.13 0. 36.26 27.24]\n", + " [ 43.17 43.41 39.74 42.09 34.89 52.78 61.37 36.26 0. 26.62]\n", + " [ 66.2 42.96 30.73 68.58 44.99 79.36 40.82 27.24 26.62 0. ]]\n", + "demands = [6.12 1.39 2.92 3.66 4.56 7.85 2. 5.14 5.92 0.46]\n", + "capacities = [151.89 42.63 16.26 237.22 241.41 202.1 76.15 24.42 171.06 110.04]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 21 rows, 110 columns and 220 nonzeros\n", + "Model fingerprint: 0x8d8d9346\n", + "Variable types: 0 continuous, 110 integer (110 binary)\n", + "Coefficient statistics:\n", + " Matrix range [5e-01, 2e+02]\n", + " Objective range [4e+00, 1e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 5e+00]\n", + "Found heuristic solution: objective 368.7900000\n", + "Presolve time: 0.00s\n", + "Presolved: 21 rows, 110 columns, 220 nonzeros\n", + "Variable types: 0 continuous, 110 integer (110 binary)\n", + "Found heuristic solution: objective 245.6400000\n", + "\n", + "Root relaxation: objective 0.000000e+00, 18 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 0.00000 0 6 245.64000 0.00000 100% - 0s\n", + "H 0 0 185.1900000 0.00000 100% - 0s\n", + "H 0 0 148.6300000 17.14595 88.5% - 0s\n", + "H 0 0 113.1800000 17.14595 84.9% - 0s\n", + " 0 0 17.14595 0 10 113.18000 17.14595 84.9% - 0s\n", + "H 0 0 99.5000000 17.14595 82.8% - 0s\n", + "H 0 0 98.3900000 17.14595 82.6% - 0s\n", + "H 0 0 93.9800000 64.28872 31.6% - 0s\n", + " 0 0 64.28872 0 15 93.98000 64.28872 31.6% - 0s\n", + "H 0 0 93.9200000 64.28872 31.5% - 0s\n", + " 0 0 86.06884 0 15 93.92000 86.06884 8.36% - 0s\n", + "* 0 0 0 91.2300000 91.23000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (70 simplex iterations) in 0.08 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 
available processors)\n", + "\n", + "Solution count 10: 91.23 93.92 93.98 ... 368.79\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 9.123000000000e+01, best bound 9.123000000000e+01, gap 0.0000%\n", + "\n", + "User-callback calls 190, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.pmedian import PMedianGenerator, build_pmedian_model_gurobipy\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances with ten customers located in a\n", + "# 100x100 square, with demands in [0,10], capacities in [0, 250].\n", + "data = PMedianGenerator(\n", + " x=uniform(loc=0.0, scale=100.0),\n", + " y=uniform(loc=0.0, scale=100.0),\n", + " n=randint(low=10, high=11),\n", + " p=randint(low=5, high=6),\n", + " demands=uniform(loc=0, scale=10),\n", + " capacities=uniform(loc=0, scale=250),\n", + " distances_jitter=uniform(loc=0.9, scale=0.2),\n", + " demands_jitter=uniform(loc=0.9, scale=0.2),\n", + " capacities_jitter=uniform(loc=0.9, scale=0.2),\n", + " fixed=True,\n", + ").generate(10)\n", + "\n", + "# Print data for one of the instances\n", + "print(\"p =\", data[0].p)\n", + "print(\"distances =\\n\", data[0].distances)\n", + "print(\"demands =\", data[0].demands)\n", + "print(\"capacities =\", data[0].capacities)\n", + "print()\n", + "\n", + "# Build and optimize model\n", + "model = build_pmedian_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "36129dbf-ecba-4026-ad4d-f2356bad4a26", + "metadata": {}, + "source": [ + "## Set cover\n", + "\n", + "The **set cover problem** is a classical NP-hard optimization problem which aims to minimize the number of sets needed to cover all elements in a given universe. Each set may contain a different number of elements, and sets may overlap with each other. This problem can be useful in various real-world scenarios such as scheduling, resource allocation, and network design." + ] + }, + { + "cell_type": "markdown", + "id": "d5254e7a", + "metadata": {}, + "source": [ + "### Formulation\n", + "\n", + "Let $U = \\{1,\\ldots,n\\}$ be a given universe set, and let $S=\\{S_1,\\ldots,S_m\\}$ be a collection of sets whose union equal $U$. For each $j \\in \\{1,\\ldots,m\\}$, let $w_j$ be the weight of set $S_j$, and let $x_j$ be a binary decision variable that equals one if set $S_j$ is chosen. The set cover problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "5062d606-678c-45ba-9a45-d3c8b7401ad1", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align*}\n", + " \\text{minimize}\\;\\;\\;\n", + " & \\sum_{j=1}^m w_j x_j\n", + " \\\\\n", + " \\text{subject to}\\;\\;\\;\n", + " & \\sum_{j : i \\in S_j} x_j \\geq 1 & \\forall i \\in \\{1,\\ldots,n\\} \\\\\n", + " & x_j \\in \\{0, 1\\} & \\forall j \\in \\{1,\\ldots,m\\}\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "2732c050-2e11-44fc-bdd1-1b804a60f166", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [SetCoverGenerator] can generate random instances of this problem. The class first decides the number of elements and sets by sampling the provided distributions `n_elements` and `n_sets`, respectively. Then it generates a random incidence matrix $M$, as follows:\n", + "\n", + "1. 
The density $d$ of $M$ is decided by sampling the provided probability distribution `density`.\n", + "2. Each entry of $M$ is then sampled from the Bernoulli distribution, with probability $d$.\n", + "3. To ensure that each element belongs to at least one set, the class identifies elements that are not contained in any set, then assigns them to a random set (chosen uniformly).\n", + "4. Similarly, to ensure that each set contains at least one element, the class identifies empty sets, then modifies them to include one random element (chosen uniformly).\n", + "\n", + "Finally, the weight of set $j$ is set to $w_j + K | S_j |$, where $w_j$ and $k$ are sampled from `costs` and `K`, respectively, and where $|S_j|$ denotes the size of set $S_j$. The parameter $K$ is used to introduce some correlation between the size of the set and its weight, making the instance more challenging. Note that `K` is only sampled once for the entire instance.\n", + "\n", + "If `fix_sets=True`, then all generated instances have exactly the same sets and elements. The costs of the sets, however, are multiplied by random scaling factors sampled from the provided probability distribution `costs_jitter`.\n", + "\n", + "[SetCoverGenerator]: ../../api/problems/#miplearn.problems.setcover.SetCoverGenerator" + ] + }, + { + "cell_type": "markdown", + "id": "569aa5ec-d475-41fa-a5d9-0b1a675fdf95", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "3224845b-9afd-463e-abf4-e0e93d304859", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.804292323Z", + "start_time": "2023-11-07T16:29:48.492933268Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "matrix\n", + " [[1 0 0 0 1 1 1 0 0 0]\n", + " [1 0 0 1 1 1 1 0 1 1]\n", + " [0 1 1 1 1 0 1 0 0 1]\n", + " [0 1 1 0 0 0 1 1 0 1]\n", + " [1 1 1 0 1 0 1 0 0 1]]\n", + "costs [1044.58 850.13 1014.5 944.83 697.9 971.87 213.49 220.98 70.23\n", + " 425.33]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 5 rows, 10 columns and 28 nonzeros\n", + "Model fingerprint: 0xe5c2d4fa\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [7e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective 213.4900000\n", + "Presolve removed 5 rows and 10 columns\n", + "Presolve time: 0.00s\n", + "Presolve: All rows and columns removed\n", + "\n", + "Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)\n", + "Thread count was 1 (of 20 available processors)\n", + "\n", + "Solution count 1: 213.49 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 2.134900000000e+02, best bound 2.134900000000e+02, gap 0.0000%\n", + "\n", + "User-callback calls 178, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.setcover import SetCoverGenerator, build_setcover_model_gurobipy\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Build random instances with five elements, ten sets and costs\n", 
+ "# in the [0, 1000] interval, with a correlation factor of 25 and\n", + "# an incidence matrix with 25% density.\n", + "data = SetCoverGenerator(\n", + " n_elements=randint(low=5, high=6),\n", + " n_sets=randint(low=10, high=11),\n", + " costs=uniform(loc=0.0, scale=1000.0),\n", + " costs_jitter=uniform(loc=0.90, scale=0.20),\n", + " density=uniform(loc=0.5, scale=0.00),\n", + " K=uniform(loc=25.0, scale=0.0),\n", + " fix_sets=True,\n", + ").generate(10)\n", + "\n", + "# Print problem data for one instance\n", + "print(\"matrix\\n\", data[0].incidence_matrix)\n", + "print(\"costs\", data[0].costs)\n", + "print()\n", + "\n", + "# Build and optimize model\n", + "model = build_setcover_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "255a4e88-2e38-4a1b-ba2e-806b6bd4c815", + "metadata": {}, + "source": [ + "## Set Packing\n", + "\n", + "**Set packing** is a classical optimization problem that asks for the maximum number of disjoint sets within a given list. This problem often arises in real-world situations where a finite number of resources need to be allocated to tasks, such as airline flight crew scheduling." + ] + }, + { + "cell_type": "markdown", + "id": "19342eb1", + "metadata": {}, + "source": [ + "### Formulation\n", + "\n", + "Let $U=\\{1,\\ldots,n\\}$ be a given universe set, and let $S = \\{S_1, \\ldots, S_m\\}$ be a collection of subsets of $U$. For each subset $j \\in \\{1, \\ldots, m\\}$, let $w_j$ be the weight of $S_j$ and let $x_j$ be a binary decision variable which equals one if set $S_j$ is chosen. The problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "0391b35b", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align*}\n", + " \\text{minimize}\\;\\;\\;\n", + " & -\\sum_{j=1}^m w_j x_j\n", + " \\\\\n", + " \\text{subject to}\\;\\;\\;\n", + " & \\sum_{j : i \\in S_j} x_j \\leq 1 & \\forall i \\in \\{1,\\ldots,n\\} \\\\\n", + " & x_j \\in \\{0, 1\\} & \\forall j \\in \\{1,\\ldots,m\\}\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "c2d7df7b", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [SetPackGenerator][SetPackGenerator] can generate random instances of this problem. It accepts exactly the same arguments, and generates instance data in exactly the same way as [SetCoverGenerator][SetCoverGenerator]. 
For more details, please see the documentation for that class.\n", + "\n", + "[SetPackGenerator]: ../../api/problems/#miplearn.problems.setpack.SetPackGenerator\n", + "[SetCoverGenerator]: ../../api/problems/#miplearn.problems.setcover.SetCoverGenerator\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "cc797da7", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.806917868Z", + "start_time": "2023-11-07T16:29:48.781619530Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "matrix\n", + " [[1 0 0 0 1 1 1 0 0 0]\n", + " [1 0 0 1 1 1 1 0 1 1]\n", + " [0 1 1 1 1 0 1 0 0 1]\n", + " [0 1 1 0 0 0 1 1 0 1]\n", + " [1 1 1 0 1 0 1 0 0 1]]\n", + "costs [1044.58 850.13 1014.5 944.83 697.9 971.87 213.49 220.98 70.23\n", + " 425.33]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 5 rows, 10 columns and 28 nonzeros\n", + "Model fingerprint: 0x4ee91388\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [7e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective -1265.560000\n", + "Presolve removed 5 rows and 10 columns\n", + "Presolve time: 0.00s\n", + "Presolve: All rows and columns removed\n", + "\n", + "Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)\n", + "Thread count was 1 (of 20 available processors)\n", + "\n", + "Solution count 2: -1986.37 -1265.56 \n", + "No other solutions better than -1986.37\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective -1.986370000000e+03, best bound -1.986370000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 238, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.setpack import SetPackGenerator, build_setpack_model_gurobipy\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Build random instances with five elements, ten sets and costs\n", + "# in the [0, 1000] interval, with a correlation factor of 25 and\n", + "# an incidence matrix with 25% density.\n", + "data = SetPackGenerator(\n", + " n_elements=randint(low=5, high=6),\n", + " n_sets=randint(low=10, high=11),\n", + " costs=uniform(loc=0.0, scale=1000.0),\n", + " costs_jitter=uniform(loc=0.90, scale=0.20),\n", + " density=uniform(loc=0.5, scale=0.00),\n", + " K=uniform(loc=25.0, scale=0.0),\n", + " fix_sets=True,\n", + ").generate(10)\n", + "\n", + "# Print problem data for one instance\n", + "print(\"matrix\\n\", data[0].incidence_matrix)\n", + "print(\"costs\", data[0].costs)\n", + "print()\n", + "\n", + "# Build and optimize model\n", + "model = build_setpack_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "373e450c-8f8b-4b59-bf73-251bdd6ff67e", + "metadata": {}, + "source": [ + "## Stable Set\n", + "\n", + "The **maximum-weight stable set problem** is a classical optimization problem in graph theory which asks for the maximum-weight subset of vertices in a graph 
such that no two vertices in the subset are adjacent. The problem often arises in real-world scheduling or resource allocation situations, where stable sets represent tasks or resources that can be chosen simultaneously without conflicts.\n", + "\n", + "### Formulation\n", + "\n", + "Let $G=(V,E)$ be a simple undirected graph, and for each vertex $v \\in V$, let $w_v$ be its weight. The problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "2f74dd10", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\; & -\\sum_{v \\in V} w_v x_v \\\\\n", + "\\text{such that} \\;\\;\\; & x_v + x_u \\leq 1 & \\forall (v,u) \\in E \\\\\n", + "& x_v \\in \\{0, 1\\} & \\forall v \\in V\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "ef030168", + "metadata": {}, + "source": [ + "\n", + "### Random instance generator\n", + "\n", + "The class [MaxWeightStableSetGenerator][MaxWeightStableSetGenerator] can be used to generate random instances of this problem. The class first samples the user-provided probability distributions `n` and `p` to decide the number of vertices and the density of the graph. Then, it generates a random Erdős-Rényi graph $G_{n,p}$. We recall that, in such a graph, each potential edge is included with probabilty $p$, independently for each other. The class then samples the provided probability distribution `w` to decide the vertex weights.\n", + "\n", + "[MaxWeightStableSetGenerator]: ../../api/problems/#miplearn.problems.stab.MaxWeightStableSetGenerator\n", + "\n", + "If `fix_graph=True`, then all generated instances have the same random graph. For each instance, the weights are decided by sampling `w`, as described above.\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "0f996e99-0ec9-472b-be8a-30c9b8556931", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.954896857Z", + "start_time": "2023-11-07T16:29:48.825579097Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "graph [(0, 2), (0, 4), (0, 8), (1, 2), (1, 3), (1, 5), (1, 6), (1, 9), (2, 5), (2, 9), (3, 6), (3, 7), (6, 9), (7, 8), (8, 9)]\n", + "weights[0] [37.45 95.07 73.2 59.87 15.6 15.6 5.81 86.62 60.11 70.81]\n", + "weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]\n", + "\n", + "Set parameter PreCrush to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 15 rows, 10 columns and 30 nonzeros\n", + "Model fingerprint: 0x3240ea4a\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [6e+00, 1e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective -219.1400000\n", + "Presolve removed 7 rows and 2 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 8 rows, 8 columns, 19 nonzeros\n", + "Variable types: 0 continuous, 8 integer (8 binary)\n", + "\n", + "Root relaxation: objective -2.205650e+02, 5 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 infeasible 0 -219.14000 
-219.14000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: -219.14 \n", + "No other solutions better than -219.14\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective -2.191400000000e+02, best bound -2.191400000000e+02, gap 0.0000%\n", + "\n", + "User-callback calls 299, time in user-callback 0.00 sec\n" ] } ], "source": [ "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.stab import (\n", + " MaxWeightStableSetGenerator,\n", + " build_stab_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed to make example reproducible\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances with a fixed 10-node graph,\n", + "# 25% density and random weights in the [0, 100] interval.\n", + "data = MaxWeightStableSetGenerator(\n", + " w=uniform(loc=0.0, scale=100.0),\n", + " n=randint(low=10, high=11),\n", + " p=uniform(loc=0.25, scale=0.0),\n", + " fix_graph=True,\n", + ").generate(10)\n", + "\n", + "# Print the graph and weights for two instances\n", + "print(\"graph\", data[0].graph.edges)\n", + "print(\"weights[0]\", data[0].weights)\n", + "print(\"weights[1]\", data[1].weights)\n", + "print()\n", + "\n", + "# Load and optimize the first instance\n", + "model = build_stab_model_gurobipy(data[0])\n", + "model.optimize()" ] }, { "cell_type": "markdown", "id": "444d1092-fd83-4957-b691-a198d56ba066", "metadata": {}, "source": [ "## Traveling Salesman\n", + "\n", + "Given a list of cities and the distances between them, the **traveling salesman problem** asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes." ] }, { "cell_type": "markdown", "id": "da3ca69c", "metadata": {}, "source": [ "### Formulation\n", + "\n", + "Let $G=(V,E)$ be a simple undirected graph. For each edge $e \\in E$, let $d_e$ be its weight (or distance) and let $x_e$ be a binary decision variable which equals one if $e$ is included in the route. The problem is formulated as:" ] }, { "cell_type": "markdown", "id": "9cf296e9", "metadata": {}, "source": [ "$$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\;\n", + " & \\sum_{e \\in E} d_e x_e \\\\\n", + "\\text{such that} \\;\\;\\;\n", + " & \\sum_{e \\in \\delta(v)} x_e = 2 & \\forall v \\in V, \\\\\n", + " & \\sum_{e \\in \\delta(S)} x_e \\geq 2 & \\forall S \\subsetneq V, S \\neq \\emptyset, \\\\\n", + " & x_e \\in \\{0, 1\\} & \\forall e \\in E,\n", + "\\end{align*}\n", + "$$\n", + "where $\\delta(v)$ denotes the set of edges incident to vertex $v$, and $\\delta(S)$ denotes the set of edges that have one endpoint in $S$ and the other in $V \\setminus S$. Because of their exponential number, we enforce the second set of inequalities as lazy constraints." ] }, { "cell_type": "markdown", "id": "eba3dbe5", "metadata": {}, "source": [ "### Random instance generator\n", + "\n", + "The class [TravelingSalesmanGenerator][TravelingSalesmanGenerator] can be used to generate random instances of this problem. 
Initially, the class samples the user-provided probability distribution `n` to decide how many cities to generate. Then, for each city $i$, the class generates its geographical location $(x_i, y_i)$ by sampling the provided distributions `x` and `y`. The distance $d_{ij}$ between cities $i$ and $j$ is then set to\n", + "$$\n", + "\\gamma_{ij} \\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2},\n", + "$$\n", + "where $\\gamma$ is a random scaling factor sampled from the provided probability distribution `gamma`.\n", + "\n", + "If `fix_cities=True`, then the list of cities is kept the same for all generated instances. The $\\gamma$ values, however, and therefore also the distances, are still different. By default, all distances $d_{ij}$ are rounded to the nearest integer. If `round=False` is provided, this rounding will be disabled.\n", + "\n", + "[TravelingSalesmanGenerator]: ../../api/problems/#miplearn.problems.tsp.TravelingSalesmanGenerator" + ] + }, + { + "cell_type": "markdown", + "id": "61f16c56", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "9d0c56c6", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.958833448Z", + "start_time": "2023-11-07T16:29:48.898121017Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "distances[0]\n", + " [[ 0. 513. 762. 358. 325. 374. 932. 731. 391. 634.]\n", + " [ 513. 0. 726. 765. 163. 754. 409. 719. 446. 400.]\n", + " [ 762. 726. 0. 780. 756. 744. 656. 40. 383. 334.]\n", + " [ 358. 765. 780. 0. 549. 117. 925. 702. 422. 728.]\n", + " [ 325. 163. 756. 549. 0. 663. 526. 708. 377. 462.]\n", + " [ 374. 754. 744. 117. 663. 0. 1072. 802. 501. 853.]\n", + " [ 932. 409. 656. 925. 526. 1072. 0. 654. 603. 433.]\n", + " [ 731. 719. 40. 702. 708. 802. 654. 0. 381. 255.]\n", + " [ 391. 446. 383. 422. 377. 501. 603. 381. 0. 287.]\n", + " [ 634. 400. 334. 728. 462. 853. 433. 255. 287. 0.]]\n", + "distances[1]\n", + " [[ 0. 493. 900. 354. 323. 367. 841. 727. 444. 668.]\n", + " [ 493. 0. 690. 687. 175. 725. 368. 744. 398. 446.]\n", + " [ 900. 690. 0. 666. 728. 827. 736. 41. 371. 317.]\n", + " [ 354. 687. 666. 0. 570. 104. 1090. 712. 454. 648.]\n", + " [ 323. 175. 728. 570. 0. 655. 521. 650. 356. 469.]\n", + " [ 367. 725. 827. 104. 655. 0. 1146. 779. 476. 752.]\n", + " [ 841. 368. 736. 1090. 521. 1146. 0. 681. 565. 394.]\n", + " [ 727. 744. 41. 712. 650. 779. 681. 0. 374. 286.]\n", + " [ 444. 398. 371. 454. 356. 476. 565. 374. 0. 274.]\n", + " [ 668. 446. 317. 648. 469. 752. 394. 286. 274. 
0.]]\n", + "\n", + "Set parameter PreCrush to value 1\n", + "Set parameter LazyConstraints to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n", + "Model fingerprint: 0x719675e5\n", + "Variable types: 0 continuous, 45 integer (45 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [4e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Presolve time: 0.00s\n", + "Presolved: 10 rows, 45 columns, 90 nonzeros\n", + "Variable types: 0 continuous, 45 integer (45 binary)\n", + "\n", + "Root relaxation: objective 2.921000e+03, 17 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + "* 0 0 0 2921.0000000 2921.00000 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Lazy constraints: 3\n", + "\n", + "Explored 1 nodes (17 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 2921 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 2.921000000000e+03, best bound 2.921000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 106, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.tsp import (\n", + " TravelingSalesmanGenerator,\n", + " build_tsp_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed to make example reproducible\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances with a fixed ten cities in the 1000x1000 box\n", + "# and random distance scaling factors in the [0.90, 1.10] interval.\n", + "data = TravelingSalesmanGenerator(\n", + " n=randint(low=10, high=11),\n", + " x=uniform(loc=0.0, scale=1000.0),\n", + " y=uniform(loc=0.0, scale=1000.0),\n", + " gamma=uniform(loc=0.90, scale=0.20),\n", + " fix_cities=True,\n", + " round=True,\n", + ").generate(10)\n", + "\n", + "# Print distance matrices for the first two instances\n", + "print(\"distances[0]\\n\", data[0].distances)\n", + "print(\"distances[1]\\n\", data[1].distances)\n", + "print()\n", + "\n", + "# Load and optimize the first instance\n", + "model = build_tsp_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "26dfc157-11f4-4564-b368-95ee8200875e", + "metadata": {}, + "source": [ + "## Unit Commitment\n", + "\n", + "The **unit commitment problem** is a mixed-integer optimization problem which asks which power generation units should be turned on and off, at what time, and at what capacity, in order to meet the demand for electricity generation at the lowest cost. Numerous operational constraints are typically enforced, such as *ramping constraints*, which prevent generation units from changing power output levels too quickly from one time step to the next, and *minimum-up* and *minimum-down* constraints, which prevent units from switching on and off too frequently. The unit commitment problem is widely used in power systems planning and operations." 
+ ] + }, + { + "cell_type": "markdown", + "id": "7048d771", + "metadata": {}, + "source": [ + "\n", + "
\n", + "Note\n", + "\n", + "MIPLearn includes a simple formulation for the unit commitment problem, which enforces only minimum and maximum power production, as well as minimum-up and minimum-down constraints. The formulation does not enforce, for example, ramping trajectories, piecewise-linear cost curves, start-up costs or transmission and n-1 security constraints. For a more complete set of formulations, solution methods and realistic benchmark instances for the problem, see [UnitCommitment.jl](https://github.com/ANL-CEEESA/UnitCommitment.jl).\n", + "
\n", + "\n", + "### Formulation\n", + "\n", + "Let $T$ be the number of time steps, $G$ be the number of generation units, and let $D_t$ be the power demand (in MW) at time $t$. For each generating unit $g$, let $P^\\max_g$ and $P^\\min_g$ be the maximum and minimum amount of power the unit is able to produce when switched on; let $L_g$ and $l_g$ be the minimum up- and down-time for unit $g$; let $C^\\text{fixed}$ be the cost to keep unit $g$ on for one time step, regardless of its power output level; let $C^\\text{start}$ be the cost to switch unit $g$ on; and let $C^\\text{var}$ be the cost for generator $g$ to produce 1 MW of power. In this formulation, we assume linear production costs. For each generator $g$ and time $t$, let $x_{gt}$ be a binary variable which equals one if unit $g$ is on at time $t$, let $w_{gt}$ be a binary variable which equals one if unit $g$ switches from being off at time $t-1$ to being on at time $t$, and let $p_{gt}$ be a continuous variable which indicates the amount of power generated. The formulation is given by:" + ] + }, + { + "cell_type": "markdown", + "id": "bec5ee1c", + "metadata": {}, + "source": [ + "\n", + "$$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\;\n", + " & \\sum_{t=1}^T \\sum_{g=1}^G \\left(\n", + " x_{gt} C^\\text{fixed}_g\n", + " + w_{gt} C^\\text{start}_g\n", + " + p_{gt} C^\\text{var}_g\n", + " \\right)\n", + " \\\\\n", + "\\text{such that} \\;\\;\\;\n", + " & \\sum_{k=t-L_g+1}^t w_{gk} \\leq x_{gt}\n", + " & \\forall g\\; \\forall t=L_g-1,\\ldots,T-1 \\\\\n", + " & \\sum_{k=g-l_g+1}^T w_{gt} \\leq 1 - x_{g,t-l_g+1}\n", + " & \\forall g \\forall t=l_g-1,\\ldots,T-1 \\\\\n", + " & w_{gt} \\geq x_{gt} - x_{g,t-1}\n", + " & \\forall g \\forall t=1,\\ldots,T-1 \\\\\n", + " & \\sum_{g=1}^G p_{gt} \\geq D_t\n", + " & \\forall t \\\\\n", + " & P^\\text{min}_g x_{gt} \\leq p_{gt}\n", + " & \\forall g, t \\\\\n", + " & p_{gt} \\leq P^\\text{max}_g x_{gt}\n", + " & \\forall g, t \\\\\n", + " & x_{gt} \\in \\{0, 1\\}\n", + " & \\forall g, t.\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "4a1ffb4c", + "metadata": {}, + "source": [ + "\n", + "The first set of inequalities enforces minimum up-time constraints: if unit $g$ is down at time $t$, then it cannot start up during the previous $L_g$ time steps. The second set of inequalities enforces minimum down-time constraints, and is symmetrical to the previous one. The third set ensures that if unit $g$ starts up at time $t$, then the start up variable must be one. The fourth set ensures that demand is satisfied at each time period. The fifth and sixth sets enforce bounds to the quantity of power generated by each unit.\n", + "\n", + "
\n", + "References\n", + "\n", + "- *Bendotti, P., Fouilhoux, P. & Rottner, C.* **The min-up/min-down unit commitment polytope.** J Comb Optim 36, 1024-1058 (2018). https://doi.org/10.1007/s10878-018-0273-y\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "01bed9fc", + "metadata": {}, + "source": [ + "\n", + "### Random instance generator\n", + "\n", + "The class `UnitCommitmentGenerator` can be used to generate random instances of this problem.\n", + "\n", + "First, the user-provided probability distributions `n_units` and `n_periods` are sampled to determine the number of generating units and the number of time steps, respectively. Then, for each unit, the probabilities `max_power` and `min_power` are sampled to determine the unit's maximum and minimum power output. To make it easier to generate valid ranges, `min_power` is not specified as the absolute power level in MW, but rather as a multiplier of `max_power`; for example, if `max_power` samples to 100 and `min_power` samples to 0.5, then the unit's power range is set to `[50,100]`. Then, the distributions `cost_startup`, `cost_prod` and `cost_fixed` are sampled to determine the unit's startup, variable and fixed costs, while the distributions `min_uptime` and `min_downtime` are sampled to determine its minimum up/down-time.\n", + "\n", + "After parameters for the units have been generated, the class then generates a periodic demand curve, with a peak every 12 time steps, in the range $(0.4C, 0.8C)$, where $C$ is the sum of all units' maximum power output. Finally, all costs and demand values are perturbed by random scaling factors independently sampled from the distributions `cost_jitter` and `demand_jitter`, respectively.\n", + "\n", + "If `fix_units=True`, then the list of generators (with their respective parameters) is kept the same for all generated instances. If `cost_jitter` and `demand_jitter` are provided, the instances will still have slightly different costs and demands." + ] + }, + { + "cell_type": "markdown", + "id": "855b87b4", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "6217da7c", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:49.061613905Z", + "start_time": "2023-11-07T16:29:48.941857719Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "min_power[0] [117.79 245.85 271.85 207.7 81.38]\n", + "max_power[0] [218.54 477.82 379.4 319.4 120.21]\n", + "min_uptime[0] [7 6 3 5 7]\n", + "min_downtime[0] [7 3 5 6 2]\n", + "min_power[0] [117.79 245.85 271.85 207.7 81.38]\n", + "cost_startup[0] [3042.42 5247.56 4319.45 2912.29 6118.53]\n", + "cost_prod[0] [ 6.97 14.61 18.32 22.8 39.26]\n", + "cost_fixed[0] [199.67 514.23 592.41 46.45 607.54]\n", + "demand[0]\n", + " [ 905.06 915.41 1166.52 1212.29 1127.81 953.52 905.06 796.21 783.78\n", + " 866.23 768.62 899.59 905.06 946.23 1087.61 1004.24 1048.36 992.03\n", + " 905.06 750.82 691.48 606.15 658.5 809.95]\n", + "\n", + "min_power[1] [117.79 245.85 271.85 207.7 81.38]\n", + "max_power[1] [218.54 477.82 379.4 319.4 120.21]\n", + "min_uptime[1] [7 6 3 5 7]\n", + "min_downtime[1] [7 3 5 6 2]\n", + "min_power[1] [117.79 245.85 271.85 207.7 81.38]\n", + "cost_startup[1] [2458.08 6200.26 4585.74 2666.05 4783.34]\n", + "cost_prod[1] [ 6.31 13.33 20.42 24.37 46.86]\n", + "cost_fixed[1] [196.9 416.42 655.57 52.51 626.15]\n", + "demand[1]\n", + " [ 981.42 840.07 1095.59 1102.03 1088.41 932.29 863.67 848.56 761.33\n", + " 828.28 775.18 834.99 959.76 865.72 1193.52 1058.92 985.19 893.92\n", + " 962.16 781.88 723.15 639.04 602.4 787.02]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build 
v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 578 rows, 360 columns and 2128 nonzeros\n", + "Model fingerprint: 0x4dc1c661\n", + "Variable types: 120 continuous, 240 integer (240 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 5e+02]\n", + " Objective range [7e+00, 6e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+03]\n", + "Presolve removed 244 rows and 131 columns\n", + "Presolve time: 0.01s\n", + "Presolved: 334 rows, 229 columns, 842 nonzeros\n", + "Variable types: 116 continuous, 113 integer (113 binary)\n", + "Found heuristic solution: objective 440662.46430\n", + "Found heuristic solution: objective 429461.97680\n", + "Found heuristic solution: objective 374043.64040\n", + "\n", + "Root relaxation: objective 3.361348e+05, 142 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 336134.820 0 18 374043.640 336134.820 10.1% - 0s\n", + "H 0 0 368600.14450 336134.820 8.81% - 0s\n", + "H 0 0 364721.76610 336134.820 7.84% - 0s\n", + " 0 0 cutoff 0 364721.766 364721.766 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 3\n", + " Cover: 8\n", + " Implied bound: 29\n", + " Clique: 222\n", + " MIR: 7\n", + " Flow cover: 7\n", + " RLT: 1\n", + " Relax-and-lift: 7\n", + "\n", + "Explored 1 nodes (234 simplex iterations) in 0.02 seconds (0.02 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 5: 364722 368600 374044 ... 440662\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 3.647217661000e+05, best bound 3.647217661000e+05, gap 0.0000%\n", + "\n", + "User-callback calls 677, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.uc import UnitCommitmentGenerator, build_uc_model_gurobipy\n", + "\n", + "# Set random seed to make example reproducible\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate a random instance with 5 generators and 24 time steps\n", + "data = UnitCommitmentGenerator(\n", + " n_units=randint(low=5, high=6),\n", + " n_periods=randint(low=24, high=25),\n", + " max_power=uniform(loc=50, scale=450),\n", + " min_power=uniform(loc=0.5, scale=0.25),\n", + " cost_startup=uniform(loc=0, scale=10_000),\n", + " cost_prod=uniform(loc=0, scale=50),\n", + " cost_fixed=uniform(loc=0, scale=1_000),\n", + " min_uptime=randint(low=2, high=8),\n", + " min_downtime=randint(low=2, high=8),\n", + " cost_jitter=uniform(loc=0.75, scale=0.5),\n", + " demand_jitter=uniform(loc=0.9, scale=0.2),\n", + " fix_units=True,\n", + ").generate(10)\n", + "\n", + "# Print problem data for the two first instances\n", + "for i in range(2):\n", + " print(f\"min_power[{i}]\", data[i].min_power)\n", + " print(f\"max_power[{i}]\", data[i].max_power)\n", + " print(f\"min_uptime[{i}]\", data[i].min_uptime)\n", + " print(f\"min_downtime[{i}]\", data[i].min_downtime)\n", + " print(f\"min_power[{i}]\", data[i].min_power)\n", + " print(f\"cost_startup[{i}]\", data[i].cost_startup)\n", + " print(f\"cost_prod[{i}]\", data[i].cost_prod)\n", + " print(f\"cost_fixed[{i}]\", data[i].cost_fixed)\n", + " 
print(f\"demand[{i}]\\n\", data[i].demand)\n", + " print()\n", + "\n", + "# Load and optimize the first instance\n", + "model = build_uc_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "169293c7-33e1-4d28-8d39-9982776251d7", + "metadata": {}, + "source": [ + "## Vertex Cover\n", + "\n", + "**Minimum weight vertex cover** is a classical optimization problem in graph theory where the goal is to find the minimum-weight set of vertices that are connected to all of the edges in the graph. The problem generalizes one of Karp's 21 NP-complete problems and has applications in various fields, including bioinformatics and machine learning." + ] + }, + { + "cell_type": "markdown", + "id": "91f5781a", + "metadata": {}, + "source": [ + "\n", + "### Formulation\n", + "\n", + "Let $G=(V,E)$ be a simple graph. For each vertex $v \\in V$, let $w_g$ be its weight, and let $x_v$ be a binary decision variable which equals one if $v$ is included in the cover. The mixed-integer linear formulation for the problem is given by:" + ] + }, + { + "cell_type": "markdown", + "id": "544754cb", + "metadata": {}, + "source": [ + " $$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\;\n", + " & \\sum_{v \\in V} w_v \\\\\n", + "\\text{such that} \\;\\;\\;\n", + " & x_i + x_j \\ge 1 & \\forall \\{i, j\\} \\in E, \\\\\n", + " & x_{i,j} \\in \\{0, 1\\}\n", + " & \\forall \\{i,j\\} \\in E.\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "35c99166", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [MinWeightVertexCoverGenerator][MinWeightVertexCoverGenerator] can be used to generate random instances of this problem. The class accepts exactly the same parameters and behaves exactly in the same way as [MaxWeightStableSetGenerator][MaxWeightStableSetGenerator]. 
See the [stable set section](#Stable-Set) for more details.\n", + "\n", + "[MinWeightVertexCoverGenerator]: ../../api/problems/#module-miplearn.problems.vertexcover\n", + "[MaxWeightStableSetGenerator]: ../../api/problems/#miplearn.problems.stab.MaxWeightStableSetGenerator\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "5fff7afe-5b7a-4889-a502-66751ec979bf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:49.075657363Z", + "start_time": "2023-11-07T16:29:49.049561363Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "graph [(0, 2), (0, 4), (0, 8), (1, 2), (1, 3), (1, 5), (1, 6), (1, 9), (2, 5), (2, 9), (3, 6), (3, 7), (6, 9), (7, 8), (8, 9)]\n", + "weights[0] [37.45 95.07 73.2 59.87 15.6 15.6 5.81 86.62 60.11 70.81]\n", + "weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 15 rows, 10 columns and 30 nonzeros\n", + "Model fingerprint: 0x2d2d1390\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [6e+00, 1e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective 301.0000000\n", + "Presolve removed 7 rows and 2 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 8 rows, 8 columns, 19 nonzeros\n", + "Variable types: 0 continuous, 8 integer (8 binary)\n", + "\n", + "Root relaxation: objective 2.995750e+02, 8 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 infeasible 0 301.00000 301.00000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (8 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 301 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 3.010000000000e+02, best bound 3.010000000000e+02, gap 0.0000%\n", + "\n", + "User-callback calls 326, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.vertexcover import (\n", + " MinWeightVertexCoverGenerator,\n", + " build_vertexcover_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed to make example reproducible\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances with a fixed 10-node graph,\n", + "# 25% density and random weights in the [0, 100] interval.\n", + "data = MinWeightVertexCoverGenerator(\n", + " w=uniform(loc=0.0, scale=100.0),\n", + " n=randint(low=10, high=11),\n", + " p=uniform(loc=0.25, scale=0.0),\n", + " fix_graph=True,\n", + ").generate(10)\n", + "\n", + "# Print the graph and weights for two instances\n", + "print(\"graph\", data[0].graph.edges)\n", + "print(\"weights[0]\", data[0].weights)\n", + "print(\"weights[1]\", data[1].weights)\n", + "print()\n", + "\n", + "# Load and optimize the first instance\n", + "model = build_vertexcover_model_gurobipy(data[0])\n", + "model.optimize()" + ] + } + ], + "metadata": { + 
"kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_sources/guide/solvers.ipynb.txt b/0.4/_sources/guide/solvers.ipynb.txt new file mode 100644 index 00000000..c4ee9bc9 --- /dev/null +++ b/0.4/_sources/guide/solvers.ipynb.txt @@ -0,0 +1,251 @@ +{ + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "id": "9ec1907b-db93-4840-9439-c9005902b968", + "metadata": {}, + "source": [ + "# Learning Solver\n", + "\n", + "On previous pages, we discussed various components of the MIPLearn framework, including training data collectors, feature extractors, and individual machine learning components. In this page, we introduce **LearningSolver**, the main class of the framework which integrates all the aforementioned components into a cohesive whole. Using **LearningSolver** involves three steps: (i) configuring the solver; (ii) training the ML components; and (iii) solving new MIP instances. In the following, we describe each of these steps, then conclude with a complete runnable example.\n", + "\n", + "### Configuring the solver\n", + "\n", + "**LearningSolver** is composed by multiple individual machine learning components, each targeting a different part of the solution process, or implementing a different machine learning strategy. This architecture allows strategies to be easily enabled, disabled or customized, making the framework flexible. By default, no components are provided and **LearningSolver** is equivalent to a traditional MIP solver. To specify additional components, the `components` constructor argument may be used:\n", + "\n", + "```python\n", + "solver = LearningSolver(\n", + " components=[\n", + " comp1,\n", + " comp2,\n", + " comp3,\n", + " ]\n", + ")\n", + "```\n", + "\n", + "In this example, three components `comp1`, `comp2` and `comp3` are provided. The strategies implemented by these components are applied sequentially when solving the problem. For example, `comp1` and `comp2` could fix a subset of decision variables, while `comp3` constructs a warm start for the remaining problem.\n", + "\n", + "### Training and solving new instances\n", + "\n", + "Once a solver is configured, its ML components need to be trained. This can be achieved by the `solver.fit` method, as illustrated below. The method accepts a list of HDF5 files and trains each individual component sequentially. Once the solver is trained, new instances can be solved using `solver.optimize`. 
The method returns a dictionary of statistics collected by each component, such as the number of variables fixed.\n", + "\n", + "```python\n", + "# Build instances\n", + "train_data = ...\n", + "test_data = ...\n", + "\n", + "# Collect training data\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_model)\n", + "\n", + "# Build solver\n", + "solver = LearningSolver(...)\n", + "\n", + "# Train components\n", + "solver.fit(train_data)\n", + "\n", + "# Solve a new test instance\n", + "stats = solver.optimize(test_data[0], build_model)\n", + "\n", + "```\n", + "\n", + "### Complete example\n", + "\n", + "In the example below, we illustrate the usage of **LearningSolver** by building instances of the Traveling Salesman Problem, collecting training data, training the ML components, then solving a new instance." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "92b09b98", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n", + "Model fingerprint: 0x6ddcd141\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [4e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Presolve time: 0.00s\n", + "Presolved: 10 rows, 45 columns, 90 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.3600000e+02 1.700000e+01 0.000000e+00 0s\n", + " 15 2.7610000e+03 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 15 iterations and 0.00 seconds (0.00 work units)\n", + "Optimal objective 2.761000000e+03\n", + "\n", + "User-callback calls 56, time in user-callback 0.00 sec\n", + "Set parameter PreCrush to value 1\n", + "Set parameter LazyConstraints to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n", + "Model fingerprint: 0x74ca3d0a\n", + "Variable types: 0 continuous, 45 integer (45 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [4e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "\n", + "User MIP start produced solution with objective 2796 (0.00s)\n", + "Loaded user MIP start with objective 2796\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 10 rows, 45 columns, 90 nonzeros\n", + "Variable types: 0 continuous, 45 integer (45 binary)\n", + "\n", + "Root relaxation: objective 2.761000e+03, 14 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 2761.00000 0 - 2796.00000 2761.00000 1.25% - 0s\n", + " 0 0 cutoff 0 2796.00000 2796.00000 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Lazy constraints: 3\n", + "\n", + "Explored 1 nodes (16 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 2796 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 110, time in user-callback 0.00 sec\n" + ] + }, + { + "data": { + "text/plain": [ + "{'WS: Count': 1, 'WS: Number of variables set': 41.0}" + ] + }, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import random\n", + "\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from sklearn.linear_model import LogisticRegression\n", + "\n", + "from miplearn.classifiers.minprob import MinProbabilityClassifier\n", + "from miplearn.classifiers.singleclass import SingleClassFix\n", + "from miplearn.collectors.basic import BasicCollector\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n", + "from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n", + "from miplearn.io import write_pkl_gz\n", + "from miplearn.problems.tsp import (\n", + " TravelingSalesmanGenerator,\n", + " build_tsp_model_gurobipy,\n", + ")\n", + "from miplearn.solvers.learning import LearningSolver\n", + "\n", + "# Set random seed to make example reproducible.\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate a few instances of the traveling salesman problem.\n", + "data = TravelingSalesmanGenerator(\n", + " n=randint(low=10, high=11),\n", + " x=uniform(loc=0.0, scale=1000.0),\n", + " y=uniform(loc=0.0, scale=1000.0),\n", + " gamma=uniform(loc=0.90, scale=0.20),\n", + " fix_cities=True,\n", + " round=True,\n", + 
").generate(50)\n", + "\n", + "# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n", + "all_data = write_pkl_gz(data, \"data/tsp\")\n", + "\n", + "# Split train/test data\n", + "train_data = all_data[:40]\n", + "test_data = all_data[40:]\n", + "\n", + "# Collect training data\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_tsp_model_gurobipy, n_jobs=4)\n", + "\n", + "# Build learning solver\n", + "solver = LearningSolver(\n", + " components=[\n", + " IndependentVarsPrimalComponent(\n", + " base_clf=SingleClassFix(\n", + " MinProbabilityClassifier(\n", + " base_clf=LogisticRegression(),\n", + " thresholds=[0.95, 0.95],\n", + " ),\n", + " ),\n", + " extractor=AlvLouWeh2017Extractor(),\n", + " action=SetWarmStart(),\n", + " )\n", + " ]\n", + ")\n", + "\n", + "# Train ML models\n", + "solver.fit(train_data)\n", + "\n", + "# Solve a test instance\n", + "solver.optimize(test_data[0], build_tsp_model_gurobipy)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e27d2cbd-5341-461d-bbc1-8131aee8d949", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_sources/index.rst.txt b/0.4/_sources/index.rst.txt new file mode 100644 index 00000000..ebecd362 --- /dev/null +++ b/0.4/_sources/index.rst.txt @@ -0,0 +1,68 @@ +MIPLearn +======== +**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Linear Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS. + +Unlike pure ML methods, MIPLearn is not only able to find high-quality solutions to discrete optimization problems, but it can also prove the optimality and feasibility of these solutions. Unlike conventional MIP solvers, MIPLearn can take full advantage of very specific observations that happen to be true in a particular family of instances (such as the observation that a particular constraint is typically redundant, or that a particular variable typically assumes a certain value). For certain classes of problems, this approach may provide significant performance benefits. + + +Contents +-------- + +.. toctree:: + :maxdepth: 1 + :caption: Tutorials + :numbered: 2 + + tutorials/getting-started-pyomo + tutorials/getting-started-gurobipy + tutorials/getting-started-jump + tutorials/cuts-gurobipy + +.. toctree:: + :maxdepth: 2 + :caption: User Guide + :numbered: 2 + + guide/problems + guide/collectors + guide/features + guide/primal + guide/solvers + +.. toctree:: + :maxdepth: 1 + :caption: Python API Reference + :numbered: 2 + + api/problems + api/collectors + api/components + api/solvers + api/helpers + + +Authors +------- + +- **Alinson S. Xavier** (Argonne National Laboratory) +- **Feng Qiu** (Argonne National Laboratory) +- **Xiaoyi Gu** (Georgia Institute of Technology) +- **Berkay Becu** (Georgia Institute of Technology) +- **Santanu S. 
Dey** (Georgia Institute of Technology) + + +Acknowledgments +--------------- +* Based upon work supported by **Laboratory Directed Research and Development** (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy. +* Based upon work supported by the **U.S. Department of Energy Advanced Grid Modeling Program**. + +Citing MIPLearn +--------------- + +If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows: + +* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.3)*. Zenodo (2023). DOI: https://doi.org/10.5281/zenodo.4287567 + +If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed: + +* **Alinson S. Xavier, Feng Qiu, Shabbir Ahmed.** *Learning to Solve Large-Scale Unit Commitment Problems.* INFORMS Journal on Computing (2020). DOI: https://doi.org/10.1287/ijoc.2020.0976 diff --git a/0.4/_sources/tutorials/cuts-gurobipy.ipynb.txt b/0.4/_sources/tutorials/cuts-gurobipy.ipynb.txt new file mode 100644 index 00000000..ffdc13db --- /dev/null +++ b/0.4/_sources/tutorials/cuts-gurobipy.ipynb.txt @@ -0,0 +1,541 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "b4bd8bd6-3ce9-4932-852f-f98a44120a3e", + "metadata": {}, + "source": [ + "# User cuts and lazy constraints\n", + "\n", + "User cuts and lazy constraints are two advanced mixed-integer programming techniques that can accelerate solver performance. User cuts are additional constraints, derived from the constraints already in the model, that can tighten the feasible region and eliminate fractional solutions, thus reducing the size of the branch-and-bound tree. Lazy constraints, on the other hand, are constraints that are potentially part of the problem formulation but are omitted from the initial model to reduce its size; these constraints are added to the formulation only once the solver finds a solution that violates them. While both techniques have been successful, significant computational effort may still be required to generate strong user cuts and to identify violated lazy constraints, which can reduce their effectiveness.\n", + "\n", + "MIPLearn is able to predict which user cuts and which lazy constraints to enforce at the beginning of the optimization process, using machine learning. In this tutorial, we will use the framework to predict subtour elimination constraints for the **traveling salesman problem** using Gurobipy. We assume that MIPLearn has already been correctly installed.\n", + "\n", + "
\n", + "\n", + "Solver Compatibility\n", + "\n", + "User cuts and lazy constraints are also supported in the Python/Pyomo and Julia/JuMP versions of the package. See the source code of build_tsp_model_pyomo and build_tsp_model_jump for more details. Note, however, the following limitations:\n", + "\n", + "- Python/Pyomo: Only `gurobi_persistent` is currently supported. PRs implementing callbacks for other persistent solvers are welcome.\n", + "- Julia/JuMP: Only solvers supporting solver-independent callbacks are supported. As of JuMP 1.19, this includes Gurobi, CPLEX, XPRESS, SCIP and GLPK. Note that HiGHS and Cbc are not supported. As newer versions of JuMP implement further callback support, MIPLearn should become automatically compatible with these solvers.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "72229e1f-cbd8-43f0-82ee-17d6ec9c3b7d", + "metadata": {}, + "source": [ + "## Modeling the traveling salesman problem\n", + "\n", + "Given a list of cities and the distances between them, the **traveling salesman problem (TSP)** asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes.\n", + "\n", + "To describe an instance of TSP, we need to specify the number of cities $n$, and an $n \\times n$ matrix of distances. The class `TravelingSalesmanData`, in the `miplearn.problems.tsp` package, can hold this data:" + ] + }, + { + "cell_type": "markdown", + "id": "4598a1bc-55b6-48cc-a050-2262786c203a", + "metadata": {}, + "source": [ + "```python\n", + "@dataclass\r\n", + "class TravelingSalesmanData:\r\n", + " n_cities: int\r\n", + " distances: np.ndarray\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "3a43cc12-1207-4247-bdb2-69a6a2910738", + "metadata": {}, + "source": [ + "MIPLearn also provides `TravelingSalesmandGenerator`, a random generator for TSP instances, and `build_tsp_model_gurobipy`, a function which converts `TravelingSalesmanData` into an actual gurobipy optimization model, and which uses lazy constraints to enforce subtour elimination.\n", + "\n", + "The example below is a simplified and annotated version of `build_tsp_model_gurobipy`, illustrating the usage of callbacks with MIPLearn. Compared the the previous tutorial examples, note that, in addition to defining the variables, objective function and constraints of our problem, we also define two callback functions `lazy_separate` and `lazy_enforce`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "e4712a85-0327-439c-8889-933e1ff714e7", + "metadata": {}, + "outputs": [], + "source": [ + "import gurobipy as gp\n", + "from gurobipy import quicksum, GRB, tuplelist\n", + "from miplearn.solvers.gurobi import GurobiModel\n", + "import networkx as nx\n", + "import numpy as np\n", + "from miplearn.problems.tsp import (\n", + " TravelingSalesmanData,\n", + " TravelingSalesmanGenerator,\n", + ")\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.io import write_pkl_gz, read_pkl_gz\n", + "from miplearn.collectors.basic import BasicCollector\n", + "from miplearn.solvers.learning import LearningSolver\n", + "from miplearn.components.lazy.mem import MemorizingLazyComponent\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "from sklearn.neighbors import KNeighborsClassifier\n", + "\n", + "# Set up random seed to make example more reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Set up Python logging\n", + "import logging\n", + "\n", + "logging.basicConfig(level=logging.WARNING)\n", + "\n", + "\n", + "def build_tsp_model_gurobipy_simplified(data):\n", + " # Read data from file if a filename is provided\n", + " if isinstance(data, str):\n", + " data = read_pkl_gz(data)\n", + "\n", + " # Create empty gurobipy model\n", + " model = gp.Model()\n", + "\n", + " # Create set of edges between every pair of cities, for convenience\n", + " edges = tuplelist(\n", + " (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)\n", + " )\n", + "\n", + " # Add binary variable x[e] for each edge e\n", + " x = model.addVars(edges, vtype=GRB.BINARY, name=\"x\")\n", + "\n", + " # Add objective function\n", + " model.setObjective(quicksum(x[(i, j)] * data.distances[i, j] for (i, j) in edges))\n", + "\n", + " # Add constraint: must choose two edges adjacent to each city\n", + " model.addConstrs(\n", + " (\n", + " quicksum(x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)\n", + " == 2\n", + " for i in range(data.n_cities)\n", + " ),\n", + " name=\"eq_degree\",\n", + " )\n", + "\n", + " def lazy_separate(m: GurobiModel):\n", + " \"\"\"\n", + " Callback function that finds subtours in the current solution.\n", + " \"\"\"\n", + " # Query current value of the x variables\n", + " x_val = m.inner.cbGetSolution(x)\n", + "\n", + " # Initialize empty set of violations\n", + " violations = []\n", + "\n", + " # Build set of edges we have currently selected\n", + " selected_edges = [e for e in edges if x_val[e] > 0.5]\n", + "\n", + " # Build a graph containing the selected edges, using networkx\n", + " graph = nx.Graph()\n", + " graph.add_edges_from(selected_edges)\n", + "\n", + " # For each component of the graph\n", + " for component in list(nx.connected_components(graph)):\n", + "\n", + " # If the component is not the entire graph, we found a\n", + " # subtour. 
Add the edge cut to the list of violations.\n", + " if len(component) < data.n_cities:\n", + " cut_edges = [\n", + " [e[0], e[1]]\n", + " for e in edges\n", + " if (e[0] in component and e[1] not in component)\n", + " or (e[0] not in component and e[1] in component)\n", + " ]\n", + " violations.append(cut_edges)\n", + "\n", + " # Return the list of violations\n", + " return violations\n", + "\n", + " def lazy_enforce(m: GurobiModel, violations) -> None:\n", + " \"\"\"\n", + " Callback function that, given a list of subtours, adds lazy\n", + " constraints to remove them from the feasible region.\n", + " \"\"\"\n", + " print(f\"Enforcing {len(violations)} subtour elimination constraints\")\n", + " for violation in violations:\n", + " m.add_constr(quicksum(x[e[0], e[1]] for e in violation) >= 2)\n", + "\n", + " return GurobiModel(\n", + " model,\n", + " lazy_separate=lazy_separate,\n", + " lazy_enforce=lazy_enforce,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "id": "58875042-d6ac-4f93-b3cc-9a5822b11dad", + "metadata": {}, + "source": [ + "The `lazy_separate` function starts by querying the current fractional solution value through `m.inner.cbGetSolution` (recall that `m.inner` is a regular gurobipy model), then finds the set of violated lazy constraints. Unlike a regular lazy constraint solver callback, note that `lazy_separate` does not add the violated constraints to the model; it simply returns a list of objects that uniquely identifies the set of lazy constraints that should be generated. Enforcing the constraints is the responsbility of the second callback function, `lazy_enforce`. This function takes as input the model and the list of violations found by `lazy_separate`, converts them into actual constraints, and adds them to the model through `m.add_constr`.\n", + "\n", + "During training data generation, MIPLearn calls `lazy_separate` and `lazy_enforce` in sequence, inside a regular solver callback. However, once the machine learning models are trained, MIPLearn calls `lazy_enforce` directly, before the optimization process starts, with a list of **predicted** violations, as we will see in the example below." + ] + }, + { + "cell_type": "markdown", + "id": "5839728e-406c-4be2-ba81-83f2b873d4b2", + "metadata": {}, + "source": [ + "
\n", + "\n", + "Constraint Representation\n", + "\n", + "How should user cuts and lazy constraints be represented is a decision that the user can make; MIPLearn is representation agnostic. The objects returned by `lazy_separate`, however, are serialized as JSON and stored in the HDF5 training data files. Therefore, it is recommended to use only simple objects, such as lists, tuples and dictionaries.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "847ae32e-fad7-406a-8797-0d79065a07fd", + "metadata": {}, + "source": [ + "## Generating training data\n", + "\n", + "To test the callback defined above, we generate a small set of TSP instances, using the provided random instance generator. As in the previous tutorial, we generate some test instances and some training instances, then solve them using `BasicCollector`. Input problem data is stored in `tsp/train/00000.pkl.gz, ...`, whereas solver training data (including list of required lazy constraints) is stored in `tsp/train/00000.h5, ...`." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "eb63154a-1fa6-4eac-aa46-6838b9c201f6", + "metadata": {}, + "outputs": [], + "source": [ + "# Configure generator to produce instances with 50 cities located\n", + "# in the 1000 x 1000 square, and with slightly perturbed distances.\n", + "gen = TravelingSalesmanGenerator(\n", + " x=uniform(loc=0.0, scale=1000.0),\n", + " y=uniform(loc=0.0, scale=1000.0),\n", + " n=randint(low=50, high=51),\n", + " gamma=uniform(loc=1.0, scale=0.25),\n", + " fix_cities=True,\n", + " round=True,\n", + ")\n", + "\n", + "# Generate 500 instances and store input data file to .pkl.gz files\n", + "data = gen.generate(500)\n", + "train_data = write_pkl_gz(data[0:450], \"tsp/train\")\n", + "test_data = write_pkl_gz(data[450:500], \"tsp/test\")\n", + "\n", + "# Solve the training instances in parallel, collecting the required lazy\n", + "# constraints, in addition to other information, such as optimal solution.\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_tsp_model_gurobipy_simplified, n_jobs=10)" + ] + }, + { + "cell_type": "markdown", + "id": "6903c26c-dbe0-4a2e-bced-fdbf93513dde", + "metadata": {}, + "source": [ + "## Training and solving new instances" + ] + }, + { + "cell_type": "markdown", + "id": "57cd724a-2d27-4698-a1e6-9ab8345ef31f", + "metadata": {}, + "source": [ + "After producing the training dataset, we can train the machine learning models to predict which lazy constraints are necessary. In this tutorial, we use the following ML strategy: given a new instance, find the 50 most similar ones in the training dataset and verify how often each lazy constraint was required. If a lazy constraint was required for the majority of the 50 most-similar instances, enforce it ahead-of-time for the current instance. To measure instance similarity, use the objective function only. This ML strategy can be implemented using `MemorizingLazyComponent` with `H5FieldsExtractor` and `KNeighborsClassifier`, as shown below." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "43779e3d-4174-4189-bc75-9f564910e212", + "metadata": {}, + "outputs": [], + "source": [ + "solver = LearningSolver(\n", + " components=[\n", + " MemorizingLazyComponent(\n", + " extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n", + " clf=KNeighborsClassifier(n_neighbors=100),\n", + " ),\n", + " ],\n", + ")\n", + "solver.fit(train_data)" + ] + }, + { + "cell_type": "markdown", + "id": "12480712-9d3d-4cbc-a6d7-d6c1e2f950f4", + "metadata": {}, + "source": [ + "Next, we solve one of the test instances using the trained solver. In the run below, we can see that MIPLearn adds many lazy constraints ahead-of-time, before the optimization starts. During the optimization process itself, some additional lazy constraints are required, but very few." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "23f904ad-f1a8-4b5a-81ae-c0b9e813a4b2", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Set parameter Threads to value 1\n", + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n", + "\n", + "Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n", + "Model fingerprint: 0x04d7bec1\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [1e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Presolve time: 0.00s\n", + "Presolved: 50 rows, 1225 columns, 2450 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. Time\n", + " 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n", + " 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 66 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 5.588000000e+03\n", + "\n", + "User-callback calls 107, time in user-callback 0.00 sec\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:miplearn.components.cuts.mem:Predicting violated lazy constraints...\n", + "INFO:miplearn.components.lazy.mem:Enforcing 19 constraints ahead-of-time...\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Enforcing 19 subtour elimination constraints\n", + "Set parameter PreCrush to value 1\n", + "Set parameter LazyConstraints to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n", + "\n", + "Optimize a model with 69 rows, 1225 columns and 6091 nonzeros\n", + "Model fingerprint: 0x09bd34d6\n", + "Variable types: 0 continuous, 1225 integer (1225 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [1e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Found heuristic solution: objective 29853.000000\n", + "Presolve time: 0.00s\n", + "Presolved: 69 rows, 1225 columns, 6091 nonzeros\n", + "Variable types: 0 continuous, 1225 integer (1225 binary)\n", + "\n", + "Root relaxation: objective 6.139000e+03, 93 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 6139.00000 0 6 29853.0000 6139.00000 79.4% - 0s\n", + "H 0 0 6390.0000000 6139.00000 3.93% - 0s\n", + " 0 0 6165.50000 0 10 6390.00000 6165.50000 3.51% - 0s\n", + "Enforcing 3 subtour elimination constraints\n", + " 0 0 6165.50000 0 6 6390.00000 6165.50000 3.51% - 0s\n", + " 0 0 6198.50000 0 16 6390.00000 6198.50000 3.00% - 0s\n", + "* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 11\n", + " MIR: 1\n", + " Zero half: 4\n", + " Lazy constraints: 3\n", + "\n", + "Explored 1 nodes (222 simplex iterations) in 0.03 seconds (0.02 work units)\n", + "Thread count was 1 (of 20 available processors)\n", + "\n", + "Solution count 3: 6219 6390 29853 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", 
+ "Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 141, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "# Increase log verbosity, so that we can see what is MIPLearn doing\n", + "logging.getLogger(\"miplearn\").setLevel(logging.INFO)\n", + "\n", + "# Solve a new test instance\n", + "solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);" + ] + }, + { + "cell_type": "markdown", + "id": "79cc3e61-ee2b-4f18-82cb-373d55d67de6", + "metadata": {}, + "source": [ + "Finally, we solve the same instance, but using a regular solver, without ML prediction. We can see that a much larger number of lazy constraints are added during the optimization process itself. Additionally, the solver requires a larger number of iterations to find the optimal solution. There is not a significant difference in running time because of the small size of these instances." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "a015c51c-091a-43b6-b761-9f3577fc083e", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n", + "\n", + "Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n", + "Model fingerprint: 0x04d7bec1\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [1e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Presolve time: 0.00s\n", + "Presolved: 50 rows, 1225 columns, 2450 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n", + " 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 66 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 5.588000000e+03\n", + "\n", + "User-callback calls 107, time in user-callback 0.00 sec\n", + "Set parameter PreCrush to value 1\n", + "Set parameter LazyConstraints to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n", + "\n", + "Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n", + "Model fingerprint: 0x77a94572\n", + "Variable types: 0 continuous, 1225 integer (1225 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [1e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Found heuristic solution: objective 29695.000000\n", + "Presolve time: 0.00s\n", + "Presolved: 50 rows, 1225 columns, 2450 nonzeros\n", + "Variable types: 0 continuous, 1225 integer (1225 binary)\n", + "\n", + "Root relaxation: objective 5.588000e+03, 68 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 5588.00000 0 12 29695.0000 5588.00000 81.2% - 0s\n", + "Enforcing 9 subtour elimination constraints\n", + "Enforcing 11 subtour elimination constraints\n", + "H 0 0 27241.000000 5588.00000 79.5% - 0s\n", + " 0 0 5898.00000 0 8 27241.0000 5898.00000 78.3% - 0s\n", + "Enforcing 4 subtour elimination constraints\n", + "Enforcing 3 subtour elimination constraints\n", + " 0 0 6066.00000 0 - 27241.0000 6066.00000 77.7% - 0s\n", + "Enforcing 2 subtour elimination constraints\n", + " 0 0 6128.00000 0 - 27241.0000 6128.00000 77.5% - 0s\n", + " 0 0 6139.00000 0 6 27241.0000 6139.00000 77.5% - 0s\n", + "H 0 0 6368.0000000 6139.00000 3.60% - 0s\n", + " 0 0 6154.75000 0 15 6368.00000 6154.75000 3.35% - 0s\n", + "Enforcing 2 subtour elimination constraints\n", + " 0 0 6154.75000 0 6 6368.00000 6154.75000 3.35% - 0s\n", + " 0 0 6165.75000 0 11 6368.00000 6165.75000 3.18% - 0s\n", + "Enforcing 3 subtour elimination constraints\n", + " 0 0 6204.00000 0 6 6368.00000 6204.00000 2.58% - 0s\n", + "* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 5\n", + " MIR: 1\n", + " Zero half: 4\n", + " Lazy constraints: 4\n", + "\n", + "Explored 1 nodes (224 simplex iterations) in 0.10 seconds (0.03 work units)\n", + "Thread count was 1 (of 20 available processors)\n", + "\n", + "Solution count 4: 6219 6368 27241 29695 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 170, time in user-callback 0.01 sec\n" + ] + } + ], + "source": [ + "solver = LearningSolver(components=[]) # empty set of ML components\n", + "solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);" + ] + }, + { + "cell_type": "markdown", + "id": "432c99b2-67fe-409b-8224-ccef91de96d1", + "metadata": {}, + "source": [ + "## Learning user cuts\n", + "\n", + "The example above focused on lazy constraints. 
To enforce user cuts instead, the procedure is very similar, with the following changes:\n", + "\n", + "- Instead of `lazy_separate` and `lazy_enforce`, use `cuts_separate` and `cuts_enforce`\n", + "- Instead of `m.inner.cbGetSolution`, use `m.inner.cbGetNodeRel`\n", + "\n", + "For a complete example, see `build_stab_model_gurobipy`, `build_stab_model_pyomo` and `build_stab_model_jump`, which solve the maximum-weight stable set problem using user cut callbacks." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e6cb694d-8c43-410f-9a13-01bf9e0763b7", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_sources/tutorials/getting-started-gurobipy.ipynb.txt b/0.4/_sources/tutorials/getting-started-gurobipy.ipynb.txt new file mode 100644 index 00000000..110e3f43 --- /dev/null +++ b/0.4/_sources/tutorials/getting-started-gurobipy.ipynb.txt @@ -0,0 +1,837 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "6b8983b1", + "metadata": { + "tags": [] + }, + "source": [ + "# Getting started (Gurobipy)\n", + "\n", + "## Introduction\n", + "\n", + "**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n", + "\n", + "1. Install the Python/Gurobipy version of MIPLearn\n", + "2. Model a simple optimization problem using Gurobipy\n", + "3. Generate training data and train the ML models\n", + "4. Use the ML models together with Gurobi to solve new instances\n", + "\n", + "
\n", + "Note\n", + " \n", + "The Python/Gurobipy version of MIPLearn is only compatible with the Gurobi Optimizer. For broader solver compatibility, see the Python/Pyomo and Julia/JuMP versions of the package.\n", + "
\n", + "\n", + "
\n", + "Warning\n", + " \n", + "MIPLearn is still in early development stage. If run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n", + " \n", + "
\n" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "02f0a927", + "metadata": {}, + "source": [ + "## Installation\n", + "\n", + "MIPLearn is available in two versions:\n", + "\n", + "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", + "- Julia version, compatible with the JuMP modeling language.\n", + "\n", + "In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n", + "\n", + "```\n", + "$ pip install MIPLearn==0.3\n", + "```\n", + "\n", + "In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems.\n", + "\n", + "```\n", + "$ pip install 'gurobipy>=10,<10.1'\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "a14e4550", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Note\n", + " \n", + "In the code above, we install specific version of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Python projects.\n", + " \n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "16b86823", + "metadata": {}, + "source": [ + "## Modeling a simple optimization problem\n", + "\n", + "To illustrate how can MIPLearn be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n", + "\n", + "Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power should each generator produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ to $p^\\text{max}_i$ megawatts of power, and it costs the company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand $d$ (in megawatts).\n", + "\n", + "This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power does $g_i$ produce. The problem is then given by:" + ] + }, + { + "cell_type": "markdown", + "id": "f12c3702", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align}\n", + "\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n", + "\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n", + "& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n", + "& \\sum_{i=1}^n y_i = d \\\\\n", + "& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n", + "& y_i \\geq 0 & i=1,\\ldots,n\n", + "\\end{align}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "be3989ed", + "metadata": {}, + "source": [ + "
\n", + "\n", + "Note\n", + "\n", + "We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "a5fd33f6", + "metadata": {}, + "source": [ + "Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Python and Pyomo. We start by defining a data class `UnitCommitmentData`, which holds all the input data." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "22a67170-10b4-43d3-8708-014d91141e73", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:18:25.442346786Z", + "start_time": "2023-06-06T20:18:25.329017476Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "from dataclasses import dataclass\n", + "from typing import List\n", + "\n", + "import numpy as np\n", + "\n", + "\n", + "@dataclass\n", + "class UnitCommitmentData:\n", + " demand: float\n", + " pmin: List[float]\n", + " pmax: List[float]\n", + " cfix: List[float]\n", + " cvar: List[float]" + ] + }, + { + "cell_type": "markdown", + "id": "29f55efa-0751-465a-9b0a-a821d46a3d40", + "metadata": {}, + "source": [ + "Next, we write a `build_uc_model` function, which converts the input data into a concrete Pyomo model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a compressed pickle file containing this data." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "2f67032f-0d74-4317-b45c-19da0ec859e9", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:48:05.953902842Z", + "start_time": "2023-06-06T20:48:05.909747925Z" + } + }, + "outputs": [], + "source": [ + "import gurobipy as gp\n", + "from gurobipy import GRB, quicksum\n", + "from typing import Union\n", + "from miplearn.io import read_pkl_gz\n", + "from miplearn.solvers.gurobi import GurobiModel\n", + "\n", + "\n", + "def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:\n", + " if isinstance(data, str):\n", + " data = read_pkl_gz(data)\n", + "\n", + " model = gp.Model()\n", + " n = len(data.pmin)\n", + " x = model._x = model.addVars(n, vtype=GRB.BINARY, name=\"x\")\n", + " y = model._y = model.addVars(n, name=\"y\")\n", + " model.setObjective(\n", + " quicksum(data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n))\n", + " )\n", + " model.addConstrs(y[i] <= data.pmax[i] * x[i] for i in range(n))\n", + " model.addConstrs(y[i] >= data.pmin[i] * x[i] for i in range(n))\n", + " model.addConstr(quicksum(y[i] for i in range(n)) == data.demand)\n", + " return GurobiModel(model)" + ] + }, + { + "cell_type": "markdown", + "id": "c22714a3", + "metadata": {}, + "source": [ + "At this point, we can already use Pyomo and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. 
To illustrate this, let us solve a small instance with three generators:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "2a896f47", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:14.266758244Z", + "start_time": "2023-06-06T20:49:14.223514806Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", + "Model fingerprint: 0x58dfdd53\n", + "Variable types: 3 continuous, 3 integer (3 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 7e+01]\n", + " Objective range [2e+00, 7e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+02, 1e+02]\n", + "Presolve removed 2 rows and 1 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 5 rows, 5 columns, 13 nonzeros\n", + "Variable types: 0 continuous, 5 integer (3 binary)\n", + "Found heuristic solution: objective 1400.0000000\n", + "\n", + "Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n", + " 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n", + "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 2: 1320 1400 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", + "obj = 1320.0\n", + "x = [-0.0, 1.0, 1.0]\n", + "y = [0.0, 60.0, 40.0]\n" + ] + } + ], + "source": [ + "model = build_uc_model(\n", + " UnitCommitmentData(\n", + " demand=100.0,\n", + " pmin=[10, 20, 30],\n", + " pmax=[50, 60, 70],\n", + " cfix=[700, 600, 500],\n", + " cvar=[1.5, 2.0, 2.5],\n", + " )\n", + ")\n", + "\n", + "model.optimize()\n", + "print(\"obj =\", model.inner.objVal)\n", + "print(\"x =\", [model.inner._x[i].x for i in range(3)])\n", + "print(\"y =\", [model.inner._y[i].x for i in range(3)])" + ] + }, + { + "cell_type": "markdown", + "id": "41b03bbc", + "metadata": {}, + "source": [ + "Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieve by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power." + ] + }, + { + "cell_type": "markdown", + "id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Note\n", + "\n", + "- In the example above, `GurobiModel` is just a thin wrapper around a standard Gurobi model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original Gurobi model can be accessed through `model.inner`, as illustrated above.\n", + "- To ensure training data consistency, MIPLearn requires all decision variables to have names.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "cf60c1dd", + "metadata": {}, + "source": [ + "## Generating training data\n", + "\n", + "Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as it is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n", + "\n", + "In the following, we will use MIPLearn to train machine learning models that is able to predict the optimal solution for instances that follow a given probability distribution, then it will provide this predicted solution to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "5eb09fab", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:22.758192368Z", + "start_time": "2023-06-06T20:49:22.724784572Z" + } + }, + "outputs": [], + "source": [ + "from scipy.stats import uniform\n", + "from typing import List\n", + "import random\n", + "\n", + "\n", + "def random_uc_data(samples: int, n: int, seed: int = 42) -> List[UnitCommitmentData]:\n", + " random.seed(seed)\n", + " np.random.seed(seed)\n", + " pmin = uniform(loc=100_000.0, scale=400_000.0).rvs(n)\n", + " pmax = pmin * uniform(loc=2.0, scale=2.5).rvs(n)\n", + " cfix = pmin * uniform(loc=100.0, scale=25.0).rvs(n)\n", + " cvar = uniform(loc=1.25, scale=0.25).rvs(n)\n", + " return [\n", + " UnitCommitmentData(\n", + " demand=pmax.sum() * uniform(loc=0.5, scale=0.25).rvs(),\n", + " pmin=pmin,\n", + " pmax=pmax,\n", + " cfix=cfix,\n", + " cvar=cvar,\n", + " )\n", + " for _ in range(samples)\n", + " ]" + ] + }, + { + "cell_type": "markdown", + "id": "3a03a7ac", + "metadata": {}, + "source": [ + "In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n", + "\n", + "Now we generate 500 instances of this problem, each one with 50 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete Pyomo models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00000.pkl.gz`, `uc/train/00001.pkl.gz`, etc., which contain the input data in compressed (gzipped) pickle format." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "6156752c", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:24.811192929Z", + "start_time": "2023-06-06T20:49:24.575639142Z" + } + }, + "outputs": [], + "source": [ + "from miplearn.io import write_pkl_gz\n", + "\n", + "data = random_uc_data(samples=500, n=500)\n", + "train_data = write_pkl_gz(data[0:450], \"uc/train\")\n", + "test_data = write_pkl_gz(data[450:500], \"uc/test\")" + ] + }, + { + "cell_type": "markdown", + "id": "b17af877", + "metadata": {}, + "source": [ + "Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files `uc/train/00000.h5`, `uc/train/00001.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00000.mps.gz`, `uc/train/00001.mps.gz`, etc." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "7623f002", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:34.936729253Z", + "start_time": "2023-06-06T20:49:25.936126612Z" + } + }, + "outputs": [], + "source": [ + "from miplearn.collectors.basic import BasicCollector\n", + "\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_uc_model, n_jobs=4)" + ] + }, + { + "cell_type": "markdown", + "id": "c42b1be1-9723-4827-82d8-974afa51ef9f", + "metadata": {}, + "source": [ + "## Training and solving test instances" + ] + }, + { + "cell_type": "markdown", + "id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c", + "metadata": {}, + "source": [ + "With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using $k$-nearest neighbors. More specifically, the strategy is to:\n", + "\n", + "1. Memorize the optimal solutions of all training instances;\n", + "2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n", + "3. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.\n", + "4. Provide this partial solution to the solver as a warm start.\n", + "\n", + "This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:38.997939600Z", + "start_time": "2023-06-06T20:49:38.968261432Z" + } + }, + "outputs": [], + "source": [ + "from sklearn.neighbors import KNeighborsClassifier\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "from miplearn.components.primal.mem import (\n", + " MemorizingPrimalComponent,\n", + " MergeTopSolutions,\n", + ")\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "\n", + "comp = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=25),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_constr_rhs\"],\n", + " ),\n", + " constructor=MergeTopSolutions(25, [0.0, 1.0]),\n", + " action=SetWarmStart(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94", + "metadata": {}, + "source": [ + "Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:42.072345411Z", + "start_time": "2023-06-06T20:49:41.294040974Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xa8b70287\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.01s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", + " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.290621916e+09\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xcf27855a\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "\n", + "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n", + "Loaded user MIP start with objective 8.29153e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 2 8.2915e+09 8.2907e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 1\n", + " Flow cover: 2\n", + "\n", + "Explored 1 nodes (565 simplex iterations) in 0.03 seconds (0.01 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 8.29153e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%\n" + ] + }, + { + "data": { + "text/plain": [ + "{'WS: Count': 1, 'WS: Number of variables set': 482.0}" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from miplearn.solvers.learning import LearningSolver\n", + "\n", + "solver_ml = LearningSolver(components=[comp])\n", + "solver_ml.fit(train_data)\n", + "solver_ml.optimize(test_data[0], build_uc_model)" + ] + }, + { + "cell_type": "markdown", + "id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0", + "metadata": {}, + "source": [ + "By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but a solver which does not apply any ML strategies. Note that our previously-defined component is not provided." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "2ff391ed-e855-4228-aa09-a7641d8c2893", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:44.012782276Z", + "start_time": "2023-06-06T20:49:43.813974362Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xa8b70287\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. Time\n", + " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", + " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.290621916e+09\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x4cbbf7c7\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Found heuristic solution: objective 9.757128e+09\n", + "\n", + "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2906e+09 0 1 9.7571e+09 8.2906e+09 15.0% - 0s\n", + "H 0 0 8.298273e+09 8.2906e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2983e+09 8.2907e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", + "H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", + "H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 2\n", + " MIR: 1\n", + "\n", + "Explored 1 nodes (1031 simplex iterations) in 0.15 seconds (0.03 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%\n" + ] + }, + { + "data": { + 
"text/plain": [ + "{}" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "solver_baseline = LearningSolver(components=[])\n", + "solver_baseline.fit(train_data)\n", + "solver_baseline.optimize(test_data[0], build_uc_model)" + ] + }, + { + "cell_type": "markdown", + "id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266", + "metadata": {}, + "source": [ + "In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems." + ] + }, + { + "cell_type": "markdown", + "id": "eec97f06", + "metadata": { + "tags": [] + }, + "source": [ + "## Accessing the solution\n", + "\n", + "In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "67a6cd18", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:50:12.869892930Z", + "start_time": "2023-06-06T20:50:12.509410473Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x19042f12\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n", + " 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.253596777e+09\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xf97cde91\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "\n", + "User MIP start produced solution with objective 8.25814e+09 (0.00s)\n", + "User MIP start produced solution with objective 8.25512e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", + "Loaded user MIP start with objective 8.25459e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Cover: 1\n", + " MIR: 2\n", + " StrongCG: 1\n", + " Flow cover: 1\n", + "\n", + "Explored 1 nodes (575 simplex iterations) in 0.05 seconds (0.01 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 4: 8.25459e+09 8.25483e+09 8.25512e+09 8.25814e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n", + "obj = 8254590409.969726\n", + "x = [1.0, 1.0, 0.0]\n", + "y = [935662.0949262811, 1604270.0218116897, 0.0]\n" + ] + } + ], + "source": [ + "data = random_uc_data(samples=1, n=500)[0]\n", + "model = build_uc_model(data)\n", + "solver_ml.optimize(model)\n", + "print(\"obj =\", model.inner.objVal)\n", + "print(\"x =\", [model.inner._x[i].x for i in range(3)])\n", + "print(\"y =\", [model.inner._y[i].x for i in range(3)])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5593d23a-83bd-4e16-8253-6300f5e3f63b", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + 
"name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_sources/tutorials/getting-started-jump.ipynb.txt b/0.4/_sources/tutorials/getting-started-jump.ipynb.txt new file mode 100644 index 00000000..8dbf587e --- /dev/null +++ b/0.4/_sources/tutorials/getting-started-jump.ipynb.txt @@ -0,0 +1,680 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "6b8983b1", + "metadata": { + "tags": [] + }, + "source": [ + "# Getting started (JuMP)\n", + "\n", + "## Introduction\n", + "\n", + "**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n", + "\n", + "1. Install the Julia/JuMP version of MIPLearn\n", + "2. Model a simple optimization problem using JuMP\n", + "3. Generate training data and train the ML models\n", + "4. Use the ML models together Gurobi to solve new instances\n", + "\n", + "
\n", + "Warning\n", + " \n", + "MIPLearn is still in early development stage. If run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n", + " \n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "id": "02f0a927", + "metadata": {}, + "source": [ + "## Installation\n", + "\n", + "MIPLearn is available in two versions:\n", + "\n", + "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", + "- Julia version, compatible with the JuMP modeling language.\n", + "\n", + "In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Julia in your machine. See the [official Julia website for more instructions](https://julialang.org/downloads/). After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n", + "\n", + "```\n", + "pkg> add MIPLearn@0.3\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "e8274543", + "metadata": {}, + "source": [ + "In addition to MIPLearn itself, we will also install:\n", + "\n", + "- the JuMP modeling language\n", + "- Gurobi, a state-of-the-art commercial MILP solver\n", + "- Distributions, to generate random data\n", + "- PyCall, to access ML model from Scikit-Learn\n", + "- Suppressor, to make the output cleaner\n", + "\n", + "```\n", + "pkg> add JuMP@1, Gurobi@1, Distributions@0.25, PyCall@1, Suppressor@0.2\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "a14e4550", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Note\n", + "\n", + "- If you do not have a Gurobi license available, you can also follow the tutorial by installing an open-source solver, such as `HiGHS`, and replacing `Gurobi.Optimizer` by `HiGHS.Optimizer` in all the code examples.\n", + "- In the code above, we install specific version of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Julia projects.\n", + " \n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "16b86823", + "metadata": {}, + "source": [ + "## Modeling a simple optimization problem\n", + "\n", + "To illustrate how can MIPLearn be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n", + "\n", + "Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power should each generator produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ to $p^\\text{max}_i$ megawatts of power, and it costs the company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand $d$ (in megawatts).\n", + "\n", + "This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power does $g_i$ produce. The problem is then given by:" + ] + }, + { + "cell_type": "markdown", + "id": "f12c3702", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align}\n", + "\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n", + "\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n", + "& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n", + "& \\sum_{i=1}^n y_i = d \\\\\n", + "& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n", + "& y_i \\geq 0 & i=1,\\ldots,n\n", + "\\end{align}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "be3989ed", + "metadata": {}, + "source": [ + "
\n", + "\n", + "Note\n", + "\n", + "We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "a5fd33f6", + "metadata": {}, + "source": [ + "Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Julia and JuMP. We start by defining a data class `UnitCommitmentData`, which holds all the input data." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "c62ebff1-db40-45a1-9997-d121837f067b", + "metadata": {}, + "outputs": [], + "source": [ + "struct UnitCommitmentData\n", + " demand::Float64\n", + " pmin::Vector{Float64}\n", + " pmax::Vector{Float64}\n", + " cfix::Vector{Float64}\n", + " cvar::Vector{Float64}\n", + "end;" + ] + }, + { + "cell_type": "markdown", + "id": "29f55efa-0751-465a-9b0a-a821d46a3d40", + "metadata": {}, + "source": [ + "Next, we write a `build_uc_model` function, which converts the input data into a concrete JuMP model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a JLD2 file containing this data." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "79ef7775-18ca-4dfa-b438-49860f762ad0", + "metadata": {}, + "outputs": [], + "source": [ + "using MIPLearn\n", + "using JuMP\n", + "using Gurobi\n", + "\n", + "function build_uc_model(data)\n", + " if data isa String\n", + " data = read_jld2(data)\n", + " end\n", + " model = Model(Gurobi.Optimizer)\n", + " G = 1:length(data.pmin)\n", + " @variable(model, x[G], Bin)\n", + " @variable(model, y[G] >= 0)\n", + " @objective(model, Min, sum(data.cfix[g] * x[g] + data.cvar[g] * y[g] for g in G))\n", + " @constraint(model, eq_max_power[g in G], y[g] <= data.pmax[g] * x[g])\n", + " @constraint(model, eq_min_power[g in G], y[g] >= data.pmin[g] * x[g])\n", + " @constraint(model, eq_demand, sum(y[g] for g in G) == data.demand)\n", + " return JumpModel(model)\n", + "end;" + ] + }, + { + "cell_type": "markdown", + "id": "c22714a3", + "metadata": {}, + "source": [ + "At this point, we can already use Gurobi to find optimal solutions to any instance of this problem. 
To illustrate this, let us solve a small instance with three generators:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "dd828d68-fd43-4d2a-a058-3e2628d99d9e", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:01:10.993801745Z", + "start_time": "2023-06-06T20:01:10.887580927Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", + "\n", + "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n", + "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n", + "\n", + "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", + "Model fingerprint: 0x55e33a07\n", + "Variable types: 3 continuous, 3 integer (3 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 7e+01]\n", + " Objective range [2e+00, 7e+02]\n", + " Bounds range [0e+00, 0e+00]\n", + " RHS range [1e+02, 1e+02]\n", + "Presolve removed 2 rows and 1 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 5 rows, 5 columns, 13 nonzeros\n", + "Variable types: 0 continuous, 5 integer (3 binary)\n", + "Found heuristic solution: objective 1400.0000000\n", + "\n", + "Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n", + " 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n", + "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (5 simplex iterations) in 0.00 seconds (0.00 work units)\n", + "Thread count was 32 (of 32 available processors)\n", + "\n", + "Solution count 2: 1320 1400 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 371, time in user-callback 0.00 sec\n", + "objective_value(model.inner) = 1320.0\n", + "Vector(value.(model.inner[:x])) = [-0.0, 1.0, 1.0]\n", + "Vector(value.(model.inner[:y])) = [0.0, 60.0, 40.0]\n" + ] + } + ], + "source": [ + "model = build_uc_model(\n", + " UnitCommitmentData(\n", + " 100.0, # demand\n", + " [10, 20, 30], # pmin\n", + " [50, 60, 70], # pmax\n", + " [700, 600, 500], # cfix\n", + " [1.5, 2.0, 2.5], # cvar\n", + " )\n", + ")\n", + "model.optimize()\n", + "@show objective_value(model.inner)\n", + "@show Vector(value.(model.inner[:x]))\n", + "@show Vector(value.(model.inner[:y]));" + ] + }, + { + "cell_type": "markdown", + "id": "41b03bbc", + "metadata": {}, + "source": [ + "Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieve by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power." + ] + }, + { + "cell_type": "markdown", + "id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Notes\n", + " \n", + "- In the example above, `JumpModel` is just a thin wrapper around a standard JuMP model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original JuMP model can be accessed through `model.inner`, as illustrated above.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "cf60c1dd", + "metadata": {}, + "source": [ + "## Generating training data\n", + "\n", + "Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as it is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n", + "\n", + "In the following, we will use MIPLearn to train machine learning models that is able to predict the optimal solution for instances that follow a given probability distribution, then it will provide this predicted solution to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "1326efd7-3869-4137-ab6b-df9cb609a7e0", + "metadata": {}, + "outputs": [], + "source": [ + "using Distributions\n", + "using Random\n", + "\n", + "function random_uc_data(; samples::Int, n::Int, seed::Int=42)::Vector\n", + " Random.seed!(seed)\n", + " pmin = rand(Uniform(100_000, 500_000), n)\n", + " pmax = pmin .* rand(Uniform(2, 2.5), n)\n", + " cfix = pmin .* rand(Uniform(100, 125), n)\n", + " cvar = rand(Uniform(1.25, 1.50), n)\n", + " return [\n", + " UnitCommitmentData(\n", + " sum(pmax) * rand(Uniform(0.5, 0.75)),\n", + " pmin,\n", + " pmax,\n", + " cfix,\n", + " cvar,\n", + " )\n", + " for _ in 1:samples\n", + " ]\n", + "end;" + ] + }, + { + "cell_type": "markdown", + "id": "3a03a7ac", + "metadata": {}, + "source": [ + "In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n", + "\n", + "Now we generate 500 instances of this problem, each one with 50 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete Pyomo models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00001.jld2`, `uc/train/00002.jld2`, etc., which contain the input data in JLD2 format." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "6156752c", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:03:04.782830561Z", + "start_time": "2023-06-06T20:03:04.530421396Z" + } + }, + "outputs": [], + "source": [ + "data = random_uc_data(samples=500, n=500)\n", + "train_data = write_jld2(data[1:450], \"uc/train\")\n", + "test_data = write_jld2(data[451:500], \"uc/test\");" + ] + }, + { + "cell_type": "markdown", + "id": "b17af877", + "metadata": {}, + "source": [ + "Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. 
The data is stored in HDF5 files `uc/train/00001.h5`, `uc/train/00002.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00001.mps.gz`, `uc/train/00002.mps.gz`, etc." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "7623f002", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:03:35.571497019Z", + "start_time": "2023-06-06T20:03:25.804104036Z" + } + }, + "outputs": [], + "source": [ + "using Suppressor\n", + "@suppress_out begin\n", + " bc = BasicCollector()\n", + " bc.collect(train_data, build_uc_model)\n", + "end" + ] + }, + { + "cell_type": "markdown", + "id": "c42b1be1-9723-4827-82d8-974afa51ef9f", + "metadata": {}, + "source": [ + "## Training and solving test instances" + ] + }, + { + "cell_type": "markdown", + "id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c", + "metadata": {}, + "source": [ + "With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using $k$-nearest neighbors. More specifically, the strategy is to:\n", + "\n", + "1. Memorize the optimal solutions of all training instances;\n", + "2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n", + "3. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.\n", + "4. Provide this partial solution to the solver as a warm start.\n", + "\n", + "This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:20.497772794Z", + "start_time": "2023-06-06T20:05:20.484821405Z" + } + }, + "outputs": [], + "source": [ + "# Load kNN classifier from Scikit-Learn\n", + "using PyCall\n", + "KNeighborsClassifier = pyimport(\"sklearn.neighbors\").KNeighborsClassifier\n", + "\n", + "# Build the MIPLearn component\n", + "comp = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=25),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_constr_rhs\"],\n", + " ),\n", + " constructor=MergeTopSolutions(25, [0.0, 1.0]),\n", + " action=SetWarmStart(),\n", + ");" + ] + }, + { + "cell_type": "markdown", + "id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94", + "metadata": {}, + "source": [ + "Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:22.672002339Z", + "start_time": "2023-06-06T20:05:21.447466634Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", + "\n", + "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n", + "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xd2378195\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [0e+00, 0e+00]\n", + " RHS range [2e+08, 2e+08]\n", + "\n", + "User MIP start produced solution with objective 1.02165e+10 (0.00s)\n", + "Loaded user MIP start with objective 1.02165e+10\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1.0216e+10 0 1 1.0217e+10 1.0216e+10 0.01% - 0s\n", + "\n", + "Explored 1 nodes (510 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 32 (of 32 available processors)\n", + "\n", + "Solution count 1: 1.02165e+10 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.021651058978e+10, best bound 1.021567971257e+10, gap 0.0081%\n", + "\n", + "User-callback calls 169, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "solver_ml = LearningSolver(components=[comp])\n", + "solver_ml.fit(train_data)\n", + "solver_ml.optimize(test_data[1], build_uc_model);" + ] + }, + { + "cell_type": "markdown", + "id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0", + "metadata": {}, + "source": [ + "By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but a solver which does not apply any ML strategies. Note that our previously-defined component is not provided." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "2ff391ed-e855-4228-aa09-a7641d8c2893", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:46.969575966Z", + "start_time": "2023-06-06T20:05:46.420803286Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", + "\n", + "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n", + "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xb45c0594\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [0e+00, 0e+00]\n", + " RHS range [2e+08, 2e+08]\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Found heuristic solution: objective 1.071463e+10\n", + "\n", + "Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1.0216e+10 0 1 1.0715e+10 1.0216e+10 4.66% - 0s\n", + "H 0 0 1.025162e+10 1.0216e+10 0.35% - 0s\n", + " 0 0 1.0216e+10 0 1 1.0252e+10 1.0216e+10 0.35% - 0s\n", + "H 0 0 1.023090e+10 1.0216e+10 0.15% - 0s\n", + "H 0 0 1.022335e+10 1.0216e+10 0.07% - 0s\n", + "H 0 0 1.022281e+10 1.0216e+10 0.07% - 0s\n", + "H 0 0 1.021753e+10 1.0216e+10 0.02% - 0s\n", + "H 0 0 1.021752e+10 1.0216e+10 0.02% - 0s\n", + " 0 0 1.0216e+10 0 3 1.0218e+10 1.0216e+10 0.02% - 0s\n", + " 0 0 1.0216e+10 0 1 1.0218e+10 1.0216e+10 0.02% - 0s\n", + "H 0 0 1.021651e+10 1.0216e+10 0.01% - 0s\n", + "\n", + "Explored 1 nodes (764 simplex iterations) in 0.03 seconds (0.02 work units)\n", + "Thread count was 32 (of 32 available processors)\n", + "\n", + "Solution count 7: 1.02165e+10 1.02175e+10 1.02228e+10 ... 1.07146e+10\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.021651058978e+10, best bound 1.021573363741e+10, gap 0.0076%\n", + "\n", + "User-callback calls 204, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "solver_baseline = LearningSolver(components=[])\n", + "solver_baseline.fit(train_data)\n", + "solver_baseline.optimize(test_data[1], build_uc_model);" + ] + }, + { + "cell_type": "markdown", + "id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266", + "metadata": {}, + "source": [ + "In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems." + ] + }, + { + "cell_type": "markdown", + "id": "eec97f06", + "metadata": { + "tags": [] + }, + "source": [ + "## Accessing the solution\n", + "\n", + "In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "67a6cd18", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:06:26.913448568Z", + "start_time": "2023-06-06T20:06:26.169047914Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", + "\n", + "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n", + "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x974a7fba\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [0e+00, 0e+00]\n", + " RHS range [2e+08, 2e+08]\n", + "\n", + "User MIP start produced solution with objective 9.86729e+09 (0.00s)\n", + "User MIP start produced solution with objective 9.86675e+09 (0.00s)\n", + "User MIP start produced solution with objective 9.86654e+09 (0.01s)\n", + "User MIP start produced solution with objective 9.8661e+09 (0.01s)\n", + "Loaded user MIP start with objective 9.8661e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 9.865344e+09, 510 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 9.8653e+09 0 1 9.8661e+09 9.8653e+09 0.01% - 0s\n", + "\n", + "Explored 1 nodes (510 simplex iterations) in 0.02 seconds (0.01 work units)\n", + "Thread count was 32 (of 32 available processors)\n", + "\n", + "Solution count 4: 9.8661e+09 9.86654e+09 9.86675e+09 9.86729e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 9.866096485614e+09, best bound 9.865343669936e+09, gap 0.0076%\n", + "\n", + "User-callback calls 182, time in user-callback 0.00 sec\n", + "objective_value(model.inner) = 9.866096485613789e9\n" + ] + } + ], + "source": [ + "data = random_uc_data(samples=1, n=500)[1]\n", + "model = build_uc_model(data)\n", + "solver_ml.optimize(model)\n", + "@show objective_value(model.inner);" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Julia 1.9.0", + "language": "julia", + "name": "julia-1.9" + }, + "language_info": { + "file_extension": ".jl", + "mimetype": "application/julia", + "name": "julia", + "version": "1.9.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_sources/tutorials/getting-started-pyomo.ipynb.txt b/0.4/_sources/tutorials/getting-started-pyomo.ipynb.txt new file mode 100644 index 00000000..e109ddb5 --- /dev/null +++ b/0.4/_sources/tutorials/getting-started-pyomo.ipynb.txt @@ -0,0 +1,858 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "6b8983b1", + "metadata": { + "tags": [] + }, + "source": [ + "# Getting started (Pyomo)\n", + "\n", + "## Introduction\n", + "\n", + "**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n", + "\n", + "1. Install the Python/Pyomo version of MIPLearn\n", + "2. Model a simple optimization problem using Pyomo\n", + "3. 
Generate training data and train the ML models\n",
+    "4. Use the ML models together with Gurobi to solve new instances\n",
+    "\n",
+    "
\n", + "Note\n", + " \n", + "The Python/Pyomo version of MIPLearn is currently only compatible with Pyomo persistent solvers (Gurobi, CPLEX and XPRESS). For broader solver compatibility, see the Julia/JuMP version of the package.\n", + "
\n", + "\n", + "
\n", + "Warning\n", + " \n", + "MIPLearn is still in early development stage. If run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n", + " \n", + "
\n" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "02f0a927", + "metadata": {}, + "source": [ + "## Installation\n", + "\n", + "MIPLearn is available in two versions:\n", + "\n", + "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", + "- Julia version, compatible with the JuMP modeling language.\n", + "\n", + "In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n", + "\n", + "```\n", + "$ pip install MIPLearn==0.3\n", + "```\n", + "\n", + "In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems.\n", + "\n", + "```\n", + "$ pip install 'gurobipy>=10,<10.1'\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "a14e4550", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Note\n", + " \n", + "In the code above, we install specific version of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Python projects.\n", + " \n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "16b86823", + "metadata": {}, + "source": [ + "## Modeling a simple optimization problem\n", + "\n", + "To illustrate how can MIPLearn be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n", + "\n", + "Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power should each generator produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ to $p^\\text{max}_i$ megawatts of power, and it costs the company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand $d$ (in megawatts).\n", + "\n", + "This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power does $g_i$ produce. The problem is then given by:" + ] + }, + { + "cell_type": "markdown", + "id": "f12c3702", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align}\n", + "\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n", + "\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n", + "& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n", + "& \\sum_{i=1}^n y_i = d \\\\\n", + "& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n", + "& y_i \\geq 0 & i=1,\\ldots,n\n", + "\\end{align}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "be3989ed", + "metadata": {}, + "source": [ + "
\n", + "\n", + "Note\n", + "\n", + "We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "a5fd33f6", + "metadata": {}, + "source": [ + "Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Python and Pyomo. We start by defining a data class `UnitCommitmentData`, which holds all the input data." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "22a67170-10b4-43d3-8708-014d91141e73", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:00:03.278853343Z", + "start_time": "2023-06-06T20:00:03.123324067Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "from dataclasses import dataclass\n", + "from typing import List\n", + "\n", + "import numpy as np\n", + "\n", + "\n", + "@dataclass\n", + "class UnitCommitmentData:\n", + " demand: float\n", + " pmin: List[float]\n", + " pmax: List[float]\n", + " cfix: List[float]\n", + " cvar: List[float]" + ] + }, + { + "cell_type": "markdown", + "id": "29f55efa-0751-465a-9b0a-a821d46a3d40", + "metadata": {}, + "source": [ + "Next, we write a `build_uc_model` function, which converts the input data into a concrete Pyomo model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a compressed pickle file containing this data." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "2f67032f-0d74-4317-b45c-19da0ec859e9", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:00:45.890126754Z", + "start_time": "2023-06-06T20:00:45.637044282Z" + } + }, + "outputs": [], + "source": [ + "import pyomo.environ as pe\n", + "from typing import Union\n", + "from miplearn.io import read_pkl_gz\n", + "from miplearn.solvers.pyomo import PyomoModel\n", + "\n", + "\n", + "def build_uc_model(data: Union[str, UnitCommitmentData]) -> PyomoModel:\n", + " if isinstance(data, str):\n", + " data = read_pkl_gz(data)\n", + "\n", + " model = pe.ConcreteModel()\n", + " n = len(data.pmin)\n", + " model.x = pe.Var(range(n), domain=pe.Binary)\n", + " model.y = pe.Var(range(n), domain=pe.NonNegativeReals)\n", + " model.obj = pe.Objective(\n", + " expr=sum(\n", + " data.cfix[i] * model.x[i] + data.cvar[i] * model.y[i] for i in range(n)\n", + " )\n", + " )\n", + " model.eq_max_power = pe.ConstraintList()\n", + " model.eq_min_power = pe.ConstraintList()\n", + " for i in range(n):\n", + " model.eq_max_power.add(model.y[i] <= data.pmax[i] * model.x[i])\n", + " model.eq_min_power.add(model.y[i] >= data.pmin[i] * model.x[i])\n", + " model.eq_demand = pe.Constraint(\n", + " expr=sum(model.y[i] for i in range(n)) == data.demand,\n", + " )\n", + " return PyomoModel(model, \"gurobi_persistent\")" + ] + }, + { + "cell_type": "markdown", + "id": "c22714a3", + "metadata": {}, + "source": [ + "At this point, we can already use Pyomo and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. 
To illustrate this, let us solve a small instance with three generators:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "2a896f47", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:01:10.993801745Z", + "start_time": "2023-06-06T20:01:10.887580927Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", + "Model fingerprint: 0x15c7a953\n", + "Variable types: 3 continuous, 3 integer (3 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 7e+01]\n", + " Objective range [2e+00, 7e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+02, 1e+02]\n", + "Presolve removed 2 rows and 1 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 5 rows, 5 columns, 13 nonzeros\n", + "Variable types: 0 continuous, 5 integer (3 binary)\n", + "Found heuristic solution: objective 1400.0000000\n", + "\n", + "Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n", + " 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n", + "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 2: 1320 1400 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", + "WARNING: Cannot get reduced costs for MIP.\n", + "WARNING: Cannot get duals for MIP.\n", + "obj = 1320.0\n", + "x = [-0.0, 1.0, 1.0]\n", + "y = [0.0, 60.0, 40.0]\n" + ] + } + ], + "source": [ + "model = build_uc_model(\n", + " UnitCommitmentData(\n", + " demand=100.0,\n", + " pmin=[10, 20, 30],\n", + " pmax=[50, 60, 70],\n", + " cfix=[700, 600, 500],\n", + " cvar=[1.5, 2.0, 2.5],\n", + " )\n", + ")\n", + "\n", + "model.optimize()\n", + "print(\"obj =\", model.inner.obj())\n", + "print(\"x =\", [model.inner.x[i].value for i in range(3)])\n", + "print(\"y =\", [model.inner.y[i].value for i in range(3)])" + ] + }, + { + "cell_type": "markdown", + "id": "41b03bbc", + "metadata": {}, + "source": [ + "Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieve by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power." + ] + }, + { + "cell_type": "markdown", + "id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Notes\n", + " \n", + "- In the example above, `PyomoModel` is just a thin wrapper around a standard Pyomo model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original Pyomo model can be accessed through `model.inner`, as illustrated above. \n", + "- To use CPLEX or XPRESS, instead of Gurobi, replace `gurobi_persistent` by `cplex_persistent` or `xpress_persistent` in the `build_uc_model`. Note that only persistent Pyomo solvers are currently supported. Pull requests adding support for other types of solver are very welcome.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "cf60c1dd", + "metadata": {}, + "source": [ + "## Generating training data\n", + "\n", + "Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as it is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n", + "\n", + "In the following, we will use MIPLearn to train machine learning models that is able to predict the optimal solution for instances that follow a given probability distribution, then it will provide this predicted solution to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "5eb09fab", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:02:27.324208900Z", + "start_time": "2023-06-06T20:02:26.990044230Z" + } + }, + "outputs": [], + "source": [ + "from scipy.stats import uniform\n", + "from typing import List\n", + "import random\n", + "\n", + "\n", + "def random_uc_data(samples: int, n: int, seed: int = 42) -> List[UnitCommitmentData]:\n", + " random.seed(seed)\n", + " np.random.seed(seed)\n", + " pmin = uniform(loc=100_000.0, scale=400_000.0).rvs(n)\n", + " pmax = pmin * uniform(loc=2.0, scale=2.5).rvs(n)\n", + " cfix = pmin * uniform(loc=100.0, scale=25.0).rvs(n)\n", + " cvar = uniform(loc=1.25, scale=0.25).rvs(n)\n", + " return [\n", + " UnitCommitmentData(\n", + " demand=pmax.sum() * uniform(loc=0.5, scale=0.25).rvs(),\n", + " pmin=pmin,\n", + " pmax=pmax,\n", + " cfix=cfix,\n", + " cvar=cvar,\n", + " )\n", + " for _ in range(samples)\n", + " ]" + ] + }, + { + "cell_type": "markdown", + "id": "3a03a7ac", + "metadata": {}, + "source": [ + "In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n", + "\n", + "Now we generate 500 instances of this problem, each one with 50 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete Pyomo models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00000.pkl.gz`, `uc/train/00001.pkl.gz`, etc., which contain the input data in compressed (gzipped) pickle format." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "6156752c", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:03:04.782830561Z", + "start_time": "2023-06-06T20:03:04.530421396Z" + } + }, + "outputs": [], + "source": [ + "from miplearn.io import write_pkl_gz\n", + "\n", + "data = random_uc_data(samples=500, n=500)\n", + "train_data = write_pkl_gz(data[0:450], \"uc/train\")\n", + "test_data = write_pkl_gz(data[450:500], \"uc/test\")" + ] + }, + { + "cell_type": "markdown", + "id": "b17af877", + "metadata": {}, + "source": [ + "Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files `uc/train/00000.h5`, `uc/train/00001.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00000.mps.gz`, `uc/train/00001.mps.gz`, etc." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "7623f002", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:03:35.571497019Z", + "start_time": "2023-06-06T20:03:25.804104036Z" + } + }, + "outputs": [], + "source": [ + "from miplearn.collectors.basic import BasicCollector\n", + "\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_uc_model, n_jobs=4)" + ] + }, + { + "cell_type": "markdown", + "id": "c42b1be1-9723-4827-82d8-974afa51ef9f", + "metadata": {}, + "source": [ + "## Training and solving test instances" + ] + }, + { + "cell_type": "markdown", + "id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c", + "metadata": {}, + "source": [ + "With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using $k$-nearest neighbors. More specifically, the strategy is to:\n", + "\n", + "1. Memorize the optimal solutions of all training instances;\n", + "2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n", + "3. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.\n", + "4. Provide this partial solution to the solver as a warm start.\n", + "\n", + "This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide." 
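+    ,
+    "\n",
+    "Before looking at the component itself, the merging rule in step 3 can be pictured with a small, self-contained sketch. This is only an illustration written with plain NumPy; it is independent of the actual MIPLearn implementation:\n",
+    "\n",
+    "```python\n",
+    "import numpy as np\n",
+    "\n",
+    "# Hypothetical example: three memorized solutions of four binary variables (one row per solution).\n",
+    "solutions = np.array([\n",
+    "    [1, 0, 1, 0],\n",
+    "    [1, 0, 0, 0],\n",
+    "    [1, 0, 1, 1],\n",
+    "])\n",
+    "warm_start = np.full(solutions.shape[1], np.nan)  # NaN = leave the variable free\n",
+    "warm_start[np.all(solutions == 1, axis=0)] = 1.0  # variables that are 1 in every solution\n",
+    "warm_start[np.all(solutions == 0, axis=0)] = 0.0  # variables that are 0 in every solution\n",
+    "print(warm_start)  # only the first two variables receive a value\n",
+    "```"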
+ ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:20.497772794Z", + "start_time": "2023-06-06T20:05:20.484821405Z" + } + }, + "outputs": [], + "source": [ + "from sklearn.neighbors import KNeighborsClassifier\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "from miplearn.components.primal.mem import (\n", + " MemorizingPrimalComponent,\n", + " MergeTopSolutions,\n", + ")\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "\n", + "comp = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=25),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_constr_rhs\"],\n", + " ),\n", + " constructor=MergeTopSolutions(25, [0.0, 1.0]),\n", + " action=SetWarmStart(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94", + "metadata": {}, + "source": [ + "Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:22.672002339Z", + "start_time": "2023-06-06T20:05:21.447466634Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x5e67c6ee\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", + " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.290621916e+09\n", + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x4a7cfe2b\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "\n", + "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n", + "Loaded user MIP start with objective 8.29153e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 2 8.2915e+09 8.2907e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 1\n", + " Flow cover: 2\n", + "\n", + "Explored 1 nodes (565 simplex iterations) in 0.04 seconds (0.01 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 8.29153e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%\n", + "WARNING: Cannot get reduced costs for MIP.\n", + "WARNING: Cannot get duals for MIP.\n" + ] + }, + { + "data": { + "text/plain": [ + "{}" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from miplearn.solvers.learning import LearningSolver\n", + "\n", + "solver_ml = LearningSolver(components=[comp])\n", + "solver_ml.fit(train_data)\n", + "solver_ml.optimize(test_data[0], build_uc_model)" + ] + }, + { + "cell_type": "markdown", + "id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0", + "metadata": {}, + "source": [ + "By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but a solver which does not apply any ML strategies. Note that our previously-defined component is not provided." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "2ff391ed-e855-4228-aa09-a7641d8c2893", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:46.969575966Z", + "start_time": "2023-06-06T20:05:46.420803286Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x5e67c6ee\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. Time\n", + " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", + " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.290621916e+09\n", + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x8a0f9587\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Found heuristic solution: objective 9.757128e+09\n", + "\n", + "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2906e+09 0 1 9.7571e+09 8.2906e+09 15.0% - 0s\n", + "H 0 0 8.298273e+09 8.2906e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2983e+09 8.2907e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", + "H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", + "H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 2\n", + " MIR: 1\n", + "\n", + "Explored 1 nodes (1025 simplex iterations) in 0.12 seconds (0.03 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.291465302389e+09, 
best bound 8.290781665333e+09, gap 0.0082%\n", + "WARNING: Cannot get reduced costs for MIP.\n", + "WARNING: Cannot get duals for MIP.\n" + ] + }, + { + "data": { + "text/plain": [ + "{}" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "solver_baseline = LearningSolver(components=[])\n", + "solver_baseline.fit(train_data)\n", + "solver_baseline.optimize(test_data[0], build_uc_model)" + ] + }, + { + "cell_type": "markdown", + "id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266", + "metadata": {}, + "source": [ + "In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems." + ] + }, + { + "cell_type": "markdown", + "id": "eec97f06", + "metadata": { + "tags": [] + }, + "source": [ + "## Accessing the solution\n", + "\n", + "In the example above, we used `LearningSolver.fit` and `LearningSolver.optimize` together with data files to train the ML component and to solve the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "67a6cd18", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:06:26.913448568Z", + "start_time": "2023-06-06T20:06:26.169047914Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x2dfe4e1c\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n", + " 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.253596777e+09\n", + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x0f0924a1\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "\n", + "User MIP start produced solution with objective 8.25814e+09 (0.00s)\n", + "User MIP start produced solution with objective 8.25512e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", + "Loaded user MIP start with objective 8.25459e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Cover: 1\n", + " MIR: 2\n", + " StrongCG: 1\n", + " Flow cover: 1\n", + "\n", + "Explored 1 nodes (575 simplex iterations) in 0.09 seconds (0.01 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 4: 8.25459e+09 8.25483e+09 8.25512e+09 8.25814e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n", + "WARNING: Cannot get reduced costs for MIP.\n", + "WARNING: Cannot get duals for MIP.\n", + "obj = 8254590409.96973\n", + " x = [1.0, 1.0, 0.0, 1.0, 1.0]\n", + " y = [935662.0949262811, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n" + ] + } + ], + "source": [ + "data = random_uc_data(samples=1, n=500)[0]\n", + "model = build_uc_model(data)\n", + "solver_ml.optimize(model)\n", + "print(\"obj =\", model.inner.obj())\n", + "print(\" x =\", [model.inner.x[i].value for i in range(5)])\n", + "print(\" y =\", [model.inner.y[i].value for i in range(5)])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5593d23a-83bd-4e16-8253-6300f5e3f63b", + "metadata": {}, + "outputs": [], + 
"source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/_static/__init__.py b/0.4/_static/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/0.4/_static/__pycache__/__init__.cpython-311.pyc b/0.4/_static/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 00000000..3a82a9a0 Binary files /dev/null and b/0.4/_static/__pycache__/__init__.cpython-311.pyc differ diff --git a/0.4/_static/basic.css b/0.4/_static/basic.css new file mode 100644 index 00000000..e760386b --- /dev/null +++ b/0.4/_static/basic.css @@ -0,0 +1,925 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 270px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + 
+span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + +div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +a:visited { + color: #551A8B; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} + +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ + +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition 
dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, +aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 8px 1px 5px; + border-top: 0; + border-left: 0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: 
upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} + +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + +dl.field-list > dt { + font-weight: bold; + word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +.sig dd { + margin-top: 0px; + margin-bottom: 0px; +} + +.sig dl { + margin-top: 0px; + margin-bottom: 0px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + margin-bottom: 0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +.translated { + background-color: rgba(207, 255, 207, 0.2) +} + +.untranslated { + background-color: rgba(255, 207, 207, 0.2) +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + 
+table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code { + background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/0.4/_static/css/index.c5995385ac14fb8791e8eb36b4908be2.css b/0.4/_static/css/index.c5995385ac14fb8791e8eb36b4908be2.css new file mode 100644 index 00000000..655656db --- /dev/null +++ b/0.4/_static/css/index.c5995385ac14fb8791e8eb36b4908be2.css @@ -0,0 +1,6 @@ +/*! + * Bootstrap v4.5.0 (https://getbootstrap.com/) + * Copyright 2011-2020 The Bootstrap Authors + * Copyright 2011-2020 Twitter, Inc. 
+ * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) + */:root{--blue:#007bff;--indigo:#6610f2;--purple:#6f42c1;--pink:#e83e8c;--red:#dc3545;--orange:#fd7e14;--yellow:#ffc107;--green:#28a745;--teal:#20c997;--cyan:#17a2b8;--white:#fff;--gray:#6c757d;--gray-dark:#343a40;--primary:#007bff;--secondary:#6c757d;--success:#28a745;--info:#17a2b8;--warning:#ffc107;--danger:#dc3545;--light:#f8f9fa;--dark:#343a40;--breakpoint-xs:0;--breakpoint-sm:540px;--breakpoint-md:720px;--breakpoint-lg:960px;--breakpoint-xl:1200px;--font-family-sans-serif:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";--font-family-monospace:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace}*,:after,:before{box-sizing:border-box}html{font-family:sans-serif;line-height:1.15;-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:rgba(0,0,0,0)}article,aside,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}body{margin:0;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;font-size:1rem;line-height:1.5;color:#212529;text-align:left}[tabindex="-1"]:focus:not(:focus-visible){outline:0!important}hr{box-sizing:content-box;height:0;overflow:visible}h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem}p{margin-top:0;margin-bottom:1rem}abbr[data-original-title],abbr[title]{text-decoration:underline;text-decoration:underline dotted;cursor:help;border-bottom:0;text-decoration-skip-ink:none}address{font-style:normal;line-height:inherit}address,dl,ol,ul{margin-bottom:1rem}dl,ol,ul{margin-top:0}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 1rem}b,strong{font-weight:bolder}small{font-size:80%}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#007bff;background-color:transparent}a:hover{color:#0056b3}a:not([href]),a:not([href]):hover{color:inherit;text-decoration:none}code,kbd,pre,samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}pre{margin-top:0;margin-bottom:1rem;overflow:auto;-ms-overflow-style:scrollbar}figure{margin:0 0 1rem}img{border-style:none}img,svg{vertical-align:middle}svg{overflow:hidden}table{border-collapse:collapse}caption{padding-top:.75rem;padding-bottom:.75rem;color:#6c757d;text-align:left;caption-side:bottom}th{text-align:inherit}label{display:inline-block;margin-bottom:.5rem}button{border-radius:0}button:focus{outline:1px dotted;outline:5px auto 
-webkit-focus-ring-color}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,input{overflow:visible}button,select{text-transform:none}[role=button]{cursor:pointer}select{word-wrap:normal}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{padding:0;border-style:none}input[type=checkbox],input[type=radio]{box-sizing:border-box;padding:0}textarea{overflow:auto;resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{display:block;width:100%;max-width:100%;padding:0;margin-bottom:.5rem;font-size:1.5rem;line-height:inherit;color:inherit;white-space:normal}progress{vertical-align:baseline}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:none}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}summary{display:list-item;cursor:pointer}template{display:none}[hidden]{display:none!important}.h1,.h2,.h3,.h4,.h5,.h6,h1,h2,h3,h4,h5,h6{margin-bottom:.5rem;font-weight:500;line-height:1.2}.h1,h1{font-size:2.5rem}.h2,h2{font-size:2rem}.h3,h3{font-size:1.75rem}.h4,h4{font-size:1.5rem}.h5,h5{font-size:1.25rem}.h6,h6{font-size:1rem}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:6rem}.display-1,.display-2{font-weight:300;line-height:1.2}.display-2{font-size:5.5rem}.display-3{font-size:4.5rem}.display-3,.display-4{font-weight:300;line-height:1.2}.display-4{font-size:3.5rem}hr{margin-top:1rem;margin-bottom:1rem;border-top:1px solid rgba(0,0,0,.1)}.small,small{font-size:80%;font-weight:400}.mark,mark{padding:.2em;background-color:#fcf8e3}.list-inline,.list-unstyled{padding-left:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-right:.5rem}.initialism{font-size:90%;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote-footer{display:block;font-size:80%;color:#6c757d}.blockquote-footer:before{content:"\2014\00A0"}.img-fluid,.img-thumbnail{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid #dee2e6;border-radius:.25rem}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:90%;color:#6c757d}code{font-size:87.5%;color:#e83e8c;word-wrap:break-word}a>code{color:inherit}kbd{padding:.2rem .4rem;font-size:87.5%;color:#fff;background-color:#212529;border-radius:.2rem}kbd kbd{padding:0;font-size:100%;font-weight:700}pre{display:block;font-size:87.5%;color:#212529}pre code{font-size:inherit;color:inherit;word-break:normal}.pre-scrollable{max-height:340px;overflow-y:scroll}.container{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media (min-width:540px){.container{max-width:540px}}@media (min-width:720px){.container{max-width:720px}}@media (min-width:960px){.container{max-width:960px}}@media (min-width:1200px){.container{max-width:1400px}}.container-fluid,.container-lg,.container-md,.container-sm,.container-xl{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media (min-width:540px){.container,.container-sm{max-width:540px}}@media 
(min-width:720px){.container,.container-md,.container-sm{max-width:720px}}@media (min-width:960px){.container,.container-lg,.container-md,.container-sm{max-width:960px}}@media (min-width:1200px){.container,.container-lg,.container-md,.container-sm,.container-xl{max-width:1400px}}.row{display:flex;flex-wrap:wrap;margin-right:-15px;margin-left:-15px}.no-gutters{margin-right:0;margin-left:0}.no-gutters>.col,.no-gutters>[class*=col-]{padding-right:0;padding-left:0}.col,.col-1,.col-2,.col-3,.col-4,.col-5,.col-6,.col-7,.col-8,.col-9,.col-10,.col-11,.col-12,.col-auto,.col-lg,.col-lg-1,.col-lg-2,.col-lg-3,.col-lg-4,.col-lg-5,.col-lg-6,.col-lg-7,.col-lg-8,.col-lg-9,.col-lg-10,.col-lg-11,.col-lg-12,.col-lg-auto,.col-md,.col-md-1,.col-md-2,.col-md-3,.col-md-4,.col-md-5,.col-md-6,.col-md-7,.col-md-8,.col-md-9,.col-md-10,.col-md-11,.col-md-12,.col-md-auto,.col-sm,.col-sm-1,.col-sm-2,.col-sm-3,.col-sm-4,.col-sm-5,.col-sm-6,.col-sm-7,.col-sm-8,.col-sm-9,.col-sm-10,.col-sm-11,.col-sm-12,.col-sm-auto,.col-xl,.col-xl-1,.col-xl-2,.col-xl-3,.col-xl-4,.col-xl-5,.col-xl-6,.col-xl-7,.col-xl-8,.col-xl-9,.col-xl-10,.col-xl-11,.col-xl-12,.col-xl-auto{position:relative;width:100%;padding-right:15px;padding-left:15px}.col{flex-basis:0;flex-grow:1;min-width:0;max-width:100%}.row-cols-1>*{flex:0 0 100%;max-width:100%}.row-cols-2>*{flex:0 0 50%;max-width:50%}.row-cols-3>*{flex:0 0 33.33333%;max-width:33.33333%}.row-cols-4>*{flex:0 0 25%;max-width:25%}.row-cols-5>*{flex:0 0 20%;max-width:20%}.row-cols-6>*{flex:0 0 16.66667%;max-width:16.66667%}.col-auto{flex:0 0 auto;width:auto;max-width:100%}.col-1{flex:0 0 8.33333%;max-width:8.33333%}.col-2{flex:0 0 16.66667%;max-width:16.66667%}.col-3{flex:0 0 25%;max-width:25%}.col-4{flex:0 0 33.33333%;max-width:33.33333%}.col-5{flex:0 0 41.66667%;max-width:41.66667%}.col-6{flex:0 0 50%;max-width:50%}.col-7{flex:0 0 58.33333%;max-width:58.33333%}.col-8{flex:0 0 66.66667%;max-width:66.66667%}.col-9{flex:0 0 75%;max-width:75%}.col-10{flex:0 0 83.33333%;max-width:83.33333%}.col-11{flex:0 0 91.66667%;max-width:91.66667%}.col-12{flex:0 0 100%;max-width:100%}.order-first{order:-1}.order-last{order:13}.order-0{order:0}.order-1{order:1}.order-2{order:2}.order-3{order:3}.order-4{order:4}.order-5{order:5}.order-6{order:6}.order-7{order:7}.order-8{order:8}.order-9{order:9}.order-10{order:10}.order-11{order:11}.order-12{order:12}.offset-1{margin-left:8.33333%}.offset-2{margin-left:16.66667%}.offset-3{margin-left:25%}.offset-4{margin-left:33.33333%}.offset-5{margin-left:41.66667%}.offset-6{margin-left:50%}.offset-7{margin-left:58.33333%}.offset-8{margin-left:66.66667%}.offset-9{margin-left:75%}.offset-10{margin-left:83.33333%}.offset-11{margin-left:91.66667%}@media (min-width:540px){.col-sm{flex-basis:0;flex-grow:1;min-width:0;max-width:100%}.row-cols-sm-1>*{flex:0 0 100%;max-width:100%}.row-cols-sm-2>*{flex:0 0 50%;max-width:50%}.row-cols-sm-3>*{flex:0 0 33.33333%;max-width:33.33333%}.row-cols-sm-4>*{flex:0 0 25%;max-width:25%}.row-cols-sm-5>*{flex:0 0 20%;max-width:20%}.row-cols-sm-6>*{flex:0 0 16.66667%;max-width:16.66667%}.col-sm-auto{flex:0 0 auto;width:auto;max-width:100%}.col-sm-1{flex:0 0 8.33333%;max-width:8.33333%}.col-sm-2{flex:0 0 16.66667%;max-width:16.66667%}.col-sm-3{flex:0 0 25%;max-width:25%}.col-sm-4{flex:0 0 33.33333%;max-width:33.33333%}.col-sm-5{flex:0 0 41.66667%;max-width:41.66667%}.col-sm-6{flex:0 0 50%;max-width:50%}.col-sm-7{flex:0 0 58.33333%;max-width:58.33333%}.col-sm-8{flex:0 0 66.66667%;max-width:66.66667%}.col-sm-9{flex:0 0 75%;max-width:75%}.col-sm-10{flex:0 0 
83.33333%;max-width:83.33333%}.col-sm-11{flex:0 0 91.66667%;max-width:91.66667%}.col-sm-12{flex:0 0 100%;max-width:100%}.order-sm-first{order:-1}.order-sm-last{order:13}.order-sm-0{order:0}.order-sm-1{order:1}.order-sm-2{order:2}.order-sm-3{order:3}.order-sm-4{order:4}.order-sm-5{order:5}.order-sm-6{order:6}.order-sm-7{order:7}.order-sm-8{order:8}.order-sm-9{order:9}.order-sm-10{order:10}.order-sm-11{order:11}.order-sm-12{order:12}.offset-sm-0{margin-left:0}.offset-sm-1{margin-left:8.33333%}.offset-sm-2{margin-left:16.66667%}.offset-sm-3{margin-left:25%}.offset-sm-4{margin-left:33.33333%}.offset-sm-5{margin-left:41.66667%}.offset-sm-6{margin-left:50%}.offset-sm-7{margin-left:58.33333%}.offset-sm-8{margin-left:66.66667%}.offset-sm-9{margin-left:75%}.offset-sm-10{margin-left:83.33333%}.offset-sm-11{margin-left:91.66667%}}@media (min-width:720px){.col-md{flex-basis:0;flex-grow:1;min-width:0;max-width:100%}.row-cols-md-1>*{flex:0 0 100%;max-width:100%}.row-cols-md-2>*{flex:0 0 50%;max-width:50%}.row-cols-md-3>*{flex:0 0 33.33333%;max-width:33.33333%}.row-cols-md-4>*{flex:0 0 25%;max-width:25%}.row-cols-md-5>*{flex:0 0 20%;max-width:20%}.row-cols-md-6>*{flex:0 0 16.66667%;max-width:16.66667%}.col-md-auto{flex:0 0 auto;width:auto;max-width:100%}.col-md-1{flex:0 0 8.33333%;max-width:8.33333%}.col-md-2{flex:0 0 16.66667%;max-width:16.66667%}.col-md-3{flex:0 0 25%;max-width:25%}.col-md-4{flex:0 0 33.33333%;max-width:33.33333%}.col-md-5{flex:0 0 41.66667%;max-width:41.66667%}.col-md-6{flex:0 0 50%;max-width:50%}.col-md-7{flex:0 0 58.33333%;max-width:58.33333%}.col-md-8{flex:0 0 66.66667%;max-width:66.66667%}.col-md-9{flex:0 0 75%;max-width:75%}.col-md-10{flex:0 0 83.33333%;max-width:83.33333%}.col-md-11{flex:0 0 91.66667%;max-width:91.66667%}.col-md-12{flex:0 0 100%;max-width:100%}.order-md-first{order:-1}.order-md-last{order:13}.order-md-0{order:0}.order-md-1{order:1}.order-md-2{order:2}.order-md-3{order:3}.order-md-4{order:4}.order-md-5{order:5}.order-md-6{order:6}.order-md-7{order:7}.order-md-8{order:8}.order-md-9{order:9}.order-md-10{order:10}.order-md-11{order:11}.order-md-12{order:12}.offset-md-0{margin-left:0}.offset-md-1{margin-left:8.33333%}.offset-md-2{margin-left:16.66667%}.offset-md-3{margin-left:25%}.offset-md-4{margin-left:33.33333%}.offset-md-5{margin-left:41.66667%}.offset-md-6{margin-left:50%}.offset-md-7{margin-left:58.33333%}.offset-md-8{margin-left:66.66667%}.offset-md-9{margin-left:75%}.offset-md-10{margin-left:83.33333%}.offset-md-11{margin-left:91.66667%}}@media (min-width:960px){.col-lg{flex-basis:0;flex-grow:1;min-width:0;max-width:100%}.row-cols-lg-1>*{flex:0 0 100%;max-width:100%}.row-cols-lg-2>*{flex:0 0 50%;max-width:50%}.row-cols-lg-3>*{flex:0 0 33.33333%;max-width:33.33333%}.row-cols-lg-4>*{flex:0 0 25%;max-width:25%}.row-cols-lg-5>*{flex:0 0 20%;max-width:20%}.row-cols-lg-6>*{flex:0 0 16.66667%;max-width:16.66667%}.col-lg-auto{flex:0 0 auto;width:auto;max-width:100%}.col-lg-1{flex:0 0 8.33333%;max-width:8.33333%}.col-lg-2{flex:0 0 16.66667%;max-width:16.66667%}.col-lg-3{flex:0 0 25%;max-width:25%}.col-lg-4{flex:0 0 33.33333%;max-width:33.33333%}.col-lg-5{flex:0 0 41.66667%;max-width:41.66667%}.col-lg-6{flex:0 0 50%;max-width:50%}.col-lg-7{flex:0 0 58.33333%;max-width:58.33333%}.col-lg-8{flex:0 0 66.66667%;max-width:66.66667%}.col-lg-9{flex:0 0 75%;max-width:75%}.col-lg-10{flex:0 0 83.33333%;max-width:83.33333%}.col-lg-11{flex:0 0 91.66667%;max-width:91.66667%}.col-lg-12{flex:0 0 
100%;max-width:100%}.order-lg-first{order:-1}.order-lg-last{order:13}.order-lg-0{order:0}.order-lg-1{order:1}.order-lg-2{order:2}.order-lg-3{order:3}.order-lg-4{order:4}.order-lg-5{order:5}.order-lg-6{order:6}.order-lg-7{order:7}.order-lg-8{order:8}.order-lg-9{order:9}.order-lg-10{order:10}.order-lg-11{order:11}.order-lg-12{order:12}.offset-lg-0{margin-left:0}.offset-lg-1{margin-left:8.33333%}.offset-lg-2{margin-left:16.66667%}.offset-lg-3{margin-left:25%}.offset-lg-4{margin-left:33.33333%}.offset-lg-5{margin-left:41.66667%}.offset-lg-6{margin-left:50%}.offset-lg-7{margin-left:58.33333%}.offset-lg-8{margin-left:66.66667%}.offset-lg-9{margin-left:75%}.offset-lg-10{margin-left:83.33333%}.offset-lg-11{margin-left:91.66667%}}@media (min-width:1200px){.col-xl{flex-basis:0;flex-grow:1;min-width:0;max-width:100%}.row-cols-xl-1>*{flex:0 0 100%;max-width:100%}.row-cols-xl-2>*{flex:0 0 50%;max-width:50%}.row-cols-xl-3>*{flex:0 0 33.33333%;max-width:33.33333%}.row-cols-xl-4>*{flex:0 0 25%;max-width:25%}.row-cols-xl-5>*{flex:0 0 20%;max-width:20%}.row-cols-xl-6>*{flex:0 0 16.66667%;max-width:16.66667%}.col-xl-auto{flex:0 0 auto;width:auto;max-width:100%}.col-xl-1{flex:0 0 8.33333%;max-width:8.33333%}.col-xl-2{flex:0 0 16.66667%;max-width:16.66667%}.col-xl-3{flex:0 0 25%;max-width:25%}.col-xl-4{flex:0 0 33.33333%;max-width:33.33333%}.col-xl-5{flex:0 0 41.66667%;max-width:41.66667%}.col-xl-6{flex:0 0 50%;max-width:50%}.col-xl-7{flex:0 0 58.33333%;max-width:58.33333%}.col-xl-8{flex:0 0 66.66667%;max-width:66.66667%}.col-xl-9{flex:0 0 75%;max-width:75%}.col-xl-10{flex:0 0 83.33333%;max-width:83.33333%}.col-xl-11{flex:0 0 91.66667%;max-width:91.66667%}.col-xl-12{flex:0 0 100%;max-width:100%}.order-xl-first{order:-1}.order-xl-last{order:13}.order-xl-0{order:0}.order-xl-1{order:1}.order-xl-2{order:2}.order-xl-3{order:3}.order-xl-4{order:4}.order-xl-5{order:5}.order-xl-6{order:6}.order-xl-7{order:7}.order-xl-8{order:8}.order-xl-9{order:9}.order-xl-10{order:10}.order-xl-11{order:11}.order-xl-12{order:12}.offset-xl-0{margin-left:0}.offset-xl-1{margin-left:8.33333%}.offset-xl-2{margin-left:16.66667%}.offset-xl-3{margin-left:25%}.offset-xl-4{margin-left:33.33333%}.offset-xl-5{margin-left:41.66667%}.offset-xl-6{margin-left:50%}.offset-xl-7{margin-left:58.33333%}.offset-xl-8{margin-left:66.66667%}.offset-xl-9{margin-left:75%}.offset-xl-10{margin-left:83.33333%}.offset-xl-11{margin-left:91.66667%}}.table{width:100%;margin-bottom:1rem;color:#212529}.table td,.table th{padding:.75rem;vertical-align:top;border-top:1px solid #dee2e6}.table thead th{vertical-align:bottom;border-bottom:2px solid #dee2e6}.table tbody+tbody{border-top:2px solid #dee2e6}.table-sm td,.table-sm th{padding:.3rem}.table-bordered,.table-bordered td,.table-bordered th{border:1px solid #dee2e6}.table-bordered thead td,.table-bordered thead th{border-bottom-width:2px}.table-borderless tbody+tbody,.table-borderless td,.table-borderless th,.table-borderless thead th{border:0}.table-striped tbody tr:nth-of-type(odd){background-color:rgba(0,0,0,.05)}.table-hover tbody tr:hover{color:#212529;background-color:rgba(0,0,0,.075)}.table-primary,.table-primary>td,.table-primary>th{background-color:#b8daff}.table-primary tbody+tbody,.table-primary td,.table-primary th,.table-primary thead th{border-color:#7abaff}.table-hover .table-primary:hover,.table-hover .table-primary:hover>td,.table-hover .table-primary:hover>th{background-color:#9fcdff}.table-secondary,.table-secondary>td,.table-secondary>th{background-color:#d6d8db}.table-secondary 
tbody+tbody,.table-secondary td,.table-secondary th,.table-secondary thead th{border-color:#b3b7bb}.table-hover .table-secondary:hover,.table-hover .table-secondary:hover>td,.table-hover .table-secondary:hover>th{background-color:#c8cbcf}.table-success,.table-success>td,.table-success>th{background-color:#c3e6cb}.table-success tbody+tbody,.table-success td,.table-success th,.table-success thead th{border-color:#8fd19e}.table-hover .table-success:hover,.table-hover .table-success:hover>td,.table-hover .table-success:hover>th{background-color:#b1dfbb}.table-info,.table-info>td,.table-info>th{background-color:#bee5eb}.table-info tbody+tbody,.table-info td,.table-info th,.table-info thead th{border-color:#86cfda}.table-hover .table-info:hover,.table-hover .table-info:hover>td,.table-hover .table-info:hover>th{background-color:#abdde5}.table-warning,.table-warning>td,.table-warning>th{background-color:#ffeeba}.table-warning tbody+tbody,.table-warning td,.table-warning th,.table-warning thead th{border-color:#ffdf7e}.table-hover .table-warning:hover,.table-hover .table-warning:hover>td,.table-hover .table-warning:hover>th{background-color:#ffe8a1}.table-danger,.table-danger>td,.table-danger>th{background-color:#f5c6cb}.table-danger tbody+tbody,.table-danger td,.table-danger th,.table-danger thead th{border-color:#ed969e}.table-hover .table-danger:hover,.table-hover .table-danger:hover>td,.table-hover .table-danger:hover>th{background-color:#f1b0b7}.table-light,.table-light>td,.table-light>th{background-color:#fdfdfe}.table-light tbody+tbody,.table-light td,.table-light th,.table-light thead th{border-color:#fbfcfc}.table-hover .table-light:hover,.table-hover .table-light:hover>td,.table-hover .table-light:hover>th{background-color:#ececf6}.table-dark,.table-dark>td,.table-dark>th{background-color:#c6c8ca}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#95999c}.table-hover .table-dark:hover,.table-hover .table-dark:hover>td,.table-hover .table-dark:hover>th{background-color:#b9bbbe}.table-active,.table-active>td,.table-active>th,.table-hover .table-active:hover,.table-hover .table-active:hover>td,.table-hover .table-active:hover>th{background-color:rgba(0,0,0,.075)}.table .thead-dark th{color:#fff;background-color:#343a40;border-color:#454d55}.table .thead-light th{color:#495057;background-color:#e9ecef;border-color:#dee2e6}.table-dark{color:#fff;background-color:#343a40}.table-dark td,.table-dark th,.table-dark thead th{border-color:#454d55}.table-dark.table-bordered{border:0}.table-dark.table-striped tbody tr:nth-of-type(odd){background-color:hsla(0,0%,100%,.05)}.table-dark.table-hover tbody tr:hover{color:#fff;background-color:hsla(0,0%,100%,.075)}@media (max-width:539.98px){.table-responsive-sm{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-sm>.table-bordered{border:0}}@media (max-width:719.98px){.table-responsive-md{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-md>.table-bordered{border:0}}@media (max-width:959.98px){.table-responsive-lg{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-lg>.table-bordered{border:0}}@media 
(max-width:1199.98px){.table-responsive-xl{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-xl>.table-bordered{border:0}}.table-responsive{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive>.table-bordered{border:0}.form-control{display:block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;background-color:#fff;background-clip:padding-box;border:1px solid #ced4da;border-radius:.25rem;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control{transition:none}}.form-control::-ms-expand{background-color:transparent;border:0}.form-control:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.form-control:focus{color:#495057;background-color:#fff;border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.form-control::placeholder{color:#6c757d;opacity:1}.form-control:disabled,.form-control[readonly]{background-color:#e9ecef;opacity:1}input[type=date].form-control,input[type=datetime-local].form-control,input[type=month].form-control,input[type=time].form-control{appearance:none}select.form-control:focus::-ms-value{color:#495057;background-color:#fff}.form-control-file,.form-control-range{display:block;width:100%}.col-form-label{padding-top:calc(.375rem + 1px);padding-bottom:calc(.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(.5rem + 1px);padding-bottom:calc(.5rem + 1px);font-size:1.25rem;line-height:1.5}.col-form-label-sm{padding-top:calc(.25rem + 1px);padding-bottom:calc(.25rem + 1px);font-size:.875rem;line-height:1.5}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;font-size:1rem;line-height:1.5;color:#212529;background-color:transparent;border:solid transparent;border-width:1px 0}.form-control-plaintext.form-control-lg,.form-control-plaintext.form-control-sm{padding-right:0;padding-left:0}.form-control-sm{height:calc(1.5em + .5rem + 2px);padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.form-control-lg{height:calc(1.5em + 1rem + 2px);padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}select.form-control[multiple],select.form-control[size],textarea.form-control{height:auto}.form-group{margin-bottom:1rem}.form-text{display:block;margin-top:.25rem}.form-row{display:flex;flex-wrap:wrap;margin-right:-5px;margin-left:-5px}.form-row>.col,.form-row>[class*=col-]{padding-right:5px;padding-left:5px}.form-check{position:relative;display:block;padding-left:1.25rem}.form-check-input{position:absolute;margin-top:.3rem;margin-left:-1.25rem}.form-check-input:disabled~.form-check-label,.form-check-input[disabled]~.form-check-label{color:#6c757d}.form-check-label{margin-bottom:0}.form-check-inline{display:inline-flex;align-items:center;padding-left:0;margin-right:.75rem}.form-check-inline .form-check-input{position:static;margin-top:0;margin-right:.3125rem;margin-left:0}.valid-feedback{display:none;width:100%;margin-top:.25rem;font-size:80%;color:#28a745}.valid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;line-height:1.5;color:#fff;background-color:rgba(40,167,69,.9);border-radius:.25rem}.is-valid~.valid-feedback,.is-valid~.valid-tooltip,.was-validated :valid~.valid-feedback,.was-validated 
:valid~.valid-tooltip{display:block}.form-control.is-valid,.was-validated .form-control:valid{border-color:#28a745;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8'%3E%3Cpath fill='%2328a745' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3E%3C/svg%3E");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-valid:focus,.was-validated .form-control:valid:focus{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.was-validated textarea.form-control:valid,textarea.form-control.is-valid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.custom-select.is-valid,.was-validated .custom-select:valid{border-color:#28a745;padding-right:calc(.75em + 2.3125rem);background:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5'%3E%3Cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3E%3C/svg%3E") no-repeat right .75rem center/8px 10px,url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8'%3E%3Cpath fill='%2328a745' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3E%3C/svg%3E") #fff no-repeat center right 1.75rem/calc(.75em + .375rem) calc(.75em + .375rem)}.custom-select.is-valid:focus,.was-validated .custom-select:valid:focus{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.form-check-input.is-valid~.form-check-label,.was-validated .form-check-input:valid~.form-check-label{color:#28a745}.form-check-input.is-valid~.valid-feedback,.form-check-input.is-valid~.valid-tooltip,.was-validated .form-check-input:valid~.valid-feedback,.was-validated .form-check-input:valid~.valid-tooltip{display:block}.custom-control-input.is-valid~.custom-control-label,.was-validated .custom-control-input:valid~.custom-control-label{color:#28a745}.custom-control-input.is-valid~.custom-control-label:before,.was-validated .custom-control-input:valid~.custom-control-label:before{border-color:#28a745}.custom-control-input.is-valid:checked~.custom-control-label:before,.was-validated .custom-control-input:valid:checked~.custom-control-label:before{border-color:#34ce57;background-color:#34ce57}.custom-control-input.is-valid:focus~.custom-control-label:before,.was-validated .custom-control-input:valid:focus~.custom-control-label:before{box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.custom-control-input.is-valid:focus:not(:checked)~.custom-control-label:before,.custom-file-input.is-valid~.custom-file-label,.was-validated .custom-control-input:valid:focus:not(:checked)~.custom-control-label:before,.was-validated .custom-file-input:valid~.custom-file-label{border-color:#28a745}.custom-file-input.is-valid:focus~.custom-file-label,.was-validated .custom-file-input:valid:focus~.custom-file-label{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.invalid-feedback{display:none;width:100%;margin-top:.25rem;font-size:80%;color:#dc3545}.invalid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;line-height:1.5;color:#fff;background-color:rgba(220,53,69,.9);border-radius:.25rem}.is-invalid~.invalid-feedback,.is-invalid~.invalid-tooltip,.was-validated 
:invalid~.invalid-feedback,.was-validated :invalid~.invalid-tooltip{display:block}.form-control.is-invalid,.was-validated .form-control:invalid{border-color:#dc3545;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='none' stroke='%23dc3545'%3E%3Ccircle cx='6' cy='6' r='4.5'/%3E%3Cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3E%3Ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3E%3C/svg%3E");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-invalid:focus,.was-validated .form-control:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.was-validated textarea.form-control:invalid,textarea.form-control.is-invalid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.custom-select.is-invalid,.was-validated .custom-select:invalid{border-color:#dc3545;padding-right:calc(.75em + 2.3125rem);background:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5'%3E%3Cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3E%3C/svg%3E") no-repeat right .75rem center/8px 10px,url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='none' stroke='%23dc3545'%3E%3Ccircle cx='6' cy='6' r='4.5'/%3E%3Cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3E%3Ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3E%3C/svg%3E") #fff no-repeat center right 1.75rem/calc(.75em + .375rem) calc(.75em + .375rem)}.custom-select.is-invalid:focus,.was-validated .custom-select:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.form-check-input.is-invalid~.form-check-label,.was-validated .form-check-input:invalid~.form-check-label{color:#dc3545}.form-check-input.is-invalid~.invalid-feedback,.form-check-input.is-invalid~.invalid-tooltip,.was-validated .form-check-input:invalid~.invalid-feedback,.was-validated .form-check-input:invalid~.invalid-tooltip{display:block}.custom-control-input.is-invalid~.custom-control-label,.was-validated .custom-control-input:invalid~.custom-control-label{color:#dc3545}.custom-control-input.is-invalid~.custom-control-label:before,.was-validated .custom-control-input:invalid~.custom-control-label:before{border-color:#dc3545}.custom-control-input.is-invalid:checked~.custom-control-label:before,.was-validated .custom-control-input:invalid:checked~.custom-control-label:before{border-color:#e4606d;background-color:#e4606d}.custom-control-input.is-invalid:focus~.custom-control-label:before,.was-validated .custom-control-input:invalid:focus~.custom-control-label:before{box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.custom-control-input.is-invalid:focus:not(:checked)~.custom-control-label:before,.custom-file-input.is-invalid~.custom-file-label,.was-validated .custom-control-input:invalid:focus:not(:checked)~.custom-control-label:before,.was-validated .custom-file-input:invalid~.custom-file-label{border-color:#dc3545}.custom-file-input.is-invalid:focus~.custom-file-label,.was-validated .custom-file-input:invalid:focus~.custom-file-label{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.form-inline{display:flex;flex-flow:row wrap;align-items:center}.form-inline .form-check{width:100%}@media (min-width:540px){.form-inline 
label{justify-content:center}.form-inline .form-group,.form-inline label{display:flex;align-items:center;margin-bottom:0}.form-inline .form-group{flex:0 0 auto;flex-flow:row wrap}.form-inline .form-control{display:inline-block;width:auto;vertical-align:middle}.form-inline .form-control-plaintext{display:inline-block}.form-inline .custom-select,.form-inline .input-group{width:auto}.form-inline .form-check{display:flex;align-items:center;justify-content:center;width:auto;padding-left:0}.form-inline .form-check-input{position:relative;flex-shrink:0;margin-top:0;margin-right:.25rem;margin-left:0}.form-inline .custom-control{align-items:center;justify-content:center}.form-inline .custom-control-label{margin-bottom:0}}.btn{display:inline-block;font-weight:400;color:#212529;text-align:center;vertical-align:middle;user-select:none;background-color:transparent;border:1px solid transparent;padding:.375rem .75rem;font-size:1rem;line-height:1.5;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.btn{transition:none}}.btn:hover{color:#212529;text-decoration:none}.btn.focus,.btn:focus{outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.btn.disabled,.btn:disabled{opacity:.65}.btn:not(:disabled):not(.disabled){cursor:pointer}a.btn.disabled,fieldset:disabled a.btn{pointer-events:none}.btn-primary{color:#fff;background-color:#007bff;border-color:#007bff}.btn-primary.focus,.btn-primary:focus,.btn-primary:hover{color:#fff;background-color:#0069d9;border-color:#0062cc}.btn-primary.focus,.btn-primary:focus{box-shadow:0 0 0 .2rem rgba(38,143,255,.5)}.btn-primary.disabled,.btn-primary:disabled{color:#fff;background-color:#007bff;border-color:#007bff}.btn-primary:not(:disabled):not(.disabled).active,.btn-primary:not(:disabled):not(.disabled):active,.show>.btn-primary.dropdown-toggle{color:#fff;background-color:#0062cc;border-color:#005cbf}.btn-primary:not(:disabled):not(.disabled).active:focus,.btn-primary:not(:disabled):not(.disabled):active:focus,.show>.btn-primary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(38,143,255,.5)}.btn-secondary{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary.focus,.btn-secondary:focus,.btn-secondary:hover{color:#fff;background-color:#5a6268;border-color:#545b62}.btn-secondary.focus,.btn-secondary:focus{box-shadow:0 0 0 .2rem rgba(130,138,145,.5)}.btn-secondary.disabled,.btn-secondary:disabled{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:not(:disabled):not(.disabled).active,.btn-secondary:not(:disabled):not(.disabled):active,.show>.btn-secondary.dropdown-toggle{color:#fff;background-color:#545b62;border-color:#4e555b}.btn-secondary:not(:disabled):not(.disabled).active:focus,.btn-secondary:not(:disabled):not(.disabled):active:focus,.show>.btn-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(130,138,145,.5)}.btn-success{color:#fff;background-color:#28a745;border-color:#28a745}.btn-success.focus,.btn-success:focus,.btn-success:hover{color:#fff;background-color:#218838;border-color:#1e7e34}.btn-success.focus,.btn-success:focus{box-shadow:0 0 0 .2rem 
rgba(72,180,97,.5)}.btn-success.disabled,.btn-success:disabled{color:#fff;background-color:#28a745;border-color:#28a745}.btn-success:not(:disabled):not(.disabled).active,.btn-success:not(:disabled):not(.disabled):active,.show>.btn-success.dropdown-toggle{color:#fff;background-color:#1e7e34;border-color:#1c7430}.btn-success:not(:disabled):not(.disabled).active:focus,.btn-success:not(:disabled):not(.disabled):active:focus,.show>.btn-success.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(72,180,97,.5)}.btn-info{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-info.focus,.btn-info:focus,.btn-info:hover{color:#fff;background-color:#138496;border-color:#117a8b}.btn-info.focus,.btn-info:focus{box-shadow:0 0 0 .2rem rgba(58,176,195,.5)}.btn-info.disabled,.btn-info:disabled{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-info:not(:disabled):not(.disabled).active,.btn-info:not(:disabled):not(.disabled):active,.show>.btn-info.dropdown-toggle{color:#fff;background-color:#117a8b;border-color:#10707f}.btn-info:not(:disabled):not(.disabled).active:focus,.btn-info:not(:disabled):not(.disabled):active:focus,.show>.btn-info.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(58,176,195,.5)}.btn-warning{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-warning.focus,.btn-warning:focus,.btn-warning:hover{color:#212529;background-color:#e0a800;border-color:#d39e00}.btn-warning.focus,.btn-warning:focus{box-shadow:0 0 0 .2rem rgba(222,170,12,.5)}.btn-warning.disabled,.btn-warning:disabled{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-warning:not(:disabled):not(.disabled).active,.btn-warning:not(:disabled):not(.disabled):active,.show>.btn-warning.dropdown-toggle{color:#212529;background-color:#d39e00;border-color:#c69500}.btn-warning:not(:disabled):not(.disabled).active:focus,.btn-warning:not(:disabled):not(.disabled):active:focus,.show>.btn-warning.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(222,170,12,.5)}.btn-danger{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger.focus,.btn-danger:focus,.btn-danger:hover{color:#fff;background-color:#c82333;border-color:#bd2130}.btn-danger.focus,.btn-danger:focus{box-shadow:0 0 0 .2rem rgba(225,83,97,.5)}.btn-danger.disabled,.btn-danger:disabled{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:not(:disabled):not(.disabled).active,.btn-danger:not(:disabled):not(.disabled):active,.show>.btn-danger.dropdown-toggle{color:#fff;background-color:#bd2130;border-color:#b21f2d}.btn-danger:not(:disabled):not(.disabled).active:focus,.btn-danger:not(:disabled):not(.disabled):active:focus,.show>.btn-danger.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(225,83,97,.5)}.btn-light{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light.focus,.btn-light:focus,.btn-light:hover{color:#212529;background-color:#e2e6ea;border-color:#dae0e5}.btn-light.focus,.btn-light:focus{box-shadow:0 0 0 .2rem rgba(216,217,219,.5)}.btn-light.disabled,.btn-light:disabled{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:not(:disabled):not(.disabled).active,.btn-light:not(:disabled):not(.disabled):active,.show>.btn-light.dropdown-toggle{color:#212529;background-color:#dae0e5;border-color:#d3d9df}.btn-light:not(:disabled):not(.disabled).active:focus,.btn-light:not(:disabled):not(.disabled):active:focus,.show>.btn-light.dropdown-toggle:focus{box-shadow:0 0 0 .2rem 
rgba(216,217,219,.5)}.btn-dark{color:#fff;background-color:#343a40;border-color:#343a40}.btn-dark.focus,.btn-dark:focus,.btn-dark:hover{color:#fff;background-color:#23272b;border-color:#1d2124}.btn-dark.focus,.btn-dark:focus{box-shadow:0 0 0 .2rem rgba(82,88,93,.5)}.btn-dark.disabled,.btn-dark:disabled{color:#fff;background-color:#343a40;border-color:#343a40}.btn-dark:not(:disabled):not(.disabled).active,.btn-dark:not(:disabled):not(.disabled):active,.show>.btn-dark.dropdown-toggle{color:#fff;background-color:#1d2124;border-color:#171a1d}.btn-dark:not(:disabled):not(.disabled).active:focus,.btn-dark:not(:disabled):not(.disabled):active:focus,.show>.btn-dark.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(82,88,93,.5)}.btn-outline-primary{color:#007bff;border-color:#007bff}.btn-outline-primary:hover{color:#fff;background-color:#007bff;border-color:#007bff}.btn-outline-primary.focus,.btn-outline-primary:focus{box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.btn-outline-primary.disabled,.btn-outline-primary:disabled{color:#007bff;background-color:transparent}.btn-outline-primary:not(:disabled):not(.disabled).active,.btn-outline-primary:not(:disabled):not(.disabled):active,.show>.btn-outline-primary.dropdown-toggle{color:#fff;background-color:#007bff;border-color:#007bff}.btn-outline-primary:not(:disabled):not(.disabled).active:focus,.btn-outline-primary:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-primary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.btn-outline-secondary{color:#6c757d;border-color:#6c757d}.btn-outline-secondary:hover{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-outline-secondary.focus,.btn-outline-secondary:focus{box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.btn-outline-secondary.disabled,.btn-outline-secondary:disabled{color:#6c757d;background-color:transparent}.btn-outline-secondary:not(:disabled):not(.disabled).active,.btn-outline-secondary:not(:disabled):not(.disabled):active,.show>.btn-outline-secondary.dropdown-toggle{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-outline-secondary:not(:disabled):not(.disabled).active:focus,.btn-outline-secondary:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.btn-outline-success{color:#28a745;border-color:#28a745}.btn-outline-success:hover{color:#fff;background-color:#28a745;border-color:#28a745}.btn-outline-success.focus,.btn-outline-success:focus{box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.btn-outline-success.disabled,.btn-outline-success:disabled{color:#28a745;background-color:transparent}.btn-outline-success:not(:disabled):not(.disabled).active,.btn-outline-success:not(:disabled):not(.disabled):active,.show>.btn-outline-success.dropdown-toggle{color:#fff;background-color:#28a745;border-color:#28a745}.btn-outline-success:not(:disabled):not(.disabled).active:focus,.btn-outline-success:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-success.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.btn-outline-info{color:#17a2b8;border-color:#17a2b8}.btn-outline-info:hover{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-outline-info.focus,.btn-outline-info:focus{box-shadow:0 0 0 .2rem 
rgba(23,162,184,.5)}.btn-outline-info.disabled,.btn-outline-info:disabled{color:#17a2b8;background-color:transparent}.btn-outline-info:not(:disabled):not(.disabled).active,.btn-outline-info:not(:disabled):not(.disabled):active,.show>.btn-outline-info.dropdown-toggle{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-outline-info:not(:disabled):not(.disabled).active:focus,.btn-outline-info:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-info.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.btn-outline-warning{color:#ffc107;border-color:#ffc107}.btn-outline-warning:hover{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-outline-warning.focus,.btn-outline-warning:focus{box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.btn-outline-warning.disabled,.btn-outline-warning:disabled{color:#ffc107;background-color:transparent}.btn-outline-warning:not(:disabled):not(.disabled).active,.btn-outline-warning:not(:disabled):not(.disabled):active,.show>.btn-outline-warning.dropdown-toggle{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-outline-warning:not(:disabled):not(.disabled).active:focus,.btn-outline-warning:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-warning.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.btn-outline-danger{color:#dc3545;border-color:#dc3545}.btn-outline-danger:hover{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-outline-danger.focus,.btn-outline-danger:focus{box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.btn-outline-danger.disabled,.btn-outline-danger:disabled{color:#dc3545;background-color:transparent}.btn-outline-danger:not(:disabled):not(.disabled).active,.btn-outline-danger:not(:disabled):not(.disabled):active,.show>.btn-outline-danger.dropdown-toggle{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-outline-danger:not(:disabled):not(.disabled).active:focus,.btn-outline-danger:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-danger.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.btn-outline-light{color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:hover{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light.focus,.btn-outline-light:focus{box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.btn-outline-light.disabled,.btn-outline-light:disabled{color:#f8f9fa;background-color:transparent}.btn-outline-light:not(:disabled):not(.disabled).active,.btn-outline-light:not(:disabled):not(.disabled):active,.show>.btn-outline-light.dropdown-toggle{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:not(:disabled):not(.disabled).active:focus,.btn-outline-light:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-light.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.btn-outline-dark{color:#343a40;border-color:#343a40}.btn-outline-dark:hover{color:#fff;background-color:#343a40;border-color:#343a40}.btn-outline-dark.focus,.btn-outline-dark:focus{box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.btn-outline-dark.disabled,.btn-outline-dark:disabled{color:#343a40;background-color:transparent}.btn-outline-dark:not(:disabled):not(.disabled).active,.btn-outline-dark:not(:disabled):not(.disabled):active,.show>.btn-outline-dark.dropdown-toggle{color:#fff;background-color:#343a40;border-color:#343a40}.btn-outline-dark:not(:disabled):not(.disabled).active:focus,.btn-outline-dark:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-dark.dropdown-toggle:focus{box-shadow:0 0 0 .2rem 
rgba(52,58,64,.5)}.btn-link{font-weight:400;color:#007bff;text-decoration:none}.btn-link:hover{color:#0056b3}.btn-link.focus,.btn-link:focus,.btn-link:hover{text-decoration:underline}.btn-link.disabled,.btn-link:disabled{color:#6c757d;pointer-events:none}.btn-group-lg>.btn,.btn-lg{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.btn-group-sm>.btn,.btn-sm{padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.btn-block{display:block;width:100%}.btn-block+.btn-block{margin-top:.5rem}input[type=button].btn-block,input[type=reset].btn-block,input[type=submit].btn-block{width:100%}.fade{transition:opacity .15s linear}@media (prefers-reduced-motion:reduce){.fade{transition:none}}.fade:not(.show){opacity:0}.collapse:not(.show){display:none}.collapsing{position:relative;height:0;overflow:hidden;transition:height .35s ease}@media (prefers-reduced-motion:reduce){.collapsing{transition:none}}.dropdown,.dropleft,.dropright,.dropup{position:relative}.dropdown-toggle{white-space:nowrap}.dropdown-toggle:after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid;border-right:.3em solid transparent;border-bottom:0;border-left:.3em solid transparent}.dropdown-toggle:empty:after{margin-left:0}.dropdown-menu{position:absolute;top:100%;left:0;z-index:1000;display:none;float:left;min-width:10rem;padding:.5rem 0;margin:.125rem 0 0;font-size:1rem;color:#212529;text-align:left;list-style:none;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.15);border-radius:.25rem}.dropdown-menu-left{right:auto;left:0}.dropdown-menu-right{right:0;left:auto}@media (min-width:540px){.dropdown-menu-sm-left{right:auto;left:0}.dropdown-menu-sm-right{right:0;left:auto}}@media (min-width:720px){.dropdown-menu-md-left{right:auto;left:0}.dropdown-menu-md-right{right:0;left:auto}}@media (min-width:960px){.dropdown-menu-lg-left{right:auto;left:0}.dropdown-menu-lg-right{right:0;left:auto}}@media (min-width:1200px){.dropdown-menu-xl-left{right:auto;left:0}.dropdown-menu-xl-right{right:0;left:auto}}.dropup .dropdown-menu{top:auto;bottom:100%;margin-top:0;margin-bottom:.125rem}.dropup .dropdown-toggle:after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:0;border-right:.3em solid transparent;border-bottom:.3em solid;border-left:.3em solid transparent}.dropup .dropdown-toggle:empty:after{margin-left:0}.dropright .dropdown-menu{top:0;right:auto;left:100%;margin-top:0;margin-left:.125rem}.dropright .dropdown-toggle:after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:0;border-bottom:.3em solid transparent;border-left:.3em solid}.dropright .dropdown-toggle:empty:after{margin-left:0}.dropright .dropdown-toggle:after{vertical-align:0}.dropleft .dropdown-menu{top:0;right:100%;left:auto;margin-top:0;margin-right:.125rem}.dropleft .dropdown-toggle:after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";display:none}.dropleft .dropdown-toggle:before{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:.3em solid;border-bottom:.3em solid transparent}.dropleft .dropdown-toggle:empty:after{margin-left:0}.dropleft 
.dropdown-toggle:before{vertical-align:0}.dropdown-menu[x-placement^=bottom],.dropdown-menu[x-placement^=left],.dropdown-menu[x-placement^=right],.dropdown-menu[x-placement^=top]{right:auto;bottom:auto}.dropdown-divider{height:0;margin:.5rem 0;overflow:hidden;border-top:1px solid #e9ecef}.dropdown-item{display:block;width:100%;padding:.25rem 1.5rem;clear:both;font-weight:400;color:#212529;text-align:inherit;white-space:nowrap;background-color:transparent;border:0}.dropdown-item:focus,.dropdown-item:hover{color:#16181b;text-decoration:none;background-color:#f8f9fa}.dropdown-item.active,.dropdown-item:active{color:#fff;text-decoration:none;background-color:#007bff}.dropdown-item.disabled,.dropdown-item:disabled{color:#6c757d;pointer-events:none;background-color:transparent}.dropdown-menu.show{display:block}.dropdown-header{display:block;padding:.5rem 1.5rem;margin-bottom:0;font-size:.875rem;color:#6c757d;white-space:nowrap}.dropdown-item-text{display:block;padding:.25rem 1.5rem;color:#212529}.btn-group,.btn-group-vertical{position:relative;display:inline-flex;vertical-align:middle}.btn-group-vertical>.btn,.btn-group>.btn{position:relative;flex:1 1 auto}.btn-group-vertical>.btn.active,.btn-group-vertical>.btn:active,.btn-group-vertical>.btn:focus,.btn-group-vertical>.btn:hover,.btn-group>.btn.active,.btn-group>.btn:active,.btn-group>.btn:focus,.btn-group>.btn:hover{z-index:1}.btn-toolbar{display:flex;flex-wrap:wrap;justify-content:flex-start}.btn-toolbar .input-group{width:auto}.btn-group>.btn-group:not(:first-child),.btn-group>.btn:not(:first-child){margin-left:-1px}.btn-group>.btn-group:not(:last-child)>.btn,.btn-group>.btn:not(:last-child):not(.dropdown-toggle){border-top-right-radius:0;border-bottom-right-radius:0}.btn-group>.btn-group:not(:first-child)>.btn,.btn-group>.btn:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.dropdown-toggle-split{padding-right:.5625rem;padding-left:.5625rem}.dropdown-toggle-split:after,.dropright .dropdown-toggle-split:after,.dropup .dropdown-toggle-split:after{margin-left:0}.dropleft .dropdown-toggle-split:before{margin-right:0}.btn-group-sm>.btn+.dropdown-toggle-split,.btn-sm+.dropdown-toggle-split{padding-right:.375rem;padding-left:.375rem}.btn-group-lg>.btn+.dropdown-toggle-split,.btn-lg+.dropdown-toggle-split{padding-right:.75rem;padding-left:.75rem}.btn-group-vertical{flex-direction:column;align-items:flex-start;justify-content:center}.btn-group-vertical>.btn,.btn-group-vertical>.btn-group{width:100%}.btn-group-vertical>.btn-group:not(:first-child),.btn-group-vertical>.btn:not(:first-child){margin-top:-1px}.btn-group-vertical>.btn-group:not(:last-child)>.btn,.btn-group-vertical>.btn:not(:last-child):not(.dropdown-toggle){border-bottom-right-radius:0;border-bottom-left-radius:0}.btn-group-vertical>.btn-group:not(:first-child)>.btn,.btn-group-vertical>.btn:not(:first-child){border-top-left-radius:0;border-top-right-radius:0}.btn-group-toggle>.btn,.btn-group-toggle>.btn-group>.btn{margin-bottom:0}.btn-group-toggle>.btn-group>.btn input[type=checkbox],.btn-group-toggle>.btn-group>.btn input[type=radio],.btn-group-toggle>.btn input[type=checkbox],.btn-group-toggle>.btn input[type=radio]{position:absolute;clip:rect(0,0,0,0);pointer-events:none}.input-group{position:relative;display:flex;flex-wrap:wrap;align-items:stretch;width:100%}.input-group>.custom-file,.input-group>.custom-select,.input-group>.form-control,.input-group>.form-control-plaintext{position:relative;flex:1 1 
auto;width:1%;min-width:0;margin-bottom:0}.input-group>.custom-file+.custom-file,.input-group>.custom-file+.custom-select,.input-group>.custom-file+.form-control,.input-group>.custom-select+.custom-file,.input-group>.custom-select+.custom-select,.input-group>.custom-select+.form-control,.input-group>.form-control+.custom-file,.input-group>.form-control+.custom-select,.input-group>.form-control+.form-control,.input-group>.form-control-plaintext+.custom-file,.input-group>.form-control-plaintext+.custom-select,.input-group>.form-control-plaintext+.form-control{margin-left:-1px}.input-group>.custom-file .custom-file-input:focus~.custom-file-label,.input-group>.custom-select:focus,.input-group>.form-control:focus{z-index:3}.input-group>.custom-file .custom-file-input:focus{z-index:4}.input-group>.custom-select:not(:last-child),.input-group>.form-control:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-select:not(:first-child),.input-group>.form-control:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.input-group>.custom-file{display:flex;align-items:center}.input-group>.custom-file:not(:last-child) .custom-file-label,.input-group>.custom-file:not(:last-child) .custom-file-label:after{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-file:not(:first-child) .custom-file-label{border-top-left-radius:0;border-bottom-left-radius:0}.input-group-append,.input-group-prepend{display:flex}.input-group-append .btn,.input-group-prepend .btn{position:relative;z-index:2}.input-group-append .btn:focus,.input-group-prepend .btn:focus{z-index:3}.input-group-append .btn+.btn,.input-group-append .btn+.input-group-text,.input-group-append .input-group-text+.btn,.input-group-append .input-group-text+.input-group-text,.input-group-prepend .btn+.btn,.input-group-prepend .btn+.input-group-text,.input-group-prepend .input-group-text+.btn,.input-group-prepend .input-group-text+.input-group-text{margin-left:-1px}.input-group-prepend{margin-right:-1px}.input-group-append{margin-left:-1px}.input-group-text{display:flex;align-items:center;padding:.375rem .75rem;margin-bottom:0;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;text-align:center;white-space:nowrap;background-color:#e9ecef;border:1px solid #ced4da;border-radius:.25rem}.input-group-text input[type=checkbox],.input-group-text input[type=radio]{margin-top:0}.input-group-lg>.custom-select,.input-group-lg>.form-control:not(textarea){height:calc(1.5em + 1rem + 2px)}.input-group-lg>.custom-select,.input-group-lg>.form-control,.input-group-lg>.input-group-append>.btn,.input-group-lg>.input-group-append>.input-group-text,.input-group-lg>.input-group-prepend>.btn,.input-group-lg>.input-group-prepend>.input-group-text{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.input-group-sm>.custom-select,.input-group-sm>.form-control:not(textarea){height:calc(1.5em + .5rem + 2px)}.input-group-sm>.custom-select,.input-group-sm>.form-control,.input-group-sm>.input-group-append>.btn,.input-group-sm>.input-group-append>.input-group-text,.input-group-sm>.input-group-prepend>.btn,.input-group-sm>.input-group-prepend>.input-group-text{padding:.25rem 
.5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.input-group-lg>.custom-select,.input-group-sm>.custom-select{padding-right:1.75rem}.input-group>.input-group-append:last-child>.btn:not(:last-child):not(.dropdown-toggle),.input-group>.input-group-append:last-child>.input-group-text:not(:last-child),.input-group>.input-group-append:not(:last-child)>.btn,.input-group>.input-group-append:not(:last-child)>.input-group-text,.input-group>.input-group-prepend>.btn,.input-group>.input-group-prepend>.input-group-text{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.input-group-append>.btn,.input-group>.input-group-append>.input-group-text,.input-group>.input-group-prepend:first-child>.btn:not(:first-child),.input-group>.input-group-prepend:first-child>.input-group-text:not(:first-child),.input-group>.input-group-prepend:not(:first-child)>.btn,.input-group>.input-group-prepend:not(:first-child)>.input-group-text{border-top-left-radius:0;border-bottom-left-radius:0}.custom-control{position:relative;display:block;min-height:1.5rem;padding-left:1.5rem}.custom-control-inline{display:inline-flex;margin-right:1rem}.custom-control-input{position:absolute;left:0;z-index:-1;width:1rem;height:1.25rem;opacity:0}.custom-control-input:checked~.custom-control-label:before{color:#fff;border-color:#007bff;background-color:#007bff}.custom-control-input:focus~.custom-control-label:before{box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-control-input:focus:not(:checked)~.custom-control-label:before{border-color:#80bdff}.custom-control-input:not(:disabled):active~.custom-control-label:before{color:#fff;background-color:#b3d7ff;border-color:#b3d7ff}.custom-control-input:disabled~.custom-control-label,.custom-control-input[disabled]~.custom-control-label{color:#6c757d}.custom-control-input:disabled~.custom-control-label:before,.custom-control-input[disabled]~.custom-control-label:before{background-color:#e9ecef}.custom-control-label{position:relative;margin-bottom:0;vertical-align:top}.custom-control-label:before{pointer-events:none;background-color:#fff;border:1px solid #adb5bd}.custom-control-label:after,.custom-control-label:before{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;content:""}.custom-control-label:after{background:no-repeat 50%/50% 50%}.custom-checkbox .custom-control-label:before{border-radius:.25rem}.custom-checkbox .custom-control-input:checked~.custom-control-label:after{background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8'%3E%3Cpath fill='%23fff' d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26l2.974 2.99L8 2.193z'/%3E%3C/svg%3E")}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label:before{border-color:#007bff;background-color:#007bff}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label:after{background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='4' height='4'%3E%3Cpath stroke='%23fff' d='M0 2h4'/%3E%3C/svg%3E")}.custom-checkbox .custom-control-input:disabled:checked~.custom-control-label:before{background-color:rgba(0,123,255,.5)}.custom-checkbox .custom-control-input:disabled:indeterminate~.custom-control-label:before{background-color:rgba(0,123,255,.5)}.custom-radio .custom-control-label:before{border-radius:50%}.custom-radio .custom-control-input:checked~.custom-control-label:after{background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' 
width='12' height='12' viewBox='-4 -4 8 8'%3E%3Ccircle r='3' fill='%23fff'/%3E%3C/svg%3E")}.custom-radio .custom-control-input:disabled:checked~.custom-control-label:before{background-color:rgba(0,123,255,.5)}.custom-switch{padding-left:2.25rem}.custom-switch .custom-control-label:before{left:-2.25rem;width:1.75rem;pointer-events:all;border-radius:.5rem}.custom-switch .custom-control-label:after{top:calc(.25rem + 2px);left:calc(-2.25rem + 2px);width:calc(1rem - 4px);height:calc(1rem - 4px);background-color:#adb5bd;border-radius:.5rem;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-switch .custom-control-label:after{transition:none}}.custom-switch .custom-control-input:checked~.custom-control-label:after{background-color:#fff;transform:translateX(.75rem)}.custom-switch .custom-control-input:disabled:checked~.custom-control-label:before{background-color:rgba(0,123,255,.5)}.custom-select{display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem 1.75rem .375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;vertical-align:middle;background:#fff url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5'%3E%3Cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3E%3C/svg%3E") no-repeat right .75rem center/8px 10px;border:1px solid #ced4da;border-radius:.25rem;appearance:none}.custom-select:focus{border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-select:focus::-ms-value{color:#495057;background-color:#fff}.custom-select[multiple],.custom-select[size]:not([size="1"]){height:auto;padding-right:.75rem;background-image:none}.custom-select:disabled{color:#6c757d;background-color:#e9ecef}.custom-select::-ms-expand{display:none}.custom-select:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.custom-select-sm{height:calc(1.5em + .5rem + 2px);padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:.875rem}.custom-select-lg{height:calc(1.5em + 1rem + 2px);padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem}.custom-file{display:inline-block;margin-bottom:0}.custom-file,.custom-file-input{position:relative;width:100%;height:calc(1.5em + .75rem + 2px)}.custom-file-input{z-index:2;margin:0;opacity:0}.custom-file-input:focus~.custom-file-label{border-color:#80bdff;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-file-input:disabled~.custom-file-label,.custom-file-input[disabled]~.custom-file-label{background-color:#e9ecef}.custom-file-input:lang(en)~.custom-file-label:after{content:"Browse"}.custom-file-input~.custom-file-label[data-browse]:after{content:attr(data-browse)}.custom-file-label{left:0;z-index:1;height:calc(1.5em + .75rem + 2px);font-weight:400;background-color:#fff;border:1px solid #ced4da;border-radius:.25rem}.custom-file-label,.custom-file-label:after{position:absolute;top:0;right:0;padding:.375rem .75rem;line-height:1.5;color:#495057}.custom-file-label:after{bottom:0;z-index:3;display:block;height:calc(1.5em + .75rem);content:"Browse";background-color:#e9ecef;border-left:inherit;border-radius:0 .25rem .25rem 0}.custom-range{width:100%;height:1.4rem;padding:0;background-color:transparent;appearance:none}.custom-range:focus{outline:none}.custom-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem 
rgba(0,123,255,.25)}.custom-range:focus::-ms-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range::-moz-focus-outer{border:0}.custom-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-.25rem;background-color:#007bff;border:0;border-radius:1rem;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-webkit-slider-thumb{transition:none}}.custom-range::-webkit-slider-thumb:active{background-color:#b3d7ff}.custom-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-moz-range-thumb{width:1rem;height:1rem;background-color:#007bff;border:0;border-radius:1rem;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-moz-range-thumb{transition:none}}.custom-range::-moz-range-thumb:active{background-color:#b3d7ff}.custom-range::-moz-range-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-ms-thumb{width:1rem;height:1rem;margin-top:0;margin-right:.2rem;margin-left:.2rem;background-color:#007bff;border:0;border-radius:1rem;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-ms-thumb{transition:none}}.custom-range::-ms-thumb:active{background-color:#b3d7ff}.custom-range::-ms-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:transparent;border-color:transparent;border-width:.5rem}.custom-range::-ms-fill-lower,.custom-range::-ms-fill-upper{background-color:#dee2e6;border-radius:1rem}.custom-range::-ms-fill-upper{margin-right:15px}.custom-range:disabled::-webkit-slider-thumb{background-color:#adb5bd}.custom-range:disabled::-webkit-slider-runnable-track{cursor:default}.custom-range:disabled::-moz-range-thumb{background-color:#adb5bd}.custom-range:disabled::-moz-range-track{cursor:default}.custom-range:disabled::-ms-thumb{background-color:#adb5bd}.custom-control-label:before,.custom-file-label,.custom-select{transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-control-label:before,.custom-file-label,.custom-select{transition:none}}.nav{display:flex;flex-wrap:wrap;padding-left:0;margin-bottom:0;list-style:none}.nav-link{display:block;padding:.5rem 1rem}.nav-link:focus,.nav-link:hover{text-decoration:none}.nav-link.disabled{color:#6c757d;pointer-events:none;cursor:default}.nav-tabs{border-bottom:1px solid #dee2e6}.nav-tabs .nav-item{margin-bottom:-1px}.nav-tabs .nav-link{border:1px solid transparent;border-top-left-radius:.25rem;border-top-right-radius:.25rem}.nav-tabs .nav-link:focus,.nav-tabs .nav-link:hover{border-color:#e9ecef #e9ecef #dee2e6}.nav-tabs .nav-link.disabled{color:#6c757d;background-color:transparent;border-color:transparent}.nav-tabs .nav-item.show .nav-link,.nav-tabs .nav-link.active{color:#495057;background-color:#fff;border-color:#dee2e6 #dee2e6 #fff}.nav-tabs .dropdown-menu{margin-top:-1px;border-top-left-radius:0;border-top-right-radius:0}.nav-pills .nav-link{border-radius:.25rem}.nav-pills .nav-link.active,.nav-pills 
.show>.nav-link{color:#fff;background-color:#007bff}.nav-fill .nav-item{flex:1 1 auto;text-align:center}.nav-justified .nav-item{flex-basis:0;flex-grow:1;text-align:center}.tab-content>.tab-pane{display:none}.tab-content>.active{display:block}.navbar{position:relative;padding:.5rem 1rem}.navbar,.navbar .container,.navbar .container-fluid,.navbar .container-lg,.navbar .container-md,.navbar .container-sm,.navbar .container-xl{display:flex;flex-wrap:wrap;align-items:center;justify-content:space-between}.navbar-brand{display:inline-block;padding-top:.3125rem;padding-bottom:.3125rem;margin-right:1rem;font-size:1.25rem;line-height:inherit;white-space:nowrap}.navbar-brand:focus,.navbar-brand:hover{text-decoration:none}.navbar-nav{display:flex;flex-direction:column;padding-left:0;margin-bottom:0;list-style:none}.navbar-nav .nav-link{padding-right:0;padding-left:0}.navbar-nav .dropdown-menu{position:static;float:none}.navbar-text{display:inline-block;padding-top:.5rem;padding-bottom:.5rem}.navbar-collapse{flex-basis:100%;flex-grow:1;align-items:center}.navbar-toggler{padding:.25rem .75rem;font-size:1.25rem;line-height:1;background-color:transparent;border:1px solid transparent;border-radius:.25rem}.navbar-toggler:focus,.navbar-toggler:hover{text-decoration:none}.navbar-toggler-icon{display:inline-block;width:1.5em;height:1.5em;vertical-align:middle;content:"";background:no-repeat 50%;background-size:100% 100%}@media (max-width:539.98px){.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{padding-right:0;padding-left:0}}@media (min-width:540px){.navbar-expand-sm{flex-flow:row nowrap;justify-content:flex-start}.navbar-expand-sm .navbar-nav{flex-direction:row}.navbar-expand-sm .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-sm .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{flex-wrap:nowrap}.navbar-expand-sm .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-sm .navbar-toggler{display:none}}@media (max-width:719.98px){.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{padding-right:0;padding-left:0}}@media (min-width:720px){.navbar-expand-md{flex-flow:row nowrap;justify-content:flex-start}.navbar-expand-md .navbar-nav{flex-direction:row}.navbar-expand-md .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-md .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{flex-wrap:nowrap}.navbar-expand-md .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-md .navbar-toggler{display:none}}@media (max-width:959.98px){.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{padding-right:0;padding-left:0}}@media (min-width:960px){.navbar-expand-lg{flex-flow:row nowrap;justify-content:flex-start}.navbar-expand-lg 
.navbar-nav{flex-direction:row}.navbar-expand-lg .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-lg .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{flex-wrap:nowrap}.navbar-expand-lg .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-lg .navbar-toggler{display:none}}@media (max-width:1199.98px){.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{padding-right:0;padding-left:0}}@media (min-width:1200px){.navbar-expand-xl{flex-flow:row nowrap;justify-content:flex-start}.navbar-expand-xl .navbar-nav{flex-direction:row}.navbar-expand-xl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xl .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{flex-wrap:nowrap}.navbar-expand-xl .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-xl .navbar-toggler{display:none}}.navbar-expand{flex-flow:row nowrap;justify-content:flex-start}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{padding-right:0;padding-left:0}.navbar-expand .navbar-nav{flex-direction:row}.navbar-expand .navbar-nav .dropdown-menu{position:absolute}.navbar-expand .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{flex-wrap:nowrap}.navbar-expand .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand .navbar-toggler{display:none}.navbar-light .navbar-brand,.navbar-light .navbar-brand:focus,.navbar-light .navbar-brand:hover{color:rgba(0,0,0,.9)}.navbar-light .navbar-nav .nav-link{color:rgba(0,0,0,.5)}.navbar-light .navbar-nav .nav-link:focus,.navbar-light .navbar-nav .nav-link:hover{color:rgba(0,0,0,.7)}.navbar-light .navbar-nav .nav-link.disabled{color:rgba(0,0,0,.3)}.navbar-light .navbar-nav .active>.nav-link,.navbar-light .navbar-nav .nav-link.active,.navbar-light .navbar-nav .nav-link.show,.navbar-light .navbar-nav .show>.nav-link{color:rgba(0,0,0,.9)}.navbar-light .navbar-toggler{color:rgba(0,0,0,.5);border-color:rgba(0,0,0,.1)}.navbar-light .navbar-toggler-icon{background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30'%3E%3Cpath stroke='rgba(0,0,0,0.5)' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3E%3C/svg%3E")}.navbar-light .navbar-text{color:rgba(0,0,0,.5)}.navbar-light .navbar-text a,.navbar-light .navbar-text a:focus,.navbar-light .navbar-text a:hover{color:rgba(0,0,0,.9)}.navbar-dark .navbar-brand,.navbar-dark .navbar-brand:focus,.navbar-dark .navbar-brand:hover{color:#fff}.navbar-dark .navbar-nav .nav-link{color:hsla(0,0%,100%,.5)}.navbar-dark .navbar-nav .nav-link:focus,.navbar-dark .navbar-nav .nav-link:hover{color:hsla(0,0%,100%,.75)}.navbar-dark .navbar-nav 
.nav-link.disabled{color:hsla(0,0%,100%,.25)}.navbar-dark .navbar-nav .active>.nav-link,.navbar-dark .navbar-nav .nav-link.active,.navbar-dark .navbar-nav .nav-link.show,.navbar-dark .navbar-nav .show>.nav-link{color:#fff}.navbar-dark .navbar-toggler{color:hsla(0,0%,100%,.5);border-color:hsla(0,0%,100%,.1)}.navbar-dark .navbar-toggler-icon{background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30'%3E%3Cpath stroke='rgba(255,255,255,0.5)' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3E%3C/svg%3E")}.navbar-dark .navbar-text{color:hsla(0,0%,100%,.5)}.navbar-dark .navbar-text a,.navbar-dark .navbar-text a:focus,.navbar-dark .navbar-text a:hover{color:#fff}.card{position:relative;display:flex;flex-direction:column;min-width:0;word-wrap:break-word;background-color:#fff;background-clip:border-box;border:1px solid rgba(0,0,0,.125);border-radius:.25rem}.card>hr{margin-right:0;margin-left:0}.card>.list-group{border-top:inherit;border-bottom:inherit}.card>.list-group:first-child{border-top-width:0;border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card>.list-group:last-child{border-bottom-width:0;border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-body{flex:1 1 auto;min-height:1px;padding:1.25rem}.card-title{margin-bottom:.75rem}.card-subtitle{margin-top:-.375rem}.card-subtitle,.card-text:last-child{margin-bottom:0}.card-link:hover{text-decoration:none}.card-link+.card-link{margin-left:1.25rem}.card-header{padding:.75rem 1.25rem;margin-bottom:0;background-color:rgba(0,0,0,.03);border-bottom:1px solid rgba(0,0,0,.125)}.card-header:first-child{border-radius:calc(.25rem - 1px) calc(.25rem - 1px) 0 0}.card-header+.list-group .list-group-item:first-child{border-top:0}.card-footer{padding:.75rem 1.25rem;background-color:rgba(0,0,0,.03);border-top:1px solid rgba(0,0,0,.125)}.card-footer:last-child{border-radius:0 0 calc(.25rem - 1px) calc(.25rem - 1px)}.card-header-tabs{margin-bottom:-.75rem;border-bottom:0}.card-header-pills,.card-header-tabs{margin-right:-.625rem;margin-left:-.625rem}.card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1.25rem}.card-img,.card-img-bottom,.card-img-top{flex-shrink:0;width:100%}.card-img,.card-img-top{border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom{border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-deck .card{margin-bottom:15px}@media (min-width:540px){.card-deck{display:flex;flex-flow:row wrap;margin-right:-15px;margin-left:-15px}.card-deck .card{flex:1 0 0%;margin-right:15px;margin-bottom:0;margin-left:15px}}.card-group>.card{margin-bottom:15px}@media (min-width:540px){.card-group{display:flex;flex-flow:row wrap}.card-group>.card{flex:1 0 0%;margin-bottom:0}.card-group>.card+.card{margin-left:0;border-left:0}.card-group>.card:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.card-group>.card:not(:last-child) .card-header,.card-group>.card:not(:last-child) .card-img-top{border-top-right-radius:0}.card-group>.card:not(:last-child) .card-footer,.card-group>.card:not(:last-child) .card-img-bottom{border-bottom-right-radius:0}.card-group>.card:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.card-group>.card:not(:first-child) .card-header,.card-group>.card:not(:first-child) 
.card-img-top{border-top-left-radius:0}.card-group>.card:not(:first-child) .card-footer,.card-group>.card:not(:first-child) .card-img-bottom{border-bottom-left-radius:0}}.card-columns .card{margin-bottom:.75rem}@media (min-width:540px){.card-columns{column-count:3;column-gap:1.25rem;orphans:1;widows:1}.card-columns .card{display:inline-block;width:100%}}.accordion>.card{overflow:hidden}.accordion>.card:not(:last-of-type){border-bottom:0;border-bottom-right-radius:0;border-bottom-left-radius:0}.accordion>.card:not(:first-of-type){border-top-left-radius:0;border-top-right-radius:0}.accordion>.card>.card-header{border-radius:0;margin-bottom:-1px}.breadcrumb{flex-wrap:wrap;padding:.75rem 1rem;margin-bottom:1rem;list-style:none;background-color:#e9ecef;border-radius:.25rem}.breadcrumb,.breadcrumb-item{display:flex}.breadcrumb-item+.breadcrumb-item{padding-left:.5rem}.breadcrumb-item+.breadcrumb-item:before{display:inline-block;padding-right:.5rem;color:#6c757d;content:"/"}.breadcrumb-item+.breadcrumb-item:hover:before{text-decoration:underline;text-decoration:none}.breadcrumb-item.active{color:#6c757d}.pagination{display:flex;padding-left:0;list-style:none;border-radius:.25rem}.page-link{position:relative;display:block;padding:.5rem .75rem;margin-left:-1px;line-height:1.25;color:#007bff;background-color:#fff;border:1px solid #dee2e6}.page-link:hover{z-index:2;color:#0056b3;text-decoration:none;background-color:#e9ecef;border-color:#dee2e6}.page-link:focus{z-index:3;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.page-item:first-child .page-link{margin-left:0;border-top-left-radius:.25rem;border-bottom-left-radius:.25rem}.page-item:last-child .page-link{border-top-right-radius:.25rem;border-bottom-right-radius:.25rem}.page-item.active .page-link{z-index:3;color:#fff;background-color:#007bff;border-color:#007bff}.page-item.disabled .page-link{color:#6c757d;pointer-events:none;cursor:auto;background-color:#fff;border-color:#dee2e6}.pagination-lg .page-link{padding:.75rem 1.5rem;font-size:1.25rem;line-height:1.5}.pagination-lg .page-item:first-child .page-link{border-top-left-radius:.3rem;border-bottom-left-radius:.3rem}.pagination-lg .page-item:last-child .page-link{border-top-right-radius:.3rem;border-bottom-right-radius:.3rem}.pagination-sm .page-link{padding:.25rem .5rem;font-size:.875rem;line-height:1.5}.pagination-sm .page-item:first-child .page-link{border-top-left-radius:.2rem;border-bottom-left-radius:.2rem}.pagination-sm .page-item:last-child .page-link{border-top-right-radius:.2rem;border-bottom-right-radius:.2rem}.badge{display:inline-block;padding:.25em .4em;font-size:75%;font-weight:700;line-height:1;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.badge{transition:none}}a.badge:focus,a.badge:hover{text-decoration:none}.badge:empty{display:none}.btn .badge{position:relative;top:-1px}.badge-pill{padding-right:.6em;padding-left:.6em;border-radius:10rem}.badge-primary{color:#fff;background-color:#007bff}a.badge-primary:focus,a.badge-primary:hover{color:#fff;background-color:#0062cc}a.badge-primary.focus,a.badge-primary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.badge-secondary{color:#fff;background-color:#6c757d}a.badge-secondary:focus,a.badge-secondary:hover{color:#fff;background-color:#545b62}a.badge-secondary.focus,a.badge-secondary:focus{outline:0;box-shadow:0 0 0 
.2rem rgba(108,117,125,.5)}.badge-success{color:#fff;background-color:#28a745}a.badge-success:focus,a.badge-success:hover{color:#fff;background-color:#1e7e34}a.badge-success.focus,a.badge-success:focus{outline:0;box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.badge-info{color:#fff;background-color:#17a2b8}a.badge-info:focus,a.badge-info:hover{color:#fff;background-color:#117a8b}a.badge-info.focus,a.badge-info:focus{outline:0;box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.badge-warning{color:#212529;background-color:#ffc107}a.badge-warning:focus,a.badge-warning:hover{color:#212529;background-color:#d39e00}a.badge-warning.focus,a.badge-warning:focus{outline:0;box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.badge-danger{color:#fff;background-color:#dc3545}a.badge-danger:focus,a.badge-danger:hover{color:#fff;background-color:#bd2130}a.badge-danger.focus,a.badge-danger:focus{outline:0;box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.badge-light{color:#212529;background-color:#f8f9fa}a.badge-light:focus,a.badge-light:hover{color:#212529;background-color:#dae0e5}a.badge-light.focus,a.badge-light:focus{outline:0;box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.badge-dark{color:#fff;background-color:#343a40}a.badge-dark:focus,a.badge-dark:hover{color:#fff;background-color:#1d2124}a.badge-dark.focus,a.badge-dark:focus{outline:0;box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.jumbotron{padding:2rem 1rem;margin-bottom:2rem;background-color:#e9ecef;border-radius:.3rem}@media (min-width:540px){.jumbotron{padding:4rem 2rem}}.jumbotron-fluid{padding-right:0;padding-left:0;border-radius:0}.alert{position:relative;padding:.75rem 1.25rem;margin-bottom:1rem;border:1px solid transparent;border-radius:.25rem}.alert-heading{color:inherit}.alert-link{font-weight:700}.alert-dismissible{padding-right:4rem}.alert-dismissible .close{position:absolute;top:0;right:0;padding:.75rem 1.25rem;color:inherit}.alert-primary{color:#004085;background-color:#cce5ff;border-color:#b8daff}.alert-primary hr{border-top-color:#9fcdff}.alert-primary .alert-link{color:#002752}.alert-secondary{color:#383d41;background-color:#e2e3e5;border-color:#d6d8db}.alert-secondary hr{border-top-color:#c8cbcf}.alert-secondary .alert-link{color:#202326}.alert-success{color:#155724;background-color:#d4edda;border-color:#c3e6cb}.alert-success hr{border-top-color:#b1dfbb}.alert-success .alert-link{color:#0b2e13}.alert-info{color:#0c5460;background-color:#d1ecf1;border-color:#bee5eb}.alert-info hr{border-top-color:#abdde5}.alert-info .alert-link{color:#062c33}.alert-warning{color:#856404;background-color:#fff3cd;border-color:#ffeeba}.alert-warning hr{border-top-color:#ffe8a1}.alert-warning .alert-link{color:#533f03}.alert-danger{color:#721c24;background-color:#f8d7da;border-color:#f5c6cb}.alert-danger hr{border-top-color:#f1b0b7}.alert-danger .alert-link{color:#491217}.alert-light{color:#818182;background-color:#fefefe;border-color:#fdfdfe}.alert-light hr{border-top-color:#ececf6}.alert-light .alert-link{color:#686868}.alert-dark{color:#1b1e21;background-color:#d6d8d9;border-color:#c6c8ca}.alert-dark hr{border-top-color:#b9bbbe}.alert-dark .alert-link{color:#040505}@keyframes progress-bar-stripes{0%{background-position:1rem 0}to{background-position:0 0}}.progress{height:1rem;line-height:0;font-size:.75rem;background-color:#e9ecef;border-radius:.25rem}.progress,.progress-bar{display:flex;overflow:hidden}.progress-bar{flex-direction:column;justify-content:center;color:#fff;text-align:center;white-space:nowrap;background-color:#007bff;transition:width .6s ease}@media 
(prefers-reduced-motion:reduce){.progress-bar{transition:none}}.progress-bar-striped{background-image:linear-gradient(45deg,hsla(0,0%,100%,.15) 25%,transparent 0,transparent 50%,hsla(0,0%,100%,.15) 0,hsla(0,0%,100%,.15) 75%,transparent 0,transparent);background-size:1rem 1rem}.progress-bar-animated{animation:progress-bar-stripes 1s linear infinite}@media (prefers-reduced-motion:reduce){.progress-bar-animated{animation:none}}.media{display:flex;align-items:flex-start}.media-body{flex:1}.list-group{display:flex;flex-direction:column;padding-left:0;margin-bottom:0;border-radius:.25rem}.list-group-item-action{width:100%;color:#495057;text-align:inherit}.list-group-item-action:focus,.list-group-item-action:hover{z-index:1;color:#495057;text-decoration:none;background-color:#f8f9fa}.list-group-item-action:active{color:#212529;background-color:#e9ecef}.list-group-item{position:relative;display:block;padding:.75rem 1.25rem;background-color:#fff;border:1px solid rgba(0,0,0,.125)}.list-group-item:first-child{border-top-left-radius:inherit;border-top-right-radius:inherit}.list-group-item:last-child{border-bottom-right-radius:inherit;border-bottom-left-radius:inherit}.list-group-item.disabled,.list-group-item:disabled{color:#6c757d;pointer-events:none;background-color:#fff}.list-group-item.active{z-index:2;color:#fff;background-color:#007bff;border-color:#007bff}.list-group-item+.list-group-item{border-top-width:0}.list-group-item+.list-group-item.active{margin-top:-1px;border-top-width:1px}.list-group-horizontal{flex-direction:row}.list-group-horizontal>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal>.list-group-item.active{margin-top:0}.list-group-horizontal>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}@media (min-width:540px){.list-group-horizontal-sm{flex-direction:row}.list-group-horizontal-sm>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-sm>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-sm>.list-group-item.active{margin-top:0}.list-group-horizontal-sm>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-sm>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:720px){.list-group-horizontal-md{flex-direction:row}.list-group-horizontal-md>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-md>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-md>.list-group-item.active{margin-top:0}.list-group-horizontal-md>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-md>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media 
(min-width:960px){.list-group-horizontal-lg{flex-direction:row}.list-group-horizontal-lg>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-lg>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-lg>.list-group-item.active{margin-top:0}.list-group-horizontal-lg>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-lg>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:1200px){.list-group-horizontal-xl{flex-direction:row}.list-group-horizontal-xl>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-xl>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-xl>.list-group-item.active{margin-top:0}.list-group-horizontal-xl>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-xl>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}.list-group-flush{border-radius:0}.list-group-flush>.list-group-item{border-width:0 0 1px}.list-group-flush>.list-group-item:last-child{border-bottom-width:0}.list-group-item-primary{color:#004085;background-color:#b8daff}.list-group-item-primary.list-group-item-action:focus,.list-group-item-primary.list-group-item-action:hover{color:#004085;background-color:#9fcdff}.list-group-item-primary.list-group-item-action.active{color:#fff;background-color:#004085;border-color:#004085}.list-group-item-secondary{color:#383d41;background-color:#d6d8db}.list-group-item-secondary.list-group-item-action:focus,.list-group-item-secondary.list-group-item-action:hover{color:#383d41;background-color:#c8cbcf}.list-group-item-secondary.list-group-item-action.active{color:#fff;background-color:#383d41;border-color:#383d41}.list-group-item-success{color:#155724;background-color:#c3e6cb}.list-group-item-success.list-group-item-action:focus,.list-group-item-success.list-group-item-action:hover{color:#155724;background-color:#b1dfbb}.list-group-item-success.list-group-item-action.active{color:#fff;background-color:#155724;border-color:#155724}.list-group-item-info{color:#0c5460;background-color:#bee5eb}.list-group-item-info.list-group-item-action:focus,.list-group-item-info.list-group-item-action:hover{color:#0c5460;background-color:#abdde5}.list-group-item-info.list-group-item-action.active{color:#fff;background-color:#0c5460;border-color:#0c5460}.list-group-item-warning{color:#856404;background-color:#ffeeba}.list-group-item-warning.list-group-item-action:focus,.list-group-item-warning.list-group-item-action:hover{color:#856404;background-color:#ffe8a1}.list-group-item-warning.list-group-item-action.active{color:#fff;background-color:#856404;border-color:#856404}.list-group-item-danger{color:#721c24;background-color:#f5c6cb}.list-group-item-danger.list-group-item-action:focus,.list-group-item-danger.list-group-item-action:hover{color:#721c24;background-color:#f1b0b7}.list-group-item-danger.list-group-item-action.active{color:#fff;background-color:#721c24;border-color:#721c24}.list-group-item-light{color:#818182;background-color:#fdfdfe}.list-group-item-light.list-group-item-action:focus,.list-group-item-light.list-group-item-action:hover{color:#818182;background-color:#ececf6}.list-group-item-light.list-group-item-action.active{color:#fff;background-color:#818182;border-color:#818182}.list-group-item-dark{
color:#1b1e21;background-color:#c6c8ca}.list-group-item-dark.list-group-item-action:focus,.list-group-item-dark.list-group-item-action:hover{color:#1b1e21;background-color:#b9bbbe}.list-group-item-dark.list-group-item-action.active{color:#fff;background-color:#1b1e21;border-color:#1b1e21}.close{float:right;font-size:1.5rem;font-weight:700;line-height:1;color:#000;text-shadow:0 1px 0 #fff;opacity:.5}.close:hover{color:#000;text-decoration:none}.close:not(:disabled):not(.disabled):focus,.close:not(:disabled):not(.disabled):hover{opacity:.75}button.close{padding:0;background-color:transparent;border:0}a.close.disabled{pointer-events:none}.toast{max-width:350px;overflow:hidden;font-size:.875rem;background-color:hsla(0,0%,100%,.85);background-clip:padding-box;border:1px solid rgba(0,0,0,.1);box-shadow:0 .25rem .75rem rgba(0,0,0,.1);backdrop-filter:blur(10px);opacity:0;border-radius:.25rem}.toast:not(:last-child){margin-bottom:.75rem}.toast.showing{opacity:1}.toast.show{display:block;opacity:1}.toast.hide{display:none}.toast-header{display:flex;align-items:center;padding:.25rem .75rem;color:#6c757d;background-color:hsla(0,0%,100%,.85);background-clip:padding-box;border-bottom:1px solid rgba(0,0,0,.05)}.toast-body{padding:.75rem}.modal-open{overflow:hidden}.modal-open .modal{overflow-x:hidden;overflow-y:auto}.modal{position:fixed;top:0;left:0;z-index:1050;display:none;width:100%;height:100%;overflow:hidden;outline:0}.modal-dialog{position:relative;width:auto;margin:.5rem;pointer-events:none}.modal.fade .modal-dialog{transition:transform .3s ease-out;transform:translateY(-50px)}@media (prefers-reduced-motion:reduce){.modal.fade .modal-dialog{transition:none}}.modal.show .modal-dialog{transform:none}.modal.modal-static .modal-dialog{transform:scale(1.02)}.modal-dialog-scrollable{display:flex;max-height:calc(100% - 1rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 1rem);overflow:hidden}.modal-dialog-scrollable .modal-footer,.modal-dialog-scrollable .modal-header{flex-shrink:0}.modal-dialog-scrollable .modal-body{overflow-y:auto}.modal-dialog-centered{display:flex;align-items:center;min-height:calc(100% - 1rem)}.modal-dialog-centered:before{display:block;height:calc(100vh - 1rem);height:min-content;content:""}.modal-dialog-centered.modal-dialog-scrollable{flex-direction:column;justify-content:center;height:100%}.modal-dialog-centered.modal-dialog-scrollable .modal-content{max-height:none}.modal-dialog-centered.modal-dialog-scrollable:before{content:none}.modal-content{position:relative;display:flex;flex-direction:column;width:100%;pointer-events:auto;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem;outline:0}.modal-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop.show{opacity:.5}.modal-header{display:flex;align-items:flex-start;justify-content:space-between;padding:1rem;border-bottom:1px solid #dee2e6;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.modal-header .close{padding:1rem;margin:-1rem -1rem -1rem auto}.modal-title{margin-bottom:0;line-height:1.5}.modal-body{position:relative;flex:1 1 auto;padding:1rem}.modal-footer{display:flex;flex-wrap:wrap;align-items:center;justify-content:flex-end;padding:.75rem;border-top:1px solid #dee2e6;border-bottom-right-radius:calc(.3rem - 1px);border-bottom-left-radius:calc(.3rem - 
1px)}.modal-footer>*{margin:.25rem}.modal-scrollbar-measure{position:absolute;top:-9999px;width:50px;height:50px;overflow:scroll}@media (min-width:540px){.modal-dialog{max-width:500px;margin:1.75rem auto}.modal-dialog-scrollable{max-height:calc(100% - 3.5rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 3.5rem)}.modal-dialog-centered{min-height:calc(100% - 3.5rem)}.modal-dialog-centered:before{height:calc(100vh - 3.5rem);height:min-content}.modal-sm{max-width:300px}}@media (min-width:960px){.modal-lg,.modal-xl{max-width:800px}}@media (min-width:1200px){.modal-xl{max-width:1140px}}.tooltip{position:absolute;z-index:1070;display:block;margin:0;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;opacity:0}.tooltip.show{opacity:.9}.tooltip .arrow{position:absolute;display:block;width:.8rem;height:.4rem}.tooltip .arrow:before{position:absolute;content:"";border-color:transparent;border-style:solid}.bs-tooltip-auto[x-placement^=top],.bs-tooltip-top{padding:.4rem 0}.bs-tooltip-auto[x-placement^=top] .arrow,.bs-tooltip-top .arrow{bottom:0}.bs-tooltip-auto[x-placement^=top] .arrow:before,.bs-tooltip-top .arrow:before{top:0;border-width:.4rem .4rem 0;border-top-color:#000}.bs-tooltip-auto[x-placement^=right],.bs-tooltip-right{padding:0 .4rem}.bs-tooltip-auto[x-placement^=right] .arrow,.bs-tooltip-right .arrow{left:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=right] .arrow:before,.bs-tooltip-right .arrow:before{right:0;border-width:.4rem .4rem .4rem 0;border-right-color:#000}.bs-tooltip-auto[x-placement^=bottom],.bs-tooltip-bottom{padding:.4rem 0}.bs-tooltip-auto[x-placement^=bottom] .arrow,.bs-tooltip-bottom .arrow{top:0}.bs-tooltip-auto[x-placement^=bottom] .arrow:before,.bs-tooltip-bottom .arrow:before{bottom:0;border-width:0 .4rem .4rem;border-bottom-color:#000}.bs-tooltip-auto[x-placement^=left],.bs-tooltip-left{padding:0 .4rem}.bs-tooltip-auto[x-placement^=left] .arrow,.bs-tooltip-left .arrow{right:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=left] .arrow:before,.bs-tooltip-left .arrow:before{left:0;border-width:.4rem 0 .4rem .4rem;border-left-color:#000}.tooltip-inner{max-width:200px;padding:.25rem .5rem;color:#fff;text-align:center;background-color:#000;border-radius:.25rem}.popover{top:0;left:0;z-index:1060;max-width:276px;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem}.popover,.popover .arrow{position:absolute;display:block}.popover .arrow{width:1rem;height:.5rem;margin:0 .3rem}.popover .arrow:after,.popover 
.arrow:before{position:absolute;display:block;content:"";border-color:transparent;border-style:solid}.bs-popover-auto[x-placement^=top],.bs-popover-top{margin-bottom:.5rem}.bs-popover-auto[x-placement^=top]>.arrow,.bs-popover-top>.arrow{bottom:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=top]>.arrow:before,.bs-popover-top>.arrow:before{bottom:0;border-width:.5rem .5rem 0;border-top-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=top]>.arrow:after,.bs-popover-top>.arrow:after{bottom:1px;border-width:.5rem .5rem 0;border-top-color:#fff}.bs-popover-auto[x-placement^=right],.bs-popover-right{margin-left:.5rem}.bs-popover-auto[x-placement^=right]>.arrow,.bs-popover-right>.arrow{left:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=right]>.arrow:before,.bs-popover-right>.arrow:before{left:0;border-width:.5rem .5rem .5rem 0;border-right-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=right]>.arrow:after,.bs-popover-right>.arrow:after{left:1px;border-width:.5rem .5rem .5rem 0;border-right-color:#fff}.bs-popover-auto[x-placement^=bottom],.bs-popover-bottom{margin-top:.5rem}.bs-popover-auto[x-placement^=bottom]>.arrow,.bs-popover-bottom>.arrow{top:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=bottom]>.arrow:before,.bs-popover-bottom>.arrow:before{top:0;border-width:0 .5rem .5rem;border-bottom-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=bottom]>.arrow:after,.bs-popover-bottom>.arrow:after{top:1px;border-width:0 .5rem .5rem;border-bottom-color:#fff}.bs-popover-auto[x-placement^=bottom] .popover-header:before,.bs-popover-bottom .popover-header:before{position:absolute;top:0;left:50%;display:block;width:1rem;margin-left:-.5rem;content:"";border-bottom:1px solid #f7f7f7}.bs-popover-auto[x-placement^=left],.bs-popover-left{margin-right:.5rem}.bs-popover-auto[x-placement^=left]>.arrow,.bs-popover-left>.arrow{right:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=left]>.arrow:before,.bs-popover-left>.arrow:before{right:0;border-width:.5rem 0 .5rem .5rem;border-left-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=left]>.arrow:after,.bs-popover-left>.arrow:after{right:1px;border-width:.5rem 0 .5rem .5rem;border-left-color:#fff}.popover-header{padding:.5rem .75rem;margin-bottom:0;font-size:1rem;background-color:#f7f7f7;border-bottom:1px solid #ebebeb;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.popover-header:empty{display:none}.popover-body{padding:.5rem .75rem;color:#212529}.carousel{position:relative}.carousel.pointer-event{touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner:after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:left;width:100%;margin-right:-100%;backface-visibility:hidden;transition:transform .6s ease-in-out}@media (prefers-reduced-motion:reduce){.carousel-item{transition:none}}.carousel-item-next,.carousel-item-prev,.carousel-item.active{display:block}.active.carousel-item-right,.carousel-item-next:not(.carousel-item-left){transform:translateX(100%)}.active.carousel-item-left,.carousel-item-prev:not(.carousel-item-right){transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;transform:none}.carousel-fade .carousel-item-next.carousel-item-left,.carousel-fade .carousel-item-prev.carousel-item-right,.carousel-fade .carousel-item.active{z-index:1;opacity:1}.carousel-fade .active.carousel-item-left,.carousel-fade 
.active.carousel-item-right{z-index:0;opacity:0;transition:opacity 0s .6s}@media (prefers-reduced-motion:reduce){.carousel-fade .active.carousel-item-left,.carousel-fade .active.carousel-item-right{transition:none}}.carousel-control-next,.carousel-control-prev{position:absolute;top:0;bottom:0;z-index:1;display:flex;align-items:center;justify-content:center;width:15%;color:#fff;text-align:center;opacity:.5;transition:opacity .15s ease}@media (prefers-reduced-motion:reduce){.carousel-control-next,.carousel-control-prev{transition:none}}.carousel-control-next:focus,.carousel-control-next:hover,.carousel-control-prev:focus,.carousel-control-prev:hover{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{left:0}.carousel-control-next{right:0}.carousel-control-next-icon,.carousel-control-prev-icon{display:inline-block;width:20px;height:20px;background:no-repeat 50%/100% 100%}.carousel-control-prev-icon{background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8'%3E%3Cpath d='M5.25 0l-4 4 4 4 1.5-1.5L4.25 4l2.5-2.5L5.25 0z'/%3E%3C/svg%3E")}.carousel-control-next-icon{background-image:url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8'%3E%3Cpath d='M2.75 0l-1.5 1.5L3.75 4l-2.5 2.5L2.75 8l4-4-4-4z'/%3E%3C/svg%3E")}.carousel-indicators{position:absolute;right:0;bottom:0;left:0;z-index:15;display:flex;justify-content:center;padding-left:0;margin-right:15%;margin-left:15%;list-style:none}.carousel-indicators li{box-sizing:content-box;flex:0 1 auto;width:30px;height:3px;margin-right:3px;margin-left:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border-top:10px solid transparent;border-bottom:10px solid transparent;opacity:.5;transition:opacity .6s ease}@media (prefers-reduced-motion:reduce){.carousel-indicators li{transition:none}}.carousel-indicators .active{opacity:1}.carousel-caption{position:absolute;right:15%;bottom:20px;left:15%;z-index:10;padding-top:20px;padding-bottom:20px;color:#fff;text-align:center}@keyframes spinner-border{to{transform:rotate(1turn)}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;border:.25em solid;border-right:.25em solid transparent;border-radius:50%;animation:spinner-border .75s linear infinite}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@keyframes spinner-grow{0%{transform:scale(0)}50%{opacity:1;transform:none}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;background-color:currentColor;border-radius:50%;opacity:0;animation:spinner-grow .75s linear 
infinite}.spinner-grow-sm{width:1rem;height:1rem}.align-baseline{vertical-align:baseline!important}.align-top{vertical-align:top!important}.align-middle{vertical-align:middle!important}.align-bottom{vertical-align:bottom!important}.align-text-bottom{vertical-align:text-bottom!important}.align-text-top{vertical-align:text-top!important}.bg-primary{background-color:#007bff!important}a.bg-primary:focus,a.bg-primary:hover,button.bg-primary:focus,button.bg-primary:hover{background-color:#0062cc!important}.bg-secondary{background-color:#6c757d!important}a.bg-secondary:focus,a.bg-secondary:hover,button.bg-secondary:focus,button.bg-secondary:hover{background-color:#545b62!important}.bg-success{background-color:#28a745!important}a.bg-success:focus,a.bg-success:hover,button.bg-success:focus,button.bg-success:hover{background-color:#1e7e34!important}.bg-info{background-color:#17a2b8!important}a.bg-info:focus,a.bg-info:hover,button.bg-info:focus,button.bg-info:hover{background-color:#117a8b!important}.bg-warning{background-color:#ffc107!important}a.bg-warning:focus,a.bg-warning:hover,button.bg-warning:focus,button.bg-warning:hover{background-color:#d39e00!important}.bg-danger{background-color:#dc3545!important}a.bg-danger:focus,a.bg-danger:hover,button.bg-danger:focus,button.bg-danger:hover{background-color:#bd2130!important}.bg-light{background-color:#f8f9fa!important}a.bg-light:focus,a.bg-light:hover,button.bg-light:focus,button.bg-light:hover{background-color:#dae0e5!important}.bg-dark{background-color:#343a40!important}a.bg-dark:focus,a.bg-dark:hover,button.bg-dark:focus,button.bg-dark:hover{background-color:#1d2124!important}.bg-white{background-color:#fff!important}.bg-transparent{background-color:transparent!important}.border{border:1px solid #dee2e6!important}.border-top{border-top:1px solid #dee2e6!important}.border-right{border-right:1px solid #dee2e6!important}.border-bottom{border-bottom:1px solid #dee2e6!important}.border-left{border-left:1px solid 
#dee2e6!important}.border-0{border:0!important}.border-top-0{border-top:0!important}.border-right-0{border-right:0!important}.border-bottom-0{border-bottom:0!important}.border-left-0{border-left:0!important}.border-primary{border-color:#007bff!important}.border-secondary{border-color:#6c757d!important}.border-success{border-color:#28a745!important}.border-info{border-color:#17a2b8!important}.border-warning{border-color:#ffc107!important}.border-danger{border-color:#dc3545!important}.border-light{border-color:#f8f9fa!important}.border-dark{border-color:#343a40!important}.border-white{border-color:#fff!important}.rounded-sm{border-radius:.2rem!important}.rounded{border-radius:.25rem!important}.rounded-top{border-top-left-radius:.25rem!important}.rounded-right,.rounded-top{border-top-right-radius:.25rem!important}.rounded-bottom,.rounded-right{border-bottom-right-radius:.25rem!important}.rounded-bottom,.rounded-left{border-bottom-left-radius:.25rem!important}.rounded-left{border-top-left-radius:.25rem!important}.rounded-lg{border-radius:.3rem!important}.rounded-circle{border-radius:50%!important}.rounded-pill{border-radius:50rem!important}.rounded-0{border-radius:0!important}.clearfix:after{display:block;clear:both;content:""}.d-none{display:none!important}.d-inline{display:inline!important}.d-inline-block{display:inline-block!important}.d-block{display:block!important}.d-table{display:table!important}.d-table-row{display:table-row!important}.d-table-cell{display:table-cell!important}.d-flex{display:flex!important}.d-inline-flex{display:inline-flex!important}@media (min-width:540px){.d-sm-none{display:none!important}.d-sm-inline{display:inline!important}.d-sm-inline-block{display:inline-block!important}.d-sm-block{display:block!important}.d-sm-table{display:table!important}.d-sm-table-row{display:table-row!important}.d-sm-table-cell{display:table-cell!important}.d-sm-flex{display:flex!important}.d-sm-inline-flex{display:inline-flex!important}}@media (min-width:720px){.d-md-none{display:none!important}.d-md-inline{display:inline!important}.d-md-inline-block{display:inline-block!important}.d-md-block{display:block!important}.d-md-table{display:table!important}.d-md-table-row{display:table-row!important}.d-md-table-cell{display:table-cell!important}.d-md-flex{display:flex!important}.d-md-inline-flex{display:inline-flex!important}}@media (min-width:960px){.d-lg-none{display:none!important}.d-lg-inline{display:inline!important}.d-lg-inline-block{display:inline-block!important}.d-lg-block{display:block!important}.d-lg-table{display:table!important}.d-lg-table-row{display:table-row!important}.d-lg-table-cell{display:table-cell!important}.d-lg-flex{display:flex!important}.d-lg-inline-flex{display:inline-flex!important}}@media (min-width:1200px){.d-xl-none{display:none!important}.d-xl-inline{display:inline!important}.d-xl-inline-block{display:inline-block!important}.d-xl-block{display:block!important}.d-xl-table{display:table!important}.d-xl-table-row{display:table-row!important}.d-xl-table-cell{display:table-cell!important}.d-xl-flex{display:flex!important}.d-xl-inline-flex{display:inline-flex!important}}@media 
print{.d-print-none{display:none!important}.d-print-inline{display:inline!important}.d-print-inline-block{display:inline-block!important}.d-print-block{display:block!important}.d-print-table{display:table!important}.d-print-table-row{display:table-row!important}.d-print-table-cell{display:table-cell!important}.d-print-flex{display:flex!important}.d-print-inline-flex{display:inline-flex!important}}.embed-responsive{position:relative;display:block;width:100%;padding:0;overflow:hidden}.embed-responsive:before{display:block;content:""}.embed-responsive .embed-responsive-item,.embed-responsive embed,.embed-responsive iframe,.embed-responsive object,.embed-responsive video{position:absolute;top:0;bottom:0;left:0;width:100%;height:100%;border:0}.embed-responsive-21by9:before{padding-top:42.85714%}.embed-responsive-16by9:before{padding-top:56.25%}.embed-responsive-4by3:before{padding-top:75%}.embed-responsive-1by1:before{padding-top:100%}.flex-row{flex-direction:row!important}.flex-column{flex-direction:column!important}.flex-row-reverse{flex-direction:row-reverse!important}.flex-column-reverse{flex-direction:column-reverse!important}.flex-wrap{flex-wrap:wrap!important}.flex-nowrap{flex-wrap:nowrap!important}.flex-wrap-reverse{flex-wrap:wrap-reverse!important}.flex-fill{flex:1 1 auto!important}.flex-grow-0{flex-grow:0!important}.flex-grow-1{flex-grow:1!important}.flex-shrink-0{flex-shrink:0!important}.flex-shrink-1{flex-shrink:1!important}.justify-content-start{justify-content:flex-start!important}.justify-content-end{justify-content:flex-end!important}.justify-content-center{justify-content:center!important}.justify-content-between{justify-content:space-between!important}.justify-content-around{justify-content:space-around!important}.align-items-start{align-items:flex-start!important}.align-items-end{align-items:flex-end!important}.align-items-center{align-items:center!important}.align-items-baseline{align-items:baseline!important}.align-items-stretch{align-items:stretch!important}.align-content-start{align-content:flex-start!important}.align-content-end{align-content:flex-end!important}.align-content-center{align-content:center!important}.align-content-between{align-content:space-between!important}.align-content-around{align-content:space-around!important}.align-content-stretch{align-content:stretch!important}.align-self-auto{align-self:auto!important}.align-self-start{align-self:flex-start!important}.align-self-end{align-self:flex-end!important}.align-self-center{align-self:center!important}.align-self-baseline{align-self:baseline!important}.align-self-stretch{align-self:stretch!important}@media (min-width:540px){.flex-sm-row{flex-direction:row!important}.flex-sm-column{flex-direction:column!important}.flex-sm-row-reverse{flex-direction:row-reverse!important}.flex-sm-column-reverse{flex-direction:column-reverse!important}.flex-sm-wrap{flex-wrap:wrap!important}.flex-sm-nowrap{flex-wrap:nowrap!important}.flex-sm-wrap-reverse{flex-wrap:wrap-reverse!important}.flex-sm-fill{flex:1 1 
auto!important}.flex-sm-grow-0{flex-grow:0!important}.flex-sm-grow-1{flex-grow:1!important}.flex-sm-shrink-0{flex-shrink:0!important}.flex-sm-shrink-1{flex-shrink:1!important}.justify-content-sm-start{justify-content:flex-start!important}.justify-content-sm-end{justify-content:flex-end!important}.justify-content-sm-center{justify-content:center!important}.justify-content-sm-between{justify-content:space-between!important}.justify-content-sm-around{justify-content:space-around!important}.align-items-sm-start{align-items:flex-start!important}.align-items-sm-end{align-items:flex-end!important}.align-items-sm-center{align-items:center!important}.align-items-sm-baseline{align-items:baseline!important}.align-items-sm-stretch{align-items:stretch!important}.align-content-sm-start{align-content:flex-start!important}.align-content-sm-end{align-content:flex-end!important}.align-content-sm-center{align-content:center!important}.align-content-sm-between{align-content:space-between!important}.align-content-sm-around{align-content:space-around!important}.align-content-sm-stretch{align-content:stretch!important}.align-self-sm-auto{align-self:auto!important}.align-self-sm-start{align-self:flex-start!important}.align-self-sm-end{align-self:flex-end!important}.align-self-sm-center{align-self:center!important}.align-self-sm-baseline{align-self:baseline!important}.align-self-sm-stretch{align-self:stretch!important}}@media (min-width:720px){.flex-md-row{flex-direction:row!important}.flex-md-column{flex-direction:column!important}.flex-md-row-reverse{flex-direction:row-reverse!important}.flex-md-column-reverse{flex-direction:column-reverse!important}.flex-md-wrap{flex-wrap:wrap!important}.flex-md-nowrap{flex-wrap:nowrap!important}.flex-md-wrap-reverse{flex-wrap:wrap-reverse!important}.flex-md-fill{flex:1 1 auto!important}.flex-md-grow-0{flex-grow:0!important}.flex-md-grow-1{flex-grow:1!important}.flex-md-shrink-0{flex-shrink:0!important}.flex-md-shrink-1{flex-shrink:1!important}.justify-content-md-start{justify-content:flex-start!important}.justify-content-md-end{justify-content:flex-end!important}.justify-content-md-center{justify-content:center!important}.justify-content-md-between{justify-content:space-between!important}.justify-content-md-around{justify-content:space-around!important}.align-items-md-start{align-items:flex-start!important}.align-items-md-end{align-items:flex-end!important}.align-items-md-center{align-items:center!important}.align-items-md-baseline{align-items:baseline!important}.align-items-md-stretch{align-items:stretch!important}.align-content-md-start{align-content:flex-start!important}.align-content-md-end{align-content:flex-end!important}.align-content-md-center{align-content:center!important}.align-content-md-between{align-content:space-between!important}.align-content-md-around{align-content:space-around!important}.align-content-md-stretch{align-content:stretch!important}.align-self-md-auto{align-self:auto!important}.align-self-md-start{align-self:flex-start!important}.align-self-md-end{align-self:flex-end!important}.align-self-md-center{align-self:center!important}.align-self-md-baseline{align-self:baseline!important}.align-self-md-stretch{align-self:stretch!important}}@media 
(min-width:960px){.flex-lg-row{flex-direction:row!important}.flex-lg-column{flex-direction:column!important}.flex-lg-row-reverse{flex-direction:row-reverse!important}.flex-lg-column-reverse{flex-direction:column-reverse!important}.flex-lg-wrap{flex-wrap:wrap!important}.flex-lg-nowrap{flex-wrap:nowrap!important}.flex-lg-wrap-reverse{flex-wrap:wrap-reverse!important}.flex-lg-fill{flex:1 1 auto!important}.flex-lg-grow-0{flex-grow:0!important}.flex-lg-grow-1{flex-grow:1!important}.flex-lg-shrink-0{flex-shrink:0!important}.flex-lg-shrink-1{flex-shrink:1!important}.justify-content-lg-start{justify-content:flex-start!important}.justify-content-lg-end{justify-content:flex-end!important}.justify-content-lg-center{justify-content:center!important}.justify-content-lg-between{justify-content:space-between!important}.justify-content-lg-around{justify-content:space-around!important}.align-items-lg-start{align-items:flex-start!important}.align-items-lg-end{align-items:flex-end!important}.align-items-lg-center{align-items:center!important}.align-items-lg-baseline{align-items:baseline!important}.align-items-lg-stretch{align-items:stretch!important}.align-content-lg-start{align-content:flex-start!important}.align-content-lg-end{align-content:flex-end!important}.align-content-lg-center{align-content:center!important}.align-content-lg-between{align-content:space-between!important}.align-content-lg-around{align-content:space-around!important}.align-content-lg-stretch{align-content:stretch!important}.align-self-lg-auto{align-self:auto!important}.align-self-lg-start{align-self:flex-start!important}.align-self-lg-end{align-self:flex-end!important}.align-self-lg-center{align-self:center!important}.align-self-lg-baseline{align-self:baseline!important}.align-self-lg-stretch{align-self:stretch!important}}@media (min-width:1200px){.flex-xl-row{flex-direction:row!important}.flex-xl-column{flex-direction:column!important}.flex-xl-row-reverse{flex-direction:row-reverse!important}.flex-xl-column-reverse{flex-direction:column-reverse!important}.flex-xl-wrap{flex-wrap:wrap!important}.flex-xl-nowrap{flex-wrap:nowrap!important}.flex-xl-wrap-reverse{flex-wrap:wrap-reverse!important}.flex-xl-fill{flex:1 1 
auto!important}.flex-xl-grow-0{flex-grow:0!important}.flex-xl-grow-1{flex-grow:1!important}.flex-xl-shrink-0{flex-shrink:0!important}.flex-xl-shrink-1{flex-shrink:1!important}.justify-content-xl-start{justify-content:flex-start!important}.justify-content-xl-end{justify-content:flex-end!important}.justify-content-xl-center{justify-content:center!important}.justify-content-xl-between{justify-content:space-between!important}.justify-content-xl-around{justify-content:space-around!important}.align-items-xl-start{align-items:flex-start!important}.align-items-xl-end{align-items:flex-end!important}.align-items-xl-center{align-items:center!important}.align-items-xl-baseline{align-items:baseline!important}.align-items-xl-stretch{align-items:stretch!important}.align-content-xl-start{align-content:flex-start!important}.align-content-xl-end{align-content:flex-end!important}.align-content-xl-center{align-content:center!important}.align-content-xl-between{align-content:space-between!important}.align-content-xl-around{align-content:space-around!important}.align-content-xl-stretch{align-content:stretch!important}.align-self-xl-auto{align-self:auto!important}.align-self-xl-start{align-self:flex-start!important}.align-self-xl-end{align-self:flex-end!important}.align-self-xl-center{align-self:center!important}.align-self-xl-baseline{align-self:baseline!important}.align-self-xl-stretch{align-self:stretch!important}}.float-left{float:left!important}.float-right{float:right!important}.float-none{float:none!important}@media (min-width:540px){.float-sm-left{float:left!important}.float-sm-right{float:right!important}.float-sm-none{float:none!important}}@media (min-width:720px){.float-md-left{float:left!important}.float-md-right{float:right!important}.float-md-none{float:none!important}}@media (min-width:960px){.float-lg-left{float:left!important}.float-lg-right{float:right!important}.float-lg-none{float:none!important}}@media (min-width:1200px){.float-xl-left{float:left!important}.float-xl-right{float:right!important}.float-xl-none{float:none!important}}.user-select-all{user-select:all!important}.user-select-auto{user-select:auto!important}.user-select-none{user-select:none!important}.overflow-auto{overflow:auto!important}.overflow-hidden{overflow:hidden!important}.position-static{position:static!important}.position-relative{position:relative!important}.position-absolute{position:absolute!important}.position-fixed{position:fixed!important}.position-sticky{position:sticky!important}.fixed-top{top:0}.fixed-bottom,.fixed-top{position:fixed;right:0;left:0;z-index:1030}.fixed-bottom{bottom:0}@supports (position:sticky){.sticky-top{position:sticky;top:0;z-index:1020}}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);white-space:nowrap;border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;overflow:visible;clip:auto;white-space:normal}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075)!important}.shadow{box-shadow:0 .5rem 1rem rgba(0,0,0,.15)!important}.shadow-lg{box-shadow:0 1rem 3rem 
rgba(0,0,0,.175)!important}.shadow-none{box-shadow:none!important}.w-25{width:25%!important}.w-50{width:50%!important}.w-75{width:75%!important}.w-100{width:100%!important}.w-auto{width:auto!important}.h-25{height:25%!important}.h-50{height:50%!important}.h-75{height:75%!important}.h-100{height:100%!important}.h-auto{height:auto!important}.mw-100{max-width:100%!important}.mh-100{max-height:100%!important}.min-vw-100{min-width:100vw!important}.min-vh-100{min-height:100vh!important}.vw-100{width:100vw!important}.vh-100{height:100vh!important}.m-0{margin:0!important}.mt-0,.my-0{margin-top:0!important}.mr-0,.mx-0{margin-right:0!important}.mb-0,.my-0{margin-bottom:0!important}.ml-0,.mx-0{margin-left:0!important}.m-1{margin:.25rem!important}.mt-1,.my-1{margin-top:.25rem!important}.mr-1,.mx-1{margin-right:.25rem!important}.mb-1,.my-1{margin-bottom:.25rem!important}.ml-1,.mx-1{margin-left:.25rem!important}.m-2{margin:.5rem!important}.mt-2,.my-2{margin-top:.5rem!important}.mr-2,.mx-2{margin-right:.5rem!important}.mb-2,.my-2{margin-bottom:.5rem!important}.ml-2,.mx-2{margin-left:.5rem!important}.m-3{margin:1rem!important}.mt-3,.my-3{margin-top:1rem!important}.mr-3,.mx-3{margin-right:1rem!important}.mb-3,.my-3{margin-bottom:1rem!important}.ml-3,.mx-3{margin-left:1rem!important}.m-4{margin:1.5rem!important}.mt-4,.my-4{margin-top:1.5rem!important}.mr-4,.mx-4{margin-right:1.5rem!important}.mb-4,.my-4{margin-bottom:1.5rem!important}.ml-4,.mx-4{margin-left:1.5rem!important}.m-5{margin:3rem!important}.mt-5,.my-5{margin-top:3rem!important}.mr-5,.mx-5{margin-right:3rem!important}.mb-5,.my-5{margin-bottom:3rem!important}.ml-5,.mx-5{margin-left:3rem!important}.p-0{padding:0!important}.pt-0,.py-0{padding-top:0!important}.pr-0,.px-0{padding-right:0!important}.pb-0,.py-0{padding-bottom:0!important}.pl-0,.px-0{padding-left:0!important}.p-1{padding:.25rem!important}.pt-1,.py-1{padding-top:.25rem!important}.pr-1,.px-1{padding-right:.25rem!important}.pb-1,.py-1{padding-bottom:.25rem!important}.pl-1,.px-1{padding-left:.25rem!important}.p-2{padding:.5rem!important}.pt-2,.py-2{padding-top:.5rem!important}.pr-2,.px-2{padding-right:.5rem!important}.pb-2,.py-2{padding-bottom:.5rem!important}.pl-2,.px-2{padding-left:.5rem!important}.p-3{padding:1rem!important}.pt-3,.py-3{padding-top:1rem!important}.pr-3,.px-3{padding-right:1rem!important}.pb-3,.py-3{padding-bottom:1rem!important}.pl-3,.px-3{padding-left:1rem!important}.p-4{padding:1.5rem!important}.pt-4,.py-4{padding-top:1.5rem!important}.pr-4,.px-4{padding-right:1.5rem!important}.pb-4,.py-4{padding-bottom:1.5rem!important}.pl-4,.px-4{padding-left:1.5rem!important}.p-5{padding:3rem!important}.pt-5,.py-5{padding-top:3rem!important}.pr-5,.px-5{padding-right:3rem!important}.pb-5,.py-5{padding-bottom:3rem!important}.pl-5,.px-5{padding-left:3rem!important}.m-n1{margin:-.25rem!important}.mt-n1,.my-n1{margin-top:-.25rem!important}.mr-n1,.mx-n1{margin-right:-.25rem!important}.mb-n1,.my-n1{margin-bottom:-.25rem!important}.ml-n1,.mx-n1{margin-left:-.25rem!important}.m-n2{margin:-.5rem!important}.mt-n2,.my-n2{margin-top:-.5rem!important}.mr-n2,.mx-n2{margin-right:-.5rem!important}.mb-n2,.my-n2{margin-bottom:-.5rem!important}.ml-n2,.mx-n2{margin-left:-.5rem!important}.m-n3{margin:-1rem!important}.mt-n3,.my-n3{margin-top:-1rem!important}.mr-n3,.mx-n3{margin-right:-1rem!important}.mb-n3,.my-n3{margin-bottom:-1rem!important}.ml-n3,.mx-n3{margin-left:-1rem!important}.m-n4{margin:-1.5rem!important}.mt-n4,.my-n4{margin-top:-1.5rem!important}.mr-n4,.mx-n4{margin-right:-1.5rem!important}.mb-n4,.
my-n4{margin-bottom:-1.5rem!important}.ml-n4,.mx-n4{margin-left:-1.5rem!important}.m-n5{margin:-3rem!important}.mt-n5,.my-n5{margin-top:-3rem!important}.mr-n5,.mx-n5{margin-right:-3rem!important}.mb-n5,.my-n5{margin-bottom:-3rem!important}.ml-n5,.mx-n5{margin-left:-3rem!important}.m-auto{margin:auto!important}.mt-auto,.my-auto{margin-top:auto!important}.mr-auto,.mx-auto{margin-right:auto!important}.mb-auto,.my-auto{margin-bottom:auto!important}.ml-auto,.mx-auto{margin-left:auto!important}@media (min-width:540px){.m-sm-0{margin:0!important}.mt-sm-0,.my-sm-0{margin-top:0!important}.mr-sm-0,.mx-sm-0{margin-right:0!important}.mb-sm-0,.my-sm-0{margin-bottom:0!important}.ml-sm-0,.mx-sm-0{margin-left:0!important}.m-sm-1{margin:.25rem!important}.mt-sm-1,.my-sm-1{margin-top:.25rem!important}.mr-sm-1,.mx-sm-1{margin-right:.25rem!important}.mb-sm-1,.my-sm-1{margin-bottom:.25rem!important}.ml-sm-1,.mx-sm-1{margin-left:.25rem!important}.m-sm-2{margin:.5rem!important}.mt-sm-2,.my-sm-2{margin-top:.5rem!important}.mr-sm-2,.mx-sm-2{margin-right:.5rem!important}.mb-sm-2,.my-sm-2{margin-bottom:.5rem!important}.ml-sm-2,.mx-sm-2{margin-left:.5rem!important}.m-sm-3{margin:1rem!important}.mt-sm-3,.my-sm-3{margin-top:1rem!important}.mr-sm-3,.mx-sm-3{margin-right:1rem!important}.mb-sm-3,.my-sm-3{margin-bottom:1rem!important}.ml-sm-3,.mx-sm-3{margin-left:1rem!important}.m-sm-4{margin:1.5rem!important}.mt-sm-4,.my-sm-4{margin-top:1.5rem!important}.mr-sm-4,.mx-sm-4{margin-right:1.5rem!important}.mb-sm-4,.my-sm-4{margin-bottom:1.5rem!important}.ml-sm-4,.mx-sm-4{margin-left:1.5rem!important}.m-sm-5{margin:3rem!important}.mt-sm-5,.my-sm-5{margin-top:3rem!important}.mr-sm-5,.mx-sm-5{margin-right:3rem!important}.mb-sm-5,.my-sm-5{margin-bottom:3rem!important}.ml-sm-5,.mx-sm-5{margin-left:3rem!important}.p-sm-0{padding:0!important}.pt-sm-0,.py-sm-0{padding-top:0!important}.pr-sm-0,.px-sm-0{padding-right:0!important}.pb-sm-0,.py-sm-0{padding-bottom:0!important}.pl-sm-0,.px-sm-0{padding-left:0!important}.p-sm-1{padding:.25rem!important}.pt-sm-1,.py-sm-1{padding-top:.25rem!important}.pr-sm-1,.px-sm-1{padding-right:.25rem!important}.pb-sm-1,.py-sm-1{padding-bottom:.25rem!important}.pl-sm-1,.px-sm-1{padding-left:.25rem!important}.p-sm-2{padding:.5rem!important}.pt-sm-2,.py-sm-2{padding-top:.5rem!important}.pr-sm-2,.px-sm-2{padding-right:.5rem!important}.pb-sm-2,.py-sm-2{padding-bottom:.5rem!important}.pl-sm-2,.px-sm-2{padding-left:.5rem!important}.p-sm-3{padding:1rem!important}.pt-sm-3,.py-sm-3{padding-top:1rem!important}.pr-sm-3,.px-sm-3{padding-right:1rem!important}.pb-sm-3,.py-sm-3{padding-bottom:1rem!important}.pl-sm-3,.px-sm-3{padding-left:1rem!important}.p-sm-4{padding:1.5rem!important}.pt-sm-4,.py-sm-4{padding-top:1.5rem!important}.pr-sm-4,.px-sm-4{padding-right:1.5rem!important}.pb-sm-4,.py-sm-4{padding-bottom:1.5rem!important}.pl-sm-4,.px-sm-4{padding-left:1.5rem!important}.p-sm-5{padding:3rem!important}.pt-sm-5,.py-sm-5{padding-top:3rem!important}.pr-sm-5,.px-sm-5{padding-right:3rem!important}.pb-sm-5,.py-sm-5{padding-bottom:3rem!important}.pl-sm-5,.px-sm-5{padding-left:3rem!important}.m-sm-n1{margin:-.25rem!important}.mt-sm-n1,.my-sm-n1{margin-top:-.25rem!important}.mr-sm-n1,.mx-sm-n1{margin-right:-.25rem!important}.mb-sm-n1,.my-sm-n1{margin-bottom:-.25rem!important}.ml-sm-n1,.mx-sm-n1{margin-left:-.25rem!important}.m-sm-n2{margin:-.5rem!important}.mt-sm-n2,.my-sm-n2{margin-top:-.5rem!important}.mr-sm-n2,.mx-sm-n2{margin-right:-.5rem!important}.mb-sm-n2,.my-sm-n2{margin-bottom:-.5rem!important}.ml-sm-n2,.mx-sm-n2{margi
n-left:-.5rem!important}.m-sm-n3{margin:-1rem!important}.mt-sm-n3,.my-sm-n3{margin-top:-1rem!important}.mr-sm-n3,.mx-sm-n3{margin-right:-1rem!important}.mb-sm-n3,.my-sm-n3{margin-bottom:-1rem!important}.ml-sm-n3,.mx-sm-n3{margin-left:-1rem!important}.m-sm-n4{margin:-1.5rem!important}.mt-sm-n4,.my-sm-n4{margin-top:-1.5rem!important}.mr-sm-n4,.mx-sm-n4{margin-right:-1.5rem!important}.mb-sm-n4,.my-sm-n4{margin-bottom:-1.5rem!important}.ml-sm-n4,.mx-sm-n4{margin-left:-1.5rem!important}.m-sm-n5{margin:-3rem!important}.mt-sm-n5,.my-sm-n5{margin-top:-3rem!important}.mr-sm-n5,.mx-sm-n5{margin-right:-3rem!important}.mb-sm-n5,.my-sm-n5{margin-bottom:-3rem!important}.ml-sm-n5,.mx-sm-n5{margin-left:-3rem!important}.m-sm-auto{margin:auto!important}.mt-sm-auto,.my-sm-auto{margin-top:auto!important}.mr-sm-auto,.mx-sm-auto{margin-right:auto!important}.mb-sm-auto,.my-sm-auto{margin-bottom:auto!important}.ml-sm-auto,.mx-sm-auto{margin-left:auto!important}}@media (min-width:720px){.m-md-0{margin:0!important}.mt-md-0,.my-md-0{margin-top:0!important}.mr-md-0,.mx-md-0{margin-right:0!important}.mb-md-0,.my-md-0{margin-bottom:0!important}.ml-md-0,.mx-md-0{margin-left:0!important}.m-md-1{margin:.25rem!important}.mt-md-1,.my-md-1{margin-top:.25rem!important}.mr-md-1,.mx-md-1{margin-right:.25rem!important}.mb-md-1,.my-md-1{margin-bottom:.25rem!important}.ml-md-1,.mx-md-1{margin-left:.25rem!important}.m-md-2{margin:.5rem!important}.mt-md-2,.my-md-2{margin-top:.5rem!important}.mr-md-2,.mx-md-2{margin-right:.5rem!important}.mb-md-2,.my-md-2{margin-bottom:.5rem!important}.ml-md-2,.mx-md-2{margin-left:.5rem!important}.m-md-3{margin:1rem!important}.mt-md-3,.my-md-3{margin-top:1rem!important}.mr-md-3,.mx-md-3{margin-right:1rem!important}.mb-md-3,.my-md-3{margin-bottom:1rem!important}.ml-md-3,.mx-md-3{margin-left:1rem!important}.m-md-4{margin:1.5rem!important}.mt-md-4,.my-md-4{margin-top:1.5rem!important}.mr-md-4,.mx-md-4{margin-right:1.5rem!important}.mb-md-4,.my-md-4{margin-bottom:1.5rem!important}.ml-md-4,.mx-md-4{margin-left:1.5rem!important}.m-md-5{margin:3rem!important}.mt-md-5,.my-md-5{margin-top:3rem!important}.mr-md-5,.mx-md-5{margin-right:3rem!important}.mb-md-5,.my-md-5{margin-bottom:3rem!important}.ml-md-5,.mx-md-5{margin-left:3rem!important}.p-md-0{padding:0!important}.pt-md-0,.py-md-0{padding-top:0!important}.pr-md-0,.px-md-0{padding-right:0!important}.pb-md-0,.py-md-0{padding-bottom:0!important}.pl-md-0,.px-md-0{padding-left:0!important}.p-md-1{padding:.25rem!important}.pt-md-1,.py-md-1{padding-top:.25rem!important}.pr-md-1,.px-md-1{padding-right:.25rem!important}.pb-md-1,.py-md-1{padding-bottom:.25rem!important}.pl-md-1,.px-md-1{padding-left:.25rem!important}.p-md-2{padding:.5rem!important}.pt-md-2,.py-md-2{padding-top:.5rem!important}.pr-md-2,.px-md-2{padding-right:.5rem!important}.pb-md-2,.py-md-2{padding-bottom:.5rem!important}.pl-md-2,.px-md-2{padding-left:.5rem!important}.p-md-3{padding:1rem!important}.pt-md-3,.py-md-3{padding-top:1rem!important}.pr-md-3,.px-md-3{padding-right:1rem!important}.pb-md-3,.py-md-3{padding-bottom:1rem!important}.pl-md-3,.px-md-3{padding-left:1rem!important}.p-md-4{padding:1.5rem!important}.pt-md-4,.py-md-4{padding-top:1.5rem!important}.pr-md-4,.px-md-4{padding-right:1.5rem!important}.pb-md-4,.py-md-4{padding-bottom:1.5rem!important}.pl-md-4,.px-md-4{padding-left:1.5rem!important}.p-md-5{padding:3rem!important}.pt-md-5,.py-md-5{padding-top:3rem!important}.pr-md-5,.px-md-5{padding-right:3rem!important}.pb-md-5,.py-md-5{padding-bottom:3rem!important}.pl-md-5,.px-md-5{padding-left
:3rem!important}.m-md-n1{margin:-.25rem!important}.mt-md-n1,.my-md-n1{margin-top:-.25rem!important}.mr-md-n1,.mx-md-n1{margin-right:-.25rem!important}.mb-md-n1,.my-md-n1{margin-bottom:-.25rem!important}.ml-md-n1,.mx-md-n1{margin-left:-.25rem!important}.m-md-n2{margin:-.5rem!important}.mt-md-n2,.my-md-n2{margin-top:-.5rem!important}.mr-md-n2,.mx-md-n2{margin-right:-.5rem!important}.mb-md-n2,.my-md-n2{margin-bottom:-.5rem!important}.ml-md-n2,.mx-md-n2{margin-left:-.5rem!important}.m-md-n3{margin:-1rem!important}.mt-md-n3,.my-md-n3{margin-top:-1rem!important}.mr-md-n3,.mx-md-n3{margin-right:-1rem!important}.mb-md-n3,.my-md-n3{margin-bottom:-1rem!important}.ml-md-n3,.mx-md-n3{margin-left:-1rem!important}.m-md-n4{margin:-1.5rem!important}.mt-md-n4,.my-md-n4{margin-top:-1.5rem!important}.mr-md-n4,.mx-md-n4{margin-right:-1.5rem!important}.mb-md-n4,.my-md-n4{margin-bottom:-1.5rem!important}.ml-md-n4,.mx-md-n4{margin-left:-1.5rem!important}.m-md-n5{margin:-3rem!important}.mt-md-n5,.my-md-n5{margin-top:-3rem!important}.mr-md-n5,.mx-md-n5{margin-right:-3rem!important}.mb-md-n5,.my-md-n5{margin-bottom:-3rem!important}.ml-md-n5,.mx-md-n5{margin-left:-3rem!important}.m-md-auto{margin:auto!important}.mt-md-auto,.my-md-auto{margin-top:auto!important}.mr-md-auto,.mx-md-auto{margin-right:auto!important}.mb-md-auto,.my-md-auto{margin-bottom:auto!important}.ml-md-auto,.mx-md-auto{margin-left:auto!important}}@media (min-width:960px){.m-lg-0{margin:0!important}.mt-lg-0,.my-lg-0{margin-top:0!important}.mr-lg-0,.mx-lg-0{margin-right:0!important}.mb-lg-0,.my-lg-0{margin-bottom:0!important}.ml-lg-0,.mx-lg-0{margin-left:0!important}.m-lg-1{margin:.25rem!important}.mt-lg-1,.my-lg-1{margin-top:.25rem!important}.mr-lg-1,.mx-lg-1{margin-right:.25rem!important}.mb-lg-1,.my-lg-1{margin-bottom:.25rem!important}.ml-lg-1,.mx-lg-1{margin-left:.25rem!important}.m-lg-2{margin:.5rem!important}.mt-lg-2,.my-lg-2{margin-top:.5rem!important}.mr-lg-2,.mx-lg-2{margin-right:.5rem!important}.mb-lg-2,.my-lg-2{margin-bottom:.5rem!important}.ml-lg-2,.mx-lg-2{margin-left:.5rem!important}.m-lg-3{margin:1rem!important}.mt-lg-3,.my-lg-3{margin-top:1rem!important}.mr-lg-3,.mx-lg-3{margin-right:1rem!important}.mb-lg-3,.my-lg-3{margin-bottom:1rem!important}.ml-lg-3,.mx-lg-3{margin-left:1rem!important}.m-lg-4{margin:1.5rem!important}.mt-lg-4,.my-lg-4{margin-top:1.5rem!important}.mr-lg-4,.mx-lg-4{margin-right:1.5rem!important}.mb-lg-4,.my-lg-4{margin-bottom:1.5rem!important}.ml-lg-4,.mx-lg-4{margin-left:1.5rem!important}.m-lg-5{margin:3rem!important}.mt-lg-5,.my-lg-5{margin-top:3rem!important}.mr-lg-5,.mx-lg-5{margin-right:3rem!important}.mb-lg-5,.my-lg-5{margin-bottom:3rem!important}.ml-lg-5,.mx-lg-5{margin-left:3rem!important}.p-lg-0{padding:0!important}.pt-lg-0,.py-lg-0{padding-top:0!important}.pr-lg-0,.px-lg-0{padding-right:0!important}.pb-lg-0,.py-lg-0{padding-bottom:0!important}.pl-lg-0,.px-lg-0{padding-left:0!important}.p-lg-1{padding:.25rem!important}.pt-lg-1,.py-lg-1{padding-top:.25rem!important}.pr-lg-1,.px-lg-1{padding-right:.25rem!important}.pb-lg-1,.py-lg-1{padding-bottom:.25rem!important}.pl-lg-1,.px-lg-1{padding-left:.25rem!important}.p-lg-2{padding:.5rem!important}.pt-lg-2,.py-lg-2{padding-top:.5rem!important}.pr-lg-2,.px-lg-2{padding-right:.5rem!important}.pb-lg-2,.py-lg-2{padding-bottom:.5rem!important}.pl-lg-2,.px-lg-2{padding-left:.5rem!important}.p-lg-3{padding:1rem!important}.pt-lg-3,.py-lg-3{padding-top:1rem!important}.pr-lg-3,.px-lg-3{padding-right:1rem!important}.pb-lg-3,.py-lg-3{padding-bottom:1rem!important}.pl-lg-3,.px-lg
-3{padding-left:1rem!important}.p-lg-4{padding:1.5rem!important}.pt-lg-4,.py-lg-4{padding-top:1.5rem!important}.pr-lg-4,.px-lg-4{padding-right:1.5rem!important}.pb-lg-4,.py-lg-4{padding-bottom:1.5rem!important}.pl-lg-4,.px-lg-4{padding-left:1.5rem!important}.p-lg-5{padding:3rem!important}.pt-lg-5,.py-lg-5{padding-top:3rem!important}.pr-lg-5,.px-lg-5{padding-right:3rem!important}.pb-lg-5,.py-lg-5{padding-bottom:3rem!important}.pl-lg-5,.px-lg-5{padding-left:3rem!important}.m-lg-n1{margin:-.25rem!important}.mt-lg-n1,.my-lg-n1{margin-top:-.25rem!important}.mr-lg-n1,.mx-lg-n1{margin-right:-.25rem!important}.mb-lg-n1,.my-lg-n1{margin-bottom:-.25rem!important}.ml-lg-n1,.mx-lg-n1{margin-left:-.25rem!important}.m-lg-n2{margin:-.5rem!important}.mt-lg-n2,.my-lg-n2{margin-top:-.5rem!important}.mr-lg-n2,.mx-lg-n2{margin-right:-.5rem!important}.mb-lg-n2,.my-lg-n2{margin-bottom:-.5rem!important}.ml-lg-n2,.mx-lg-n2{margin-left:-.5rem!important}.m-lg-n3{margin:-1rem!important}.mt-lg-n3,.my-lg-n3{margin-top:-1rem!important}.mr-lg-n3,.mx-lg-n3{margin-right:-1rem!important}.mb-lg-n3,.my-lg-n3{margin-bottom:-1rem!important}.ml-lg-n3,.mx-lg-n3{margin-left:-1rem!important}.m-lg-n4{margin:-1.5rem!important}.mt-lg-n4,.my-lg-n4{margin-top:-1.5rem!important}.mr-lg-n4,.mx-lg-n4{margin-right:-1.5rem!important}.mb-lg-n4,.my-lg-n4{margin-bottom:-1.5rem!important}.ml-lg-n4,.mx-lg-n4{margin-left:-1.5rem!important}.m-lg-n5{margin:-3rem!important}.mt-lg-n5,.my-lg-n5{margin-top:-3rem!important}.mr-lg-n5,.mx-lg-n5{margin-right:-3rem!important}.mb-lg-n5,.my-lg-n5{margin-bottom:-3rem!important}.ml-lg-n5,.mx-lg-n5{margin-left:-3rem!important}.m-lg-auto{margin:auto!important}.mt-lg-auto,.my-lg-auto{margin-top:auto!important}.mr-lg-auto,.mx-lg-auto{margin-right:auto!important}.mb-lg-auto,.my-lg-auto{margin-bottom:auto!important}.ml-lg-auto,.mx-lg-auto{margin-left:auto!important}}@media 
(min-width:1200px){.m-xl-0{margin:0!important}.mt-xl-0,.my-xl-0{margin-top:0!important}.mr-xl-0,.mx-xl-0{margin-right:0!important}.mb-xl-0,.my-xl-0{margin-bottom:0!important}.ml-xl-0,.mx-xl-0{margin-left:0!important}.m-xl-1{margin:.25rem!important}.mt-xl-1,.my-xl-1{margin-top:.25rem!important}.mr-xl-1,.mx-xl-1{margin-right:.25rem!important}.mb-xl-1,.my-xl-1{margin-bottom:.25rem!important}.ml-xl-1,.mx-xl-1{margin-left:.25rem!important}.m-xl-2{margin:.5rem!important}.mt-xl-2,.my-xl-2{margin-top:.5rem!important}.mr-xl-2,.mx-xl-2{margin-right:.5rem!important}.mb-xl-2,.my-xl-2{margin-bottom:.5rem!important}.ml-xl-2,.mx-xl-2{margin-left:.5rem!important}.m-xl-3{margin:1rem!important}.mt-xl-3,.my-xl-3{margin-top:1rem!important}.mr-xl-3,.mx-xl-3{margin-right:1rem!important}.mb-xl-3,.my-xl-3{margin-bottom:1rem!important}.ml-xl-3,.mx-xl-3{margin-left:1rem!important}.m-xl-4{margin:1.5rem!important}.mt-xl-4,.my-xl-4{margin-top:1.5rem!important}.mr-xl-4,.mx-xl-4{margin-right:1.5rem!important}.mb-xl-4,.my-xl-4{margin-bottom:1.5rem!important}.ml-xl-4,.mx-xl-4{margin-left:1.5rem!important}.m-xl-5{margin:3rem!important}.mt-xl-5,.my-xl-5{margin-top:3rem!important}.mr-xl-5,.mx-xl-5{margin-right:3rem!important}.mb-xl-5,.my-xl-5{margin-bottom:3rem!important}.ml-xl-5,.mx-xl-5{margin-left:3rem!important}.p-xl-0{padding:0!important}.pt-xl-0,.py-xl-0{padding-top:0!important}.pr-xl-0,.px-xl-0{padding-right:0!important}.pb-xl-0,.py-xl-0{padding-bottom:0!important}.pl-xl-0,.px-xl-0{padding-left:0!important}.p-xl-1{padding:.25rem!important}.pt-xl-1,.py-xl-1{padding-top:.25rem!important}.pr-xl-1,.px-xl-1{padding-right:.25rem!important}.pb-xl-1,.py-xl-1{padding-bottom:.25rem!important}.pl-xl-1,.px-xl-1{padding-left:.25rem!important}.p-xl-2{padding:.5rem!important}.pt-xl-2,.py-xl-2{padding-top:.5rem!important}.pr-xl-2,.px-xl-2{padding-right:.5rem!important}.pb-xl-2,.py-xl-2{padding-bottom:.5rem!important}.pl-xl-2,.px-xl-2{padding-left:.5rem!important}.p-xl-3{padding:1rem!important}.pt-xl-3,.py-xl-3{padding-top:1rem!important}.pr-xl-3,.px-xl-3{padding-right:1rem!important}.pb-xl-3,.py-xl-3{padding-bottom:1rem!important}.pl-xl-3,.px-xl-3{padding-left:1rem!important}.p-xl-4{padding:1.5rem!important}.pt-xl-4,.py-xl-4{padding-top:1.5rem!important}.pr-xl-4,.px-xl-4{padding-right:1.5rem!important}.pb-xl-4,.py-xl-4{padding-bottom:1.5rem!important}.pl-xl-4,.px-xl-4{padding-left:1.5rem!important}.p-xl-5{padding:3rem!important}.pt-xl-5,.py-xl-5{padding-top:3rem!important}.pr-xl-5,.px-xl-5{padding-right:3rem!important}.pb-xl-5,.py-xl-5{padding-bottom:3rem!important}.pl-xl-5,.px-xl-5{padding-left:3rem!important}.m-xl-n1{margin:-.25rem!important}.mt-xl-n1,.my-xl-n1{margin-top:-.25rem!important}.mr-xl-n1,.mx-xl-n1{margin-right:-.25rem!important}.mb-xl-n1,.my-xl-n1{margin-bottom:-.25rem!important}.ml-xl-n1,.mx-xl-n1{margin-left:-.25rem!important}.m-xl-n2{margin:-.5rem!important}.mt-xl-n2,.my-xl-n2{margin-top:-.5rem!important}.mr-xl-n2,.mx-xl-n2{margin-right:-.5rem!important}.mb-xl-n2,.my-xl-n2{margin-bottom:-.5rem!important}.ml-xl-n2,.mx-xl-n2{margin-left:-.5rem!important}.m-xl-n3{margin:-1rem!important}.mt-xl-n3,.my-xl-n3{margin-top:-1rem!important}.mr-xl-n3,.mx-xl-n3{margin-right:-1rem!important}.mb-xl-n3,.my-xl-n3{margin-bottom:-1rem!important}.ml-xl-n3,.mx-xl-n3{margin-left:-1rem!important}.m-xl-n4{margin:-1.5rem!important}.mt-xl-n4,.my-xl-n4{margin-top:-1.5rem!important}.mr-xl-n4,.mx-xl-n4{margin-right:-1.5rem!important}.mb-xl-n4,.my-xl-n4{margin-bottom:-1.5rem!important}.ml-xl-n4,.mx-xl-n4{margin-left:-1.5rem!important}.m-xl-n5{marg
in:-3rem!important}.mt-xl-n5,.my-xl-n5{margin-top:-3rem!important}.mr-xl-n5,.mx-xl-n5{margin-right:-3rem!important}.mb-xl-n5,.my-xl-n5{margin-bottom:-3rem!important}.ml-xl-n5,.mx-xl-n5{margin-left:-3rem!important}.m-xl-auto{margin:auto!important}.mt-xl-auto,.my-xl-auto{margin-top:auto!important}.mr-xl-auto,.mx-xl-auto{margin-right:auto!important}.mb-xl-auto,.my-xl-auto{margin-bottom:auto!important}.ml-xl-auto,.mx-xl-auto{margin-left:auto!important}}.stretched-link:after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;pointer-events:auto;content:"";background-color:transparent}.text-monospace{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace!important}.text-justify{text-align:justify!important}.text-wrap{white-space:normal!important}.text-nowrap{white-space:nowrap!important}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.text-left{text-align:left!important}.text-right{text-align:right!important}.text-center{text-align:center!important}@media (min-width:540px){.text-sm-left{text-align:left!important}.text-sm-right{text-align:right!important}.text-sm-center{text-align:center!important}}@media (min-width:720px){.text-md-left{text-align:left!important}.text-md-right{text-align:right!important}.text-md-center{text-align:center!important}}@media (min-width:960px){.text-lg-left{text-align:left!important}.text-lg-right{text-align:right!important}.text-lg-center{text-align:center!important}}@media (min-width:1200px){.text-xl-left{text-align:left!important}.text-xl-right{text-align:right!important}.text-xl-center{text-align:center!important}}.text-lowercase{text-transform:lowercase!important}.text-uppercase{text-transform:uppercase!important}.text-capitalize{text-transform:capitalize!important}.font-weight-light{font-weight:300!important}.font-weight-lighter{font-weight:lighter!important}.font-weight-normal{font-weight:400!important}.font-weight-bold{font-weight:700!important}.font-weight-bolder{font-weight:bolder!important}.font-italic{font-style:italic!important}.text-white{color:#fff!important}.text-primary{color:#007bff!important}a.text-primary:focus,a.text-primary:hover{color:#0056b3!important}.text-secondary{color:#6c757d!important}a.text-secondary:focus,a.text-secondary:hover{color:#494f54!important}.text-success{color:#28a745!important}a.text-success:focus,a.text-success:hover{color:#19692c!important}.text-info{color:#17a2b8!important}a.text-info:focus,a.text-info:hover{color:#0f6674!important}.text-warning{color:#ffc107!important}a.text-warning:focus,a.text-warning:hover{color:#ba8b00!important}.text-danger{color:#dc3545!important}a.text-danger:focus,a.text-danger:hover{color:#a71d2a!important}.text-light{color:#f8f9fa!important}a.text-light:focus,a.text-light:hover{color:#cbd3da!important}.text-dark{color:#343a40!important}a.text-dark:focus,a.text-dark:hover{color:#121416!important}.text-body{color:#212529!important}.text-muted{color:#6c757d!important}.text-black-50{color:rgba(0,0,0,.5)!important}.text-white-50{color:hsla(0,0%,100%,.5)!important}.text-hide{font:0/0 a;color:transparent;text-shadow:none;background-color:transparent;border:0}.text-decoration-none{text-decoration:none!important}.text-break{word-wrap:break-word!important}.text-reset{color:inherit!important}.visible{visibility:visible!important}.invisible{visibility:hidden!important}@media print{*,:after,:before{text-shadow:none!important;box-shadow:none!important}a:not(.btn){text-decoration:underline}abbr[title]:after{content:" (" attr(title) 
")"}pre{white-space:pre-wrap!important}blockquote,pre{border:1px solid #adb5bd;page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}@page{size:a3}.container,body{min-width:960px!important}.navbar{display:none}.badge{border:1px solid #000}.table{border-collapse:collapse!important}.table td,.table th{background-color:#fff!important}.table-bordered td,.table-bordered th{border:1px solid #dee2e6!important}.table-dark{color:inherit}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#dee2e6}.table .thead-dark th{color:inherit;border-color:#dee2e6}}html{font-size:var(--pst-font-size-base);scroll-padding-top:calc(var(--pst-header-height) + 12px)}body{padding-top:calc(var(--pst-header-height) + 20px);background-color:#fff;font-family:var(--pst-font-family-base);font-weight:400;line-height:1.65;color:rgba(var(--pst-color-text-base),1)}p{margin-bottom:1.15rem;font-size:1em;color:rgba(var(--pst-color-paragraph),1)}p.rubric{border-bottom:1px solid #c9c9c9}a{color:rgba(var(--pst-color-link),1);text-decoration:none}a:hover{color:rgba(var(--pst-color-link-hover),1);text-decoration:underline}a.headerlink{color:rgba(var(--pst-color-headerlink),1);font-size:.8em;padding:0 4px;text-decoration:none}a.headerlink:hover{background-color:rgba(var(--pst-color-headerlink),1);color:rgba(var(--pst-color-headerlink-hover),1)}.heading-style,h1,h2,h3,h4,h5,h6{margin:2.75rem 0 1.05rem;font-family:var(--pst-font-family-heading);font-weight:400;line-height:1.15}h1{margin-top:0;font-size:var(--pst-font-size-h1);color:rgba(var(--pst-color-h1),1)}h2{font-size:var(--pst-font-size-h2);color:rgba(var(--pst-color-h2),1)}h3{font-size:var(--pst-font-size-h3);color:rgba(var(--pst-color-h3),1)}h4{font-size:var(--pst-font-size-h4);color:rgba(var(--pst-color-h4),1)}h5{font-size:var(--pst-font-size-h5);color:rgba(var(--pst-color-h5),1)}h6{font-size:var(--pst-font-size-h6);color:rgba(var(--pst-color-h6),1)}.text_small,small{font-size:var(--pst-font-size-milli)}hr{border:0;border-top:1px solid #e5e5e5}code,kbd,pre,samp{font-family:var(--pst-font-family-monospace)}code{color:rgba(var(--pst-color-inline-code),1)}pre{margin:1.5em 0;padding:10px;background-color:rgba(var(--pst-color-preformatted-background),1);color:rgba(var(--pst-color-preformatted-text),1);line-height:1.2em;border:1px solid #c9c9c9;box-shadow:1px 1px 1px #d8d8d8}.navbar{position:fixed;min-height:var(--pst-header-height);width:100%;padding:0}.navbar .container-xl{height:100%}@media (min-width:960px){.navbar #navbar-end>.navbar-end-item{display:inline-block}}.navbar-brand{position:relative;height:var(--pst-header-height);width:auto;padding:.5rem 0}.navbar-brand img{max-width:100%;height:100%;width:auto}.navbar-light{background:#fff!important;box-shadow:0 .125rem .25rem 0 rgba(0,0,0,.11)}.navbar-light .navbar-nav li a.nav-link{padding:0 .5rem;color:rgba(var(--pst-color-navbar-link),1)}.navbar-light .navbar-nav li a.nav-link:hover{color:rgba(var(--pst-color-navbar-link-hover),1)}.navbar-light .navbar-nav>.active>.nav-link{font-weight:600;color:rgba(var(--pst-color-navbar-link-active),1)}.navbar-header a{padding:0 15px}.admonition{margin:1.5625em auto;padding:0 .6rem .8rem!important;overflow:hidden;page-break-inside:avoid;border-left:.2rem 
solid;border-left-color:rgba(var(--pst-color-admonition-default),1);border-bottom-color:rgba(var(--pst-color-admonition-default),1);border-right-color:rgba(var(--pst-color-admonition-default),1);border-top-color:rgba(var(--pst-color-admonition-default),1);border-radius:.1rem;box-shadow:0 .2rem .5rem rgba(0,0,0,.05),0 0 .05rem rgba(0,0,0,.1);transition:color .25s,background-color .25s,border-color .25s}.admonition :last-child{margin-bottom:0}.admonition p.admonition-title~*{padding:0 1.4rem}.admonition>ol,.admonition>ul{margin-left:1em}.admonition .admonition-title{position:relative;margin:0 -.6rem!important;padding:.4rem .6rem .4rem 2rem;font-weight:700;background-color:rgba(var(--pst-color-admonition-default),.1)}.admonition .admonition-title:before{position:absolute;left:.6rem;width:1rem;height:1rem;color:rgba(var(--pst-color-admonition-default),1);font-family:Font Awesome\ 5 Free;font-weight:900;content:var(--pst-icon-admonition-default)}.admonition .admonition-title+*{margin-top:.4em}.admonition.attention{border-color:rgba(var(--pst-color-admonition-attention),1)}.admonition.attention .admonition-title{background-color:rgba(var(--pst-color-admonition-attention),.1)}.admonition.attention .admonition-title:before{color:rgba(var(--pst-color-admonition-attention),1);content:var(--pst-icon-admonition-attention)}.admonition.caution{border-color:rgba(var(--pst-color-admonition-caution),1)}.admonition.caution .admonition-title{background-color:rgba(var(--pst-color-admonition-caution),.1)}.admonition.caution .admonition-title:before{color:rgba(var(--pst-color-admonition-caution),1);content:var(--pst-icon-admonition-caution)}.admonition.warning{border-color:rgba(var(--pst-color-admonition-warning),1)}.admonition.warning .admonition-title{background-color:rgba(var(--pst-color-admonition-warning),.1)}.admonition.warning .admonition-title:before{color:rgba(var(--pst-color-admonition-warning),1);content:var(--pst-icon-admonition-warning)}.admonition.danger{border-color:rgba(var(--pst-color-admonition-danger),1)}.admonition.danger .admonition-title{background-color:rgba(var(--pst-color-admonition-danger),.1)}.admonition.danger .admonition-title:before{color:rgba(var(--pst-color-admonition-danger),1);content:var(--pst-icon-admonition-danger)}.admonition.error{border-color:rgba(var(--pst-color-admonition-error),1)}.admonition.error .admonition-title{background-color:rgba(var(--pst-color-admonition-error),.1)}.admonition.error .admonition-title:before{color:rgba(var(--pst-color-admonition-error),1);content:var(--pst-icon-admonition-error)}.admonition.hint{border-color:rgba(var(--pst-color-admonition-hint),1)}.admonition.hint .admonition-title{background-color:rgba(var(--pst-color-admonition-hint),.1)}.admonition.hint .admonition-title:before{color:rgba(var(--pst-color-admonition-hint),1);content:var(--pst-icon-admonition-hint)}.admonition.tip{border-color:rgba(var(--pst-color-admonition-tip),1)}.admonition.tip .admonition-title{background-color:rgba(var(--pst-color-admonition-tip),.1)}.admonition.tip .admonition-title:before{color:rgba(var(--pst-color-admonition-tip),1);content:var(--pst-icon-admonition-tip)}.admonition.important{border-color:rgba(var(--pst-color-admonition-important),1)}.admonition.important .admonition-title{background-color:rgba(var(--pst-color-admonition-important),.1)}.admonition.important 
.admonition-title:before{color:rgba(var(--pst-color-admonition-important),1);content:var(--pst-icon-admonition-important)}.admonition.note{border-color:rgba(var(--pst-color-admonition-note),1)}.admonition.note .admonition-title{background-color:rgba(var(--pst-color-admonition-note),.1)}.admonition.note .admonition-title:before{color:rgba(var(--pst-color-admonition-note),1);content:var(--pst-icon-admonition-note)}div.deprecated{margin-bottom:10px;margin-top:10px;padding:7px;background-color:#f3e5e5;border:1px solid #eed3d7;border-radius:.5rem}div.deprecated p{color:#b94a48;display:inline}.topic{background-color:#eee}.seealso dd{margin-top:0;margin-bottom:0}.viewcode-back{font-family:var(--pst-font-family-base)}.viewcode-block:target{background-color:#f4debf;border-top:1px solid #ac9;border-bottom:1px solid #ac9}span.guilabel{border:1px solid #7fbbe3;background:#e7f2fa;font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}table.field-list{border-collapse:separate;border-spacing:10px;margin-left:1px}table.field-list th.field-name{padding:1px 8px 1px 5px;white-space:nowrap;background-color:#eee}table.field-list td.field-body p{font-style:italic}table.field-list td.field-body p>strong{font-style:normal}table.field-list td.field-body blockquote{border-left:none;margin:0 0 .3em;padding-left:30px}.table.autosummary td:first-child{white-space:nowrap}footer{width:100%;border-top:1px solid #ccc;padding:10px}footer .footer-item p{margin-bottom:0}.bd-search{position:relative;padding:1rem 15px;margin-right:-15px;margin-left:-15px}.bd-search .icon{position:absolute;color:#a4a6a7;left:25px;top:25px}.bd-search input{border-radius:0;border:0;border-bottom:1px solid #e5e5e5;padding-left:35px}.bd-toc{-ms-flex-order:2;order:2;height:calc(100vh - 2rem);overflow-y:auto}@supports (position:-webkit-sticky) or (position:sticky){.bd-toc{position:-webkit-sticky;position:sticky;top:calc(var(--pst-header-height) + 20px);height:calc(100vh - 5rem);overflow-y:auto}}.bd-toc .onthispage{color:#a4a6a7}.section-nav{padding-left:0;border-left:1px solid #eee;border-bottom:none}.section-nav ul{padding-left:1rem}.toc-entry,.toc-entry a{display:block}.toc-entry a{padding:.125rem 1.5rem;color:rgba(var(--pst-color-toc-link),1)}@media (min-width:1200px){.toc-entry a{padding-right:0}}.toc-entry a:hover{color:rgba(var(--pst-color-toc-link-hover),1);text-decoration:none}.bd-sidebar{padding-top:1em}@media (min-width:720px){.bd-sidebar{border-right:1px solid rgba(0,0,0,.1)}@supports (position:-webkit-sticky) or (position:sticky){.bd-sidebar{position:-webkit-sticky;position:sticky;top:calc(var(--pst-header-height) + 20px);z-index:1000;height:calc(100vh - var(--pst-header-height) - 20px)}}}.bd-sidebar.no-sidebar{border-right:0}.bd-links{padding-top:1rem;padding-bottom:1rem;margin-right:-15px;margin-left:-15px}@media (min-width:720px){.bd-links{display:block!important}@supports (position:-webkit-sticky) or (position:sticky){.bd-links{max-height:calc(100vh - 11rem);overflow-y:auto}}}.bd-sidenav{display:none}.bd-content{padding-top:20px}.bd-content .section{max-width:100%}.bd-content .section table{display:block;overflow:auto}.bd-toc-link{display:block;padding:.25rem 
1.5rem;font-weight:600;color:rgba(0,0,0,.65)}.bd-toc-link:hover{color:rgba(0,0,0,.85);text-decoration:none}.bd-toc-item.active{margin-bottom:1rem}.bd-toc-item.active:not(:first-child){margin-top:1rem}.bd-toc-item.active>.bd-toc-link{color:rgba(0,0,0,.85)}.bd-toc-item.active>.bd-toc-link:hover{background-color:transparent}.bd-toc-item.active>.bd-sidenav{display:block}nav.bd-links p.caption{font-size:var(--pst-sidebar-caption-font-size);text-transform:uppercase;font-weight:700;position:relative;margin-top:1.25em;margin-bottom:.5em;padding:0 1.5rem;color:rgba(var(--pst-color-sidebar-caption),1)}nav.bd-links p.caption:first-child{margin-top:0}.bd-sidebar .nav{font-size:var(--pst-sidebar-font-size)}.bd-sidebar .nav ul{list-style:none;padding:0 0 0 1.5rem}.bd-sidebar .nav li>a{display:block;padding:.25rem 1.5rem;color:rgba(var(--pst-color-sidebar-link),1)}.bd-sidebar .nav li>a:hover{color:rgba(var(--pst-color-sidebar-link-hover),1);text-decoration:none;background-color:transparent}.bd-sidebar .nav li>a.reference.external:after{font-family:Font Awesome\ 5 Free;font-weight:900;content:"\f35d";font-size:.75em;margin-left:.3em}.bd-sidebar .nav .active:hover>a,.bd-sidebar .nav .active>a{font-weight:600;color:rgba(var(--pst-color-sidebar-link-active),1)}.toc-h2{font-size:.85rem}.toc-h3{font-size:.75rem}.toc-h4{font-size:.65rem}.toc-entry>.nav-link.active{font-weight:600;color:#130654;color:rgba(var(--pst-color-toc-link-active),1);background-color:transparent;border-left:2px solid rgba(var(--pst-color-toc-link-active),1)}.nav-link:hover{border-style:none}#navbar-main-elements li.nav-item i{font-size:.7rem;padding-left:2px;vertical-align:middle}.bd-toc .nav .nav{display:none}.bd-toc .nav .nav.visible,.bd-toc .nav>.active>ul{display:block}.prev-next-bottom{margin:20px 0}.prev-next-bottom a.left-prev,.prev-next-bottom a.right-next{padding:10px;border:1px solid rgba(0,0,0,.2);max-width:45%;overflow-x:hidden;color:rgba(0,0,0,.65)}.prev-next-bottom a.left-prev{float:left}.prev-next-bottom a.left-prev:before{content:"<< "}.prev-next-bottom a.right-next{float:right}.prev-next-bottom a.right-next:after{content:" >>"}.alert{padding-bottom:0}.alert-info a{color:#e83e8c}#navbar-icon-links i.fa,#navbar-icon-links i.fab,#navbar-icon-links i.far,#navbar-icon-links i.fas{vertical-align:middle;font-style:normal;font-size:1.5rem;line-height:1.25}#navbar-icon-links i.fa-github-square:before{color:#333}#navbar-icon-links i.fa-twitter-square:before{color:#55acee}#navbar-icon-links i.fa-gitlab:before{color:#548}#navbar-icon-links i.fa-bitbucket:before{color:#0052cc}.tocsection{border-left:1px solid #eee;padding:.3rem 1.5rem}.tocsection i{padding-right:.5rem}.editthispage{padding-top:2rem}.editthispage a{color:#130754}.xr-wrap[hidden]{display:block!important}.toctree-checkbox{position:absolute;display:none}.toctree-checkbox~ul{display:none}.toctree-checkbox~label i{transform:rotate(0deg)}.toctree-checkbox:checked~ul{display:block}.toctree-checkbox:checked~label i{transform:rotate(180deg)}.bd-sidebar li{position:relative}.bd-sidebar label{position:absolute;top:0;right:0;height:30px;width:30px;cursor:pointer;display:flex;justify-content:center;align-items:center}.bd-sidebar label:hover{background:rgba(var(--pst-color-sidebar-expander-background-hover),1)}.bd-sidebar label i{display:inline-block;font-size:.75rem;text-align:center}.bd-sidebar label i:hover{color:rgba(var(--pst-color-sidebar-link-hover),1)}.bd-sidebar li.has-children>.reference{padding-right:30px}div.doctest>div.highlight span.gp,span.linenos,table.highlighttable 
td.linenos{user-select:none!important;-webkit-user-select:text!important;-webkit-user-select:none!important;-moz-user-select:none!important;-ms-user-select:none!important} \ No newline at end of file diff --git a/0.4/_static/css/theme.css b/0.4/_static/css/theme.css new file mode 100644 index 00000000..3f6e79da --- /dev/null +++ b/0.4/_static/css/theme.css @@ -0,0 +1,117 @@ +:root { + /***************************************************************************** + * Theme config + **/ + --pst-header-height: 60px; + + /***************************************************************************** + * Font size + **/ + --pst-font-size-base: 15px; /* base font size - applied at body / html level */ + + /* heading font sizes */ + --pst-font-size-h1: 36px; + --pst-font-size-h2: 32px; + --pst-font-size-h3: 26px; + --pst-font-size-h4: 21px; + --pst-font-size-h5: 18px; + --pst-font-size-h6: 16px; + + /* smaller then heading font sizes*/ + --pst-font-size-milli: 12px; + + --pst-sidebar-font-size: .9em; + --pst-sidebar-caption-font-size: .9em; + + /***************************************************************************** + * Font family + **/ + /* These are adapted from https://systemfontstack.com/ */ + --pst-font-family-base-system: -apple-system, BlinkMacSystemFont, Segoe UI, "Helvetica Neue", + Arial, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol; + --pst-font-family-monospace-system: "SFMono-Regular", Menlo, Consolas, Monaco, + Liberation Mono, Lucida Console, monospace; + + --pst-font-family-base: var(--pst-font-family-base-system); + --pst-font-family-heading: var(--pst-font-family-base); + --pst-font-family-monospace: var(--pst-font-family-monospace-system); + + /***************************************************************************** + * Color + * + * Colors are defined in rgb string way, "red, green, blue" + **/ + --pst-color-primary: 19, 6, 84; + --pst-color-success: 40, 167, 69; + --pst-color-info: 0, 123, 255; /*23, 162, 184;*/ + --pst-color-warning: 255, 193, 7; + --pst-color-danger: 220, 53, 69; + --pst-color-text-base: 51, 51, 51; + + --pst-color-h1: var(--pst-color-primary); + --pst-color-h2: var(--pst-color-primary); + --pst-color-h3: var(--pst-color-text-base); + --pst-color-h4: var(--pst-color-text-base); + --pst-color-h5: var(--pst-color-text-base); + --pst-color-h6: var(--pst-color-text-base); + --pst-color-paragraph: var(--pst-color-text-base); + --pst-color-link: 0, 91, 129; + --pst-color-link-hover: 227, 46, 0; + --pst-color-headerlink: 198, 15, 15; + --pst-color-headerlink-hover: 255, 255, 255; + --pst-color-preformatted-text: 34, 34, 34; + --pst-color-preformatted-background: 250, 250, 250; + --pst-color-inline-code: 232, 62, 140; + + --pst-color-active-navigation: 19, 6, 84; + --pst-color-navbar-link: 77, 77, 77; + --pst-color-navbar-link-hover: var(--pst-color-active-navigation); + --pst-color-navbar-link-active: var(--pst-color-active-navigation); + --pst-color-sidebar-link: 77, 77, 77; + --pst-color-sidebar-link-hover: var(--pst-color-active-navigation); + --pst-color-sidebar-link-active: var(--pst-color-active-navigation); + --pst-color-sidebar-expander-background-hover: 244, 244, 244; + --pst-color-sidebar-caption: 77, 77, 77; + --pst-color-toc-link: 119, 117, 122; + --pst-color-toc-link-hover: var(--pst-color-active-navigation); + --pst-color-toc-link-active: var(--pst-color-active-navigation); + + /***************************************************************************** + * Icon + **/ + + /* font awesome icons*/ + 
--pst-icon-check-circle: '\f058'; + --pst-icon-info-circle: '\f05a'; + --pst-icon-exclamation-triangle: '\f071'; + --pst-icon-exclamation-circle: '\f06a'; + --pst-icon-times-circle: '\f057'; + --pst-icon-lightbulb: '\f0eb'; + + /***************************************************************************** + * Admonitions + **/ + + --pst-color-admonition-default: var(--pst-color-info); + --pst-color-admonition-note: var(--pst-color-info); + --pst-color-admonition-attention: var(--pst-color-warning); + --pst-color-admonition-caution: var(--pst-color-warning); + --pst-color-admonition-warning: var(--pst-color-warning); + --pst-color-admonition-danger: var(--pst-color-danger); + --pst-color-admonition-error: var(--pst-color-danger); + --pst-color-admonition-hint: var(--pst-color-success); + --pst-color-admonition-tip: var(--pst-color-success); + --pst-color-admonition-important: var(--pst-color-success); + + --pst-icon-admonition-default: var(--pst-icon-info-circle); + --pst-icon-admonition-note: var(--pst-icon-info-circle); + --pst-icon-admonition-attention: var(--pst-icon-exclamation-circle); + --pst-icon-admonition-caution: var(--pst-icon-exclamation-triangle); + --pst-icon-admonition-warning: var(--pst-icon-exclamation-triangle); + --pst-icon-admonition-danger: var(--pst-icon-exclamation-triangle); + --pst-icon-admonition-error: var(--pst-icon-times-circle); + --pst-icon-admonition-hint: var(--pst-icon-lightbulb); + --pst-icon-admonition-tip: var(--pst-icon-lightbulb); + --pst-icon-admonition-important: var(--pst-icon-exclamation-circle); + +} diff --git a/0.4/_static/custom.css b/0.4/_static/custom.css new file mode 100644 index 00000000..668b8429 --- /dev/null +++ b/0.4/_static/custom.css @@ -0,0 +1,120 @@ +h1.site-logo { + font-size: 30px !important; +} + +h1.site-logo small { + font-size: 20px !important; +} + +code { + display: inline-block; + border-radius: 4px; + padding: 0 4px; + background-color: #eee; + color: rgb(232, 62, 140); +} + +.right-next, .left-prev { + border-radius: 8px; + border-width: 0px !important; + box-shadow: 2px 2px 6px rgba(0, 0, 0, 0.2); +} + +.right-next:hover, .left-prev:hover { + text-decoration: none; +} + +.admonition { + border-radius: 8px; + border-width: 0; + box-shadow: 0 0 0 !important; +} + +.note { background-color: rgba(0, 123, 255, 0.1); } +.note * { color: rgb(69 94 121); } + +.warning { background-color: rgb(220 150 40 / 10%); } +.warning * { color: rgb(105 72 28); } + +.input_area, .output_area, .output_area img { + border-radius: 8px !important; + border-width: 0 !important; + margin: 8px 0 8px 0; +} + +.output_area { + padding: 4px; + background-color: hsl(227 60% 11% / 0.7) !important; +} + +.output_area pre { + color: #fff; + line-height: 20px !important; +} + +.input_area pre { + background-color: rgba(0 0 0 / 3%) !important; + padding: 12px !important; + line-height: 20px; +} + +.ansi-green-intense-fg { + color: #64d88b !important; +} + +#site-navigation { + background-color: #fafafa; +} + +.container, .container-lg, .container-md, .container-sm, .container-xl { + max-width: inherit !important; +} + +h1, h2 { + font-weight: bold !important; +} + +#main-content .section { + max-width: 900px !important; + margin: 0 auto !important; + font-size: 16px; +} + +p.caption { + font-weight: bold; +} + +h2 { + padding-bottom: 5px; + border-bottom: 1px solid #ccc; +} + +h3 { + margin-top: 1.5rem; +} + +tbody, thead, pre { + border: 1px solid rgba(0, 0, 0, 0.25); +} + +table td, th { + padding: 8px; +} + +table p { + margin-bottom: 0; +} + +table td 
code { + white-space: nowrap; +} + +table tr, +table th { + border-bottom: 1px solid rgba(0, 0, 0, 0.1); +} + +table tr:last-child { + border-bottom: 0; +} + diff --git a/0.4/_static/doctools.js b/0.4/_static/doctools.js new file mode 100644 index 00000000..d06a71d7 --- /dev/null +++ b/0.4/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. + */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? 
singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); + if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } + } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git a/0.4/_static/documentation_options.js b/0.4/_static/documentation_options.js new file mode 100644 index 00000000..c141a072 --- /dev/null +++ b/0.4/_static/documentation_options.js @@ -0,0 +1,13 @@ +const DOCUMENTATION_OPTIONS = { + VERSION: '0.4', + LANGUAGE: 'en', + COLLAPSE_INDEX: false, + BUILDER: 'dirhtml', + FILE_SUFFIX: '.html', + LINK_SUFFIX: '.html', + HAS_SOURCE: true, + SOURCELINK_SUFFIX: '.txt', + NAVIGATION_WITH_KEYS: true, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, +}; \ No newline at end of file diff --git a/0.4/_static/file.png b/0.4/_static/file.png new file mode 100644 index 00000000..a858a410 Binary files /dev/null and b/0.4/_static/file.png differ diff --git a/0.4/_static/images/logo_binder.svg b/0.4/_static/images/logo_binder.svg new file mode 100644 index 00000000..45fecf75 --- /dev/null +++ b/0.4/_static/images/logo_binder.svg @@ -0,0 +1,19 @@ + + + + +logo + + + + + + 
+ + diff --git a/0.4/_static/images/logo_colab.png b/0.4/_static/images/logo_colab.png new file mode 100644 index 00000000..b7560ec2 Binary files /dev/null and b/0.4/_static/images/logo_colab.png differ diff --git a/0.4/_static/images/logo_jupyterhub.svg b/0.4/_static/images/logo_jupyterhub.svg new file mode 100644 index 00000000..60cfe9f2 --- /dev/null +++ b/0.4/_static/images/logo_jupyterhub.svg @@ -0,0 +1 @@ +logo_jupyterhubHub diff --git a/0.4/_static/js/index.1c5a1a01449ed65a7b51.js b/0.4/_static/js/index.1c5a1a01449ed65a7b51.js new file mode 100644 index 00000000..b71f7fcc --- /dev/null +++ b/0.4/_static/js/index.1c5a1a01449ed65a7b51.js @@ -0,0 +1,32 @@ +!function(t){var e={};function n(i){if(e[i])return e[i].exports;var o=e[i]={i:i,l:!1,exports:{}};return t[i].call(o.exports,o,o.exports,n),o.l=!0,o.exports}n.m=t,n.c=e,n.d=function(t,e,i){n.o(t,e)||Object.defineProperty(t,e,{enumerable:!0,get:i})},n.r=function(t){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(t,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(t,"__esModule",{value:!0})},n.t=function(t,e){if(1&e&&(t=n(t)),8&e)return t;if(4&e&&"object"==typeof t&&t&&t.__esModule)return t;var i=Object.create(null);if(n.r(i),Object.defineProperty(i,"default",{enumerable:!0,value:t}),2&e&&"string"!=typeof t)for(var o in t)n.d(i,o,function(e){return t[e]}.bind(null,o));return i},n.n=function(t){var e=t&&t.__esModule?function(){return t.default}:function(){return t};return n.d(e,"a",e),e},n.o=function(t,e){return Object.prototype.hasOwnProperty.call(t,e)},n.p="",n(n.s=2)}([function(t,e){t.exports=jQuery},function(t,e,n){"use strict";n.r(e),function(t){ +/**! + * @fileOverview Kickass library to create and place poppers near their reference elements. + * @version 1.16.1 + * @license + * Copyright (c) 2016 Federico Zivolo and contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. 
+ */ +var n="undefined"!=typeof window&&"undefined"!=typeof document&&"undefined"!=typeof navigator,i=function(){for(var t=["Edge","Trident","Firefox"],e=0;e=0)return 1;return 0}();var o=n&&window.Promise?function(t){var e=!1;return function(){e||(e=!0,window.Promise.resolve().then((function(){e=!1,t()})))}}:function(t){var e=!1;return function(){e||(e=!0,setTimeout((function(){e=!1,t()}),i))}};function r(t){return t&&"[object Function]"==={}.toString.call(t)}function s(t,e){if(1!==t.nodeType)return[];var n=t.ownerDocument.defaultView.getComputedStyle(t,null);return e?n[e]:n}function a(t){return"HTML"===t.nodeName?t:t.parentNode||t.host}function l(t){if(!t)return document.body;switch(t.nodeName){case"HTML":case"BODY":return t.ownerDocument.body;case"#document":return t.body}var e=s(t),n=e.overflow,i=e.overflowX,o=e.overflowY;return/(auto|scroll|overlay)/.test(n+o+i)?t:l(a(t))}function c(t){return t&&t.referenceNode?t.referenceNode:t}var u=n&&!(!window.MSInputMethodContext||!document.documentMode),h=n&&/MSIE 10/.test(navigator.userAgent);function f(t){return 11===t?u:10===t?h:u||h}function d(t){if(!t)return document.documentElement;for(var e=f(10)?document.body:null,n=t.offsetParent||null;n===e&&t.nextElementSibling;)n=(t=t.nextElementSibling).offsetParent;var i=n&&n.nodeName;return i&&"BODY"!==i&&"HTML"!==i?-1!==["TH","TD","TABLE"].indexOf(n.nodeName)&&"static"===s(n,"position")?d(n):n:t?t.ownerDocument.documentElement:document.documentElement}function p(t){return null!==t.parentNode?p(t.parentNode):t}function m(t,e){if(!(t&&t.nodeType&&e&&e.nodeType))return document.documentElement;var n=t.compareDocumentPosition(e)&Node.DOCUMENT_POSITION_FOLLOWING,i=n?t:e,o=n?e:t,r=document.createRange();r.setStart(i,0),r.setEnd(o,0);var s,a,l=r.commonAncestorContainer;if(t!==l&&e!==l||i.contains(o))return"BODY"===(a=(s=l).nodeName)||"HTML"!==a&&d(s.firstElementChild)!==s?d(l):l;var c=p(t);return c.host?m(c.host,e):m(t,p(e).host)}function g(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"top",n="top"===e?"scrollTop":"scrollLeft",i=t.nodeName;if("BODY"===i||"HTML"===i){var o=t.ownerDocument.documentElement,r=t.ownerDocument.scrollingElement||o;return r[n]}return t[n]}function v(t,e){var n=arguments.length>2&&void 0!==arguments[2]&&arguments[2],i=g(e,"top"),o=g(e,"left"),r=n?-1:1;return t.top+=i*r,t.bottom+=i*r,t.left+=o*r,t.right+=o*r,t}function _(t,e){var n="x"===e?"Left":"Top",i="Left"===n?"Right":"Bottom";return parseFloat(t["border"+n+"Width"])+parseFloat(t["border"+i+"Width"])}function b(t,e,n,i){return Math.max(e["offset"+t],e["scroll"+t],n["client"+t],n["offset"+t],n["scroll"+t],f(10)?parseInt(n["offset"+t])+parseInt(i["margin"+("Height"===t?"Top":"Left")])+parseInt(i["margin"+("Height"===t?"Bottom":"Right")]):0)}function y(t){var e=t.body,n=t.documentElement,i=f(10)&&getComputedStyle(n);return{height:b("Height",e,n,i),width:b("Width",e,n,i)}}var w=function(t,e){if(!(t instanceof e))throw new TypeError("Cannot call a class as a function")},E=function(){function t(t,e){for(var n=0;n2&&void 0!==arguments[2]&&arguments[2],i=f(10),o="HTML"===e.nodeName,r=D(t),a=D(e),c=l(t),u=s(e),h=parseFloat(u.borderTopWidth),d=parseFloat(u.borderLeftWidth);n&&o&&(a.top=Math.max(a.top,0),a.left=Math.max(a.left,0));var p=S({top:r.top-a.top-h,left:r.left-a.left-d,width:r.width,height:r.height});if(p.marginTop=0,p.marginLeft=0,!i&&o){var 
m=parseFloat(u.marginTop),g=parseFloat(u.marginLeft);p.top-=h-m,p.bottom-=h-m,p.left-=d-g,p.right-=d-g,p.marginTop=m,p.marginLeft=g}return(i&&!n?e.contains(c):e===c&&"BODY"!==c.nodeName)&&(p=v(p,e)),p}function k(t){var e=arguments.length>1&&void 0!==arguments[1]&&arguments[1],n=t.ownerDocument.documentElement,i=N(t,n),o=Math.max(n.clientWidth,window.innerWidth||0),r=Math.max(n.clientHeight,window.innerHeight||0),s=e?0:g(n),a=e?0:g(n,"left"),l={top:s-i.top+i.marginTop,left:a-i.left+i.marginLeft,width:o,height:r};return S(l)}function O(t){var e=t.nodeName;if("BODY"===e||"HTML"===e)return!1;if("fixed"===s(t,"position"))return!0;var n=a(t);return!!n&&O(n)}function A(t){if(!t||!t.parentElement||f())return document.documentElement;for(var e=t.parentElement;e&&"none"===s(e,"transform");)e=e.parentElement;return e||document.documentElement}function I(t,e,n,i){var o=arguments.length>4&&void 0!==arguments[4]&&arguments[4],r={top:0,left:0},s=o?A(t):m(t,c(e));if("viewport"===i)r=k(s,o);else{var u=void 0;"scrollParent"===i?"BODY"===(u=l(a(e))).nodeName&&(u=t.ownerDocument.documentElement):u="window"===i?t.ownerDocument.documentElement:i;var h=N(u,s,o);if("HTML"!==u.nodeName||O(s))r=h;else{var f=y(t.ownerDocument),d=f.height,p=f.width;r.top+=h.top-h.marginTop,r.bottom=d+h.top,r.left+=h.left-h.marginLeft,r.right=p+h.left}}var g="number"==typeof(n=n||0);return r.left+=g?n:n.left||0,r.top+=g?n:n.top||0,r.right-=g?n:n.right||0,r.bottom-=g?n:n.bottom||0,r}function x(t){return t.width*t.height}function j(t,e,n,i,o){var r=arguments.length>5&&void 0!==arguments[5]?arguments[5]:0;if(-1===t.indexOf("auto"))return t;var s=I(n,i,r,o),a={top:{width:s.width,height:e.top-s.top},right:{width:s.right-e.right,height:s.height},bottom:{width:s.width,height:s.bottom-e.bottom},left:{width:e.left-s.left,height:s.height}},l=Object.keys(a).map((function(t){return C({key:t},a[t],{area:x(a[t])})})).sort((function(t,e){return e.area-t.area})),c=l.filter((function(t){var e=t.width,i=t.height;return e>=n.clientWidth&&i>=n.clientHeight})),u=c.length>0?c[0].key:l[0].key,h=t.split("-")[1];return u+(h?"-"+h:"")}function L(t,e,n){var i=arguments.length>3&&void 0!==arguments[3]?arguments[3]:null,o=i?A(e):m(e,c(n));return N(n,o,i)}function P(t){var e=t.ownerDocument.defaultView.getComputedStyle(t),n=parseFloat(e.marginTop||0)+parseFloat(e.marginBottom||0),i=parseFloat(e.marginLeft||0)+parseFloat(e.marginRight||0);return{width:t.offsetWidth+i,height:t.offsetHeight+n}}function F(t){var e={left:"right",right:"left",bottom:"top",top:"bottom"};return t.replace(/left|right|bottom|top/g,(function(t){return e[t]}))}function R(t,e,n){n=n.split("-")[0];var i=P(t),o={width:i.width,height:i.height},r=-1!==["right","left"].indexOf(n),s=r?"top":"left",a=r?"left":"top",l=r?"height":"width",c=r?"width":"height";return o[s]=e[s]+e[l]/2-i[l]/2,o[a]=n===a?e[a]-i[c]:e[F(a)],o}function M(t,e){return Array.prototype.find?t.find(e):t.filter(e)[0]}function B(t,e,n){return(void 0===n?t:t.slice(0,function(t,e,n){if(Array.prototype.findIndex)return t.findIndex((function(t){return t[e]===n}));var i=M(t,(function(t){return t[e]===n}));return t.indexOf(i)}(t,"name",n))).forEach((function(t){t.function&&console.warn("`modifier.function` is deprecated, use `modifier.fn`!");var n=t.function||t.fn;t.enabled&&r(n)&&(e.offsets.popper=S(e.offsets.popper),e.offsets.reference=S(e.offsets.reference),e=n(e,t))})),e}function H(){if(!this.state.isDestroyed){var 
t={instance:this,styles:{},arrowStyles:{},attributes:{},flipped:!1,offsets:{}};t.offsets.reference=L(this.state,this.popper,this.reference,this.options.positionFixed),t.placement=j(this.options.placement,t.offsets.reference,this.popper,this.reference,this.options.modifiers.flip.boundariesElement,this.options.modifiers.flip.padding),t.originalPlacement=t.placement,t.positionFixed=this.options.positionFixed,t.offsets.popper=R(this.popper,t.offsets.reference,t.placement),t.offsets.popper.position=this.options.positionFixed?"fixed":"absolute",t=B(this.modifiers,t),this.state.isCreated?this.options.onUpdate(t):(this.state.isCreated=!0,this.options.onCreate(t))}}function q(t,e){return t.some((function(t){var n=t.name;return t.enabled&&n===e}))}function Q(t){for(var e=[!1,"ms","Webkit","Moz","O"],n=t.charAt(0).toUpperCase()+t.slice(1),i=0;i1&&void 0!==arguments[1]&&arguments[1],n=Z.indexOf(t),i=Z.slice(n+1).concat(Z.slice(0,n));return e?i.reverse():i}var et="flip",nt="clockwise",it="counterclockwise";function ot(t,e,n,i){var o=[0,0],r=-1!==["right","left"].indexOf(i),s=t.split(/(\+|\-)/).map((function(t){return t.trim()})),a=s.indexOf(M(s,(function(t){return-1!==t.search(/,|\s/)})));s[a]&&-1===s[a].indexOf(",")&&console.warn("Offsets separated by white space(s) are deprecated, use a comma (,) instead.");var l=/\s*,\s*|\s+/,c=-1!==a?[s.slice(0,a).concat([s[a].split(l)[0]]),[s[a].split(l)[1]].concat(s.slice(a+1))]:[s];return(c=c.map((function(t,i){var o=(1===i?!r:r)?"height":"width",s=!1;return t.reduce((function(t,e){return""===t[t.length-1]&&-1!==["+","-"].indexOf(e)?(t[t.length-1]=e,s=!0,t):s?(t[t.length-1]+=e,s=!1,t):t.concat(e)}),[]).map((function(t){return function(t,e,n,i){var o=t.match(/((?:\-|\+)?\d*\.?\d*)(.*)/),r=+o[1],s=o[2];if(!r)return t;if(0===s.indexOf("%")){var a=void 0;switch(s){case"%p":a=n;break;case"%":case"%r":default:a=i}return S(a)[e]/100*r}if("vh"===s||"vw"===s){return("vh"===s?Math.max(document.documentElement.clientHeight,window.innerHeight||0):Math.max(document.documentElement.clientWidth,window.innerWidth||0))/100*r}return r}(t,o,e,n)}))}))).forEach((function(t,e){t.forEach((function(n,i){X(n)&&(o[e]+=n*("-"===t[i-1]?-1:1))}))})),o}var rt={placement:"bottom",positionFixed:!1,eventsEnabled:!0,removeOnDestroy:!1,onCreate:function(){},onUpdate:function(){},modifiers:{shift:{order:100,enabled:!0,fn:function(t){var e=t.placement,n=e.split("-")[0],i=e.split("-")[1];if(i){var o=t.offsets,r=o.reference,s=o.popper,a=-1!==["bottom","top"].indexOf(n),l=a?"left":"top",c=a?"width":"height",u={start:T({},l,r[l]),end:T({},l,r[l]+r[c]-s[c])};t.offsets.popper=C({},s,u[i])}return t}},offset:{order:200,enabled:!0,fn:function(t,e){var n=e.offset,i=t.placement,o=t.offsets,r=o.popper,s=o.reference,a=i.split("-")[0],l=void 0;return l=X(+n)?[+n,0]:ot(n,r,s,a),"left"===a?(r.top+=l[0],r.left-=l[1]):"right"===a?(r.top+=l[0],r.left+=l[1]):"top"===a?(r.left+=l[0],r.top-=l[1]):"bottom"===a&&(r.left+=l[0],r.top+=l[1]),t.popper=r,t},offset:0},preventOverflow:{order:300,enabled:!0,fn:function(t,e){var n=e.boundariesElement||d(t.instance.popper);t.instance.reference===n&&(n=d(n));var i=Q("transform"),o=t.instance.popper.style,r=o.top,s=o.left,a=o[i];o.top="",o.left="",o[i]="";var l=I(t.instance.popper,t.instance.reference,e.padding,n,t.positionFixed);o.top=r,o.left=s,o[i]=a,e.boundaries=l;var c=e.priority,u=t.offsets.popper,h={primary:function(t){var n=u[t];return u[t]l[t]&&!e.escapeWithReference&&(i=Math.min(u[n],l[t]-("right"===t?u.width:u.height))),T({},n,i)}};return c.forEach((function(t){var 
e=-1!==["left","top"].indexOf(t)?"primary":"secondary";u=C({},u,h[e](t))})),t.offsets.popper=u,t},priority:["left","right","top","bottom"],padding:5,boundariesElement:"scrollParent"},keepTogether:{order:400,enabled:!0,fn:function(t){var e=t.offsets,n=e.popper,i=e.reference,o=t.placement.split("-")[0],r=Math.floor,s=-1!==["top","bottom"].indexOf(o),a=s?"right":"bottom",l=s?"left":"top",c=s?"width":"height";return n[a]r(i[a])&&(t.offsets.popper[l]=r(i[a])),t}},arrow:{order:500,enabled:!0,fn:function(t,e){var n;if(!G(t.instance.modifiers,"arrow","keepTogether"))return t;var i=e.element;if("string"==typeof i){if(!(i=t.instance.popper.querySelector(i)))return t}else if(!t.instance.popper.contains(i))return console.warn("WARNING: `arrow.element` must be child of its popper element!"),t;var o=t.placement.split("-")[0],r=t.offsets,a=r.popper,l=r.reference,c=-1!==["left","right"].indexOf(o),u=c?"height":"width",h=c?"Top":"Left",f=h.toLowerCase(),d=c?"left":"top",p=c?"bottom":"right",m=P(i)[u];l[p]-ma[p]&&(t.offsets.popper[f]+=l[f]+m-a[p]),t.offsets.popper=S(t.offsets.popper);var g=l[f]+l[u]/2-m/2,v=s(t.instance.popper),_=parseFloat(v["margin"+h]),b=parseFloat(v["border"+h+"Width"]),y=g-t.offsets.popper[f]-_-b;return y=Math.max(Math.min(a[u]-m,y),0),t.arrowElement=i,t.offsets.arrow=(T(n={},f,Math.round(y)),T(n,d,""),n),t},element:"[x-arrow]"},flip:{order:600,enabled:!0,fn:function(t,e){if(q(t.instance.modifiers,"inner"))return t;if(t.flipped&&t.placement===t.originalPlacement)return t;var n=I(t.instance.popper,t.instance.reference,e.padding,e.boundariesElement,t.positionFixed),i=t.placement.split("-")[0],o=F(i),r=t.placement.split("-")[1]||"",s=[];switch(e.behavior){case et:s=[i,o];break;case nt:s=tt(i);break;case it:s=tt(i,!0);break;default:s=e.behavior}return s.forEach((function(a,l){if(i!==a||s.length===l+1)return t;i=t.placement.split("-")[0],o=F(i);var c=t.offsets.popper,u=t.offsets.reference,h=Math.floor,f="left"===i&&h(c.right)>h(u.left)||"right"===i&&h(c.left)h(u.top)||"bottom"===i&&h(c.top)h(n.right),m=h(c.top)h(n.bottom),v="left"===i&&d||"right"===i&&p||"top"===i&&m||"bottom"===i&&g,_=-1!==["top","bottom"].indexOf(i),b=!!e.flipVariations&&(_&&"start"===r&&d||_&&"end"===r&&p||!_&&"start"===r&&m||!_&&"end"===r&&g),y=!!e.flipVariationsByContent&&(_&&"start"===r&&p||_&&"end"===r&&d||!_&&"start"===r&&g||!_&&"end"===r&&m),w=b||y;(f||v||w)&&(t.flipped=!0,(f||v)&&(i=s[l+1]),w&&(r=function(t){return"end"===t?"start":"start"===t?"end":t}(r)),t.placement=i+(r?"-"+r:""),t.offsets.popper=C({},t.offsets.popper,R(t.instance.popper,t.offsets.reference,t.placement)),t=B(t.instance.modifiers,t,"flip"))})),t},behavior:"flip",padding:5,boundariesElement:"viewport",flipVariations:!1,flipVariationsByContent:!1},inner:{order:700,enabled:!1,fn:function(t){var e=t.placement,n=e.split("-")[0],i=t.offsets,o=i.popper,r=i.reference,s=-1!==["left","right"].indexOf(n),a=-1===["top","left"].indexOf(n);return o[s?"left":"top"]=r[n]-(a?o[s?"width":"height"]:0),t.placement=F(e),t.offsets.popper=S(o),t}},hide:{order:800,enabled:!0,fn:function(t){if(!G(t.instance.modifiers,"hide","preventOverflow"))return t;var e=t.offsets.reference,n=M(t.instance.modifiers,(function(t){return"preventOverflow"===t.name})).boundaries;if(e.bottomn.right||e.top>n.bottom||e.right2&&void 0!==arguments[2]?arguments[2]:{};w(this,t),this.scheduleUpdate=function(){return 
requestAnimationFrame(i.update)},this.update=o(this.update.bind(this)),this.options=C({},t.Defaults,s),this.state={isDestroyed:!1,isCreated:!1,scrollParents:[]},this.reference=e&&e.jquery?e[0]:e,this.popper=n&&n.jquery?n[0]:n,this.options.modifiers={},Object.keys(C({},t.Defaults.modifiers,s.modifiers)).forEach((function(e){i.options.modifiers[e]=C({},t.Defaults.modifiers[e]||{},s.modifiers?s.modifiers[e]:{})})),this.modifiers=Object.keys(this.options.modifiers).map((function(t){return C({name:t},i.options.modifiers[t])})).sort((function(t,e){return t.order-e.order})),this.modifiers.forEach((function(t){t.enabled&&r(t.onLoad)&&t.onLoad(i.reference,i.popper,i.options,t,i.state)})),this.update();var a=this.options.eventsEnabled;a&&this.enableEventListeners(),this.state.eventsEnabled=a}return E(t,[{key:"update",value:function(){return H.call(this)}},{key:"destroy",value:function(){return W.call(this)}},{key:"enableEventListeners",value:function(){return Y.call(this)}},{key:"disableEventListeners",value:function(){return z.call(this)}}]),t}();st.Utils=("undefined"!=typeof window?window:t).PopperUtils,st.placements=J,st.Defaults=rt,e.default=st}.call(this,n(4))},function(t,e,n){t.exports=n(5)},function(t,e,n){ +/*! + * Bootstrap v4.5.0 (https://getbootstrap.com/) + * Copyright 2011-2020 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors) + * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) + */ +!function(t,e,n){"use strict";function i(t,e){for(var n=0;n=4)throw new Error("Bootstrap's JavaScript requires at least jQuery v1.9.1 but less than v4.0.0")}};c.jQueryDetection(),e.fn.emulateTransitionEnd=l,e.event.special[c.TRANSITION_END]={bindType:"transitionend",delegateType:"transitionend",handle:function(t){if(e(t.target).is(this))return t.handleObj.handler.apply(this,arguments)}};var u="alert",h=e.fn[u],f=function(){function t(t){this._element=t}var n=t.prototype;return n.close=function(t){var e=this._element;t&&(e=this._getRootElement(t)),this._triggerCloseEvent(e).isDefaultPrevented()||this._removeElement(e)},n.dispose=function(){e.removeData(this._element,"bs.alert"),this._element=null},n._getRootElement=function(t){var n=c.getSelectorFromElement(t),i=!1;return n&&(i=document.querySelector(n)),i||(i=e(t).closest(".alert")[0]),i},n._triggerCloseEvent=function(t){var n=e.Event("close.bs.alert");return e(t).trigger(n),n},n._removeElement=function(t){var n=this;if(e(t).removeClass("show"),e(t).hasClass("fade")){var i=c.getTransitionDurationFromElement(t);e(t).one(c.TRANSITION_END,(function(e){return n._destroyElement(t,e)})).emulateTransitionEnd(i)}else this._destroyElement(t)},n._destroyElement=function(t){e(t).detach().trigger("closed.bs.alert").remove()},t._jQueryInterface=function(n){return this.each((function(){var i=e(this),o=i.data("bs.alert");o||(o=new t(this),i.data("bs.alert",o)),"close"===n&&o[n](this)}))},t._handleDismiss=function(t){return function(e){e&&e.preventDefault(),t.close(this)}},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}}]),t}();e(document).on("click.bs.alert.data-api",'[data-dismiss="alert"]',f._handleDismiss(new f)),e.fn[u]=f._jQueryInterface,e.fn[u].Constructor=f,e.fn[u].noConflict=function(){return e.fn[u]=h,f._jQueryInterface};var d=e.fn.button,p=function(){function t(t){this._element=t}var n=t.prototype;return n.toggle=function(){var t=!0,n=!0,i=e(this._element).closest('[data-toggle="buttons"]')[0];if(i){var 
o=this._element.querySelector('input:not([type="hidden"])');if(o){if("radio"===o.type)if(o.checked&&this._element.classList.contains("active"))t=!1;else{var r=i.querySelector(".active");r&&e(r).removeClass("active")}t&&("checkbox"!==o.type&&"radio"!==o.type||(o.checked=!this._element.classList.contains("active")),e(o).trigger("change")),o.focus(),n=!1}}this._element.hasAttribute("disabled")||this._element.classList.contains("disabled")||(n&&this._element.setAttribute("aria-pressed",!this._element.classList.contains("active")),t&&e(this._element).toggleClass("active"))},n.dispose=function(){e.removeData(this._element,"bs.button"),this._element=null},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.button");i||(i=new t(this),e(this).data("bs.button",i)),"toggle"===n&&i[n]()}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}}]),t}();e(document).on("click.bs.button.data-api",'[data-toggle^="button"]',(function(t){var n=t.target,i=n;if(e(n).hasClass("btn")||(n=e(n).closest(".btn")[0]),!n||n.hasAttribute("disabled")||n.classList.contains("disabled"))t.preventDefault();else{var o=n.querySelector('input:not([type="hidden"])');if(o&&(o.hasAttribute("disabled")||o.classList.contains("disabled")))return void t.preventDefault();"LABEL"===i.tagName&&o&&"checkbox"===o.type&&t.preventDefault(),p._jQueryInterface.call(e(n),"toggle")}})).on("focus.bs.button.data-api blur.bs.button.data-api",'[data-toggle^="button"]',(function(t){var n=e(t.target).closest(".btn")[0];e(n).toggleClass("focus",/^focus(in)?$/.test(t.type))})),e(window).on("load.bs.button.data-api",(function(){for(var t=[].slice.call(document.querySelectorAll('[data-toggle="buttons"] .btn')),e=0,n=t.length;e0,this._pointerEvent=Boolean(window.PointerEvent||window.MSPointerEvent),this._addEventListeners()}var n=t.prototype;return n.next=function(){this._isSliding||this._slide("next")},n.nextWhenVisible=function(){!document.hidden&&e(this._element).is(":visible")&&"hidden"!==e(this._element).css("visibility")&&this.next()},n.prev=function(){this._isSliding||this._slide("prev")},n.pause=function(t){t||(this._isPaused=!0),this._element.querySelector(".carousel-item-next, .carousel-item-prev")&&(c.triggerTransitionEnd(this._element),this.cycle(!0)),clearInterval(this._interval),this._interval=null},n.cycle=function(t){t||(this._isPaused=!1),this._interval&&(clearInterval(this._interval),this._interval=null),this._config.interval&&!this._isPaused&&(this._interval=setInterval((document.visibilityState?this.nextWhenVisible:this.next).bind(this),this._config.interval))},n.to=function(t){var n=this;this._activeElement=this._element.querySelector(".active.carousel-item");var i=this._getItemIndex(this._activeElement);if(!(t>this._items.length-1||t<0))if(this._isSliding)e(this._element).one("slid.bs.carousel",(function(){return n.to(t)}));else{if(i===t)return this.pause(),void this.cycle();var o=t>i?"next":"prev";this._slide(o,this._items[t])}},n.dispose=function(){e(this._element).off(g),e.removeData(this._element,"bs.carousel"),this._items=null,this._config=null,this._element=null,this._interval=null,this._isPaused=null,this._isSliding=null,this._activeElement=null,this._indicatorsElement=null},n._getConfig=function(t){return t=a(a({},_),t),c.typeCheckConfig(m,t,b),t},n._handleSwipe=function(){var t=Math.abs(this.touchDeltaX);if(!(t<=40)){var e=t/this.touchDeltaX;this.touchDeltaX=0,e>0&&this.prev(),e<0&&this.next()}},n._addEventListeners=function(){var 
t=this;this._config.keyboard&&e(this._element).on("keydown.bs.carousel",(function(e){return t._keydown(e)})),"hover"===this._config.pause&&e(this._element).on("mouseenter.bs.carousel",(function(e){return t.pause(e)})).on("mouseleave.bs.carousel",(function(e){return t.cycle(e)})),this._config.touch&&this._addTouchEventListeners()},n._addTouchEventListeners=function(){var t=this;if(this._touchSupported){var n=function(e){t._pointerEvent&&y[e.originalEvent.pointerType.toUpperCase()]?t.touchStartX=e.originalEvent.clientX:t._pointerEvent||(t.touchStartX=e.originalEvent.touches[0].clientX)},i=function(e){t._pointerEvent&&y[e.originalEvent.pointerType.toUpperCase()]&&(t.touchDeltaX=e.originalEvent.clientX-t.touchStartX),t._handleSwipe(),"hover"===t._config.pause&&(t.pause(),t.touchTimeout&&clearTimeout(t.touchTimeout),t.touchTimeout=setTimeout((function(e){return t.cycle(e)}),500+t._config.interval))};e(this._element.querySelectorAll(".carousel-item img")).on("dragstart.bs.carousel",(function(t){return t.preventDefault()})),this._pointerEvent?(e(this._element).on("pointerdown.bs.carousel",(function(t){return n(t)})),e(this._element).on("pointerup.bs.carousel",(function(t){return i(t)})),this._element.classList.add("pointer-event")):(e(this._element).on("touchstart.bs.carousel",(function(t){return n(t)})),e(this._element).on("touchmove.bs.carousel",(function(e){return function(e){e.originalEvent.touches&&e.originalEvent.touches.length>1?t.touchDeltaX=0:t.touchDeltaX=e.originalEvent.touches[0].clientX-t.touchStartX}(e)})),e(this._element).on("touchend.bs.carousel",(function(t){return i(t)})))}},n._keydown=function(t){if(!/input|textarea/i.test(t.target.tagName))switch(t.which){case 37:t.preventDefault(),this.prev();break;case 39:t.preventDefault(),this.next()}},n._getItemIndex=function(t){return this._items=t&&t.parentNode?[].slice.call(t.parentNode.querySelectorAll(".carousel-item")):[],this._items.indexOf(t)},n._getItemByDirection=function(t,e){var n="next"===t,i="prev"===t,o=this._getItemIndex(e),r=this._items.length-1;if((i&&0===o||n&&o===r)&&!this._config.wrap)return e;var s=(o+("prev"===t?-1:1))%this._items.length;return-1===s?this._items[this._items.length-1]:this._items[s]},n._triggerSlideEvent=function(t,n){var i=this._getItemIndex(t),o=this._getItemIndex(this._element.querySelector(".active.carousel-item")),r=e.Event("slide.bs.carousel",{relatedTarget:t,direction:n,from:o,to:i});return e(this._element).trigger(r),r},n._setActiveIndicatorElement=function(t){if(this._indicatorsElement){var n=[].slice.call(this._indicatorsElement.querySelectorAll(".active"));e(n).removeClass("active");var i=this._indicatorsElement.children[this._getItemIndex(t)];i&&e(i).addClass("active")}},n._slide=function(t,n){var i,o,r,s=this,a=this._element.querySelector(".active.carousel-item"),l=this._getItemIndex(a),u=n||a&&this._getItemByDirection(t,a),h=this._getItemIndex(u),f=Boolean(this._interval);if("next"===t?(i="carousel-item-left",o="carousel-item-next",r="left"):(i="carousel-item-right",o="carousel-item-prev",r="right"),u&&e(u).hasClass("active"))this._isSliding=!1;else if(!this._triggerSlideEvent(u,r).isDefaultPrevented()&&a&&u){this._isSliding=!0,f&&this.pause(),this._setActiveIndicatorElement(u);var d=e.Event("slid.bs.carousel",{relatedTarget:u,direction:r,from:l,to:h});if(e(this._element).hasClass("slide")){e(u).addClass(o),c.reflow(u),e(a).addClass(i),e(u).addClass(i);var 
p=parseInt(u.getAttribute("data-interval"),10);p?(this._config.defaultInterval=this._config.defaultInterval||this._config.interval,this._config.interval=p):this._config.interval=this._config.defaultInterval||this._config.interval;var m=c.getTransitionDurationFromElement(a);e(a).one(c.TRANSITION_END,(function(){e(u).removeClass(i+" "+o).addClass("active"),e(a).removeClass("active "+o+" "+i),s._isSliding=!1,setTimeout((function(){return e(s._element).trigger(d)}),0)})).emulateTransitionEnd(m)}else e(a).removeClass("active"),e(u).addClass("active"),this._isSliding=!1,e(this._element).trigger(d);f&&this.cycle()}},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.carousel"),o=a(a({},_),e(this).data());"object"==typeof n&&(o=a(a({},o),n));var r="string"==typeof n?n:o.slide;if(i||(i=new t(this,o),e(this).data("bs.carousel",i)),"number"==typeof n)i.to(n);else if("string"==typeof r){if(void 0===i[r])throw new TypeError('No method named "'+r+'"');i[r]()}else o.interval&&o.ride&&(i.pause(),i.cycle())}))},t._dataApiClickHandler=function(n){var i=c.getSelectorFromElement(this);if(i){var o=e(i)[0];if(o&&e(o).hasClass("carousel")){var r=a(a({},e(o).data()),e(this).data()),s=this.getAttribute("data-slide-to");s&&(r.interval=!1),t._jQueryInterface.call(e(o),r),s&&e(o).data("bs.carousel").to(s),n.preventDefault()}}},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return _}}]),t}();e(document).on("click.bs.carousel.data-api","[data-slide], [data-slide-to]",w._dataApiClickHandler),e(window).on("load.bs.carousel.data-api",(function(){for(var t=[].slice.call(document.querySelectorAll('[data-ride="carousel"]')),n=0,i=t.length;n0&&(this._selector=s,this._triggerArray.push(r))}this._parent=this._config.parent?this._getParent():null,this._config.parent||this._addAriaAndCollapsedClass(this._element,this._triggerArray),this._config.toggle&&this.toggle()}var n=t.prototype;return n.toggle=function(){e(this._element).hasClass("show")?this.hide():this.show()},n.show=function(){var n,i,o=this;if(!(this._isTransitioning||e(this._element).hasClass("show")||(this._parent&&0===(n=[].slice.call(this._parent.querySelectorAll(".show, .collapsing")).filter((function(t){return"string"==typeof o._config.parent?t.getAttribute("data-parent")===o._config.parent:t.classList.contains("collapse")}))).length&&(n=null),n&&(i=e(n).not(this._selector).data("bs.collapse"))&&i._isTransitioning))){var r=e.Event("show.bs.collapse");if(e(this._element).trigger(r),!r.isDefaultPrevented()){n&&(t._jQueryInterface.call(e(n).not(this._selector),"hide"),i||e(n).data("bs.collapse",null));var s=this._getDimension();e(this._element).removeClass("collapse").addClass("collapsing"),this._element.style[s]=0,this._triggerArray.length&&e(this._triggerArray).removeClass("collapsed").attr("aria-expanded",!0),this.setTransitioning(!0);var a="scroll"+(s[0].toUpperCase()+s.slice(1)),l=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,(function(){e(o._element).removeClass("collapsing").addClass("collapse show"),o._element.style[s]="",o.setTransitioning(!1),e(o._element).trigger("shown.bs.collapse")})).emulateTransitionEnd(l),this._element.style[s]=this._element[a]+"px"}}},n.hide=function(){var t=this;if(!this._isTransitioning&&e(this._element).hasClass("show")){var n=e.Event("hide.bs.collapse");if(e(this._element).trigger(n),!n.isDefaultPrevented()){var 
i=this._getDimension();this._element.style[i]=this._element.getBoundingClientRect()[i]+"px",c.reflow(this._element),e(this._element).addClass("collapsing").removeClass("collapse show");var o=this._triggerArray.length;if(o>0)for(var r=0;r0},i._getOffset=function(){var t=this,e={};return"function"==typeof this._config.offset?e.fn=function(e){return e.offsets=a(a({},e.offsets),t._config.offset(e.offsets,t._element)||{}),e}:e.offset=this._config.offset,e},i._getPopperConfig=function(){var t={placement:this._getPlacement(),modifiers:{offset:this._getOffset(),flip:{enabled:this._config.flip},preventOverflow:{boundariesElement:this._config.boundary}}};return"static"===this._config.display&&(t.modifiers.applyStyle={enabled:!1}),a(a({},t),this._config.popperConfig)},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.dropdown");if(i||(i=new t(this,"object"==typeof n?n:null),e(this).data("bs.dropdown",i)),"string"==typeof n){if(void 0===i[n])throw new TypeError('No method named "'+n+'"');i[n]()}}))},t._clearMenus=function(n){if(!n||3!==n.which&&("keyup"!==n.type||9===n.which))for(var i=[].slice.call(document.querySelectorAll('[data-toggle="dropdown"]')),o=0,r=i.length;o0&&s--,40===n.which&&sdocument.documentElement.clientHeight;!this._isBodyOverflowing&&t&&(this._element.style.paddingLeft=this._scrollbarWidth+"px"),this._isBodyOverflowing&&!t&&(this._element.style.paddingRight=this._scrollbarWidth+"px")},n._resetAdjustments=function(){this._element.style.paddingLeft="",this._element.style.paddingRight=""},n._checkScrollbar=function(){var t=document.body.getBoundingClientRect();this._isBodyOverflowing=Math.round(t.left+t.right)
',trigger:"hover focus",title:"",delay:0,html:!1,selector:!1,placement:"top",offset:0,container:!1,fallbackPlacement:"flip",boundary:"scrollParent",sanitize:!0,sanitizeFn:null,whiteList:M,popperConfig:null},K={HIDE:"hide.bs.tooltip",HIDDEN:"hidden.bs.tooltip",SHOW:"show.bs.tooltip",SHOWN:"shown.bs.tooltip",INSERTED:"inserted.bs.tooltip",CLICK:"click.bs.tooltip",FOCUSIN:"focusin.bs.tooltip",FOCUSOUT:"focusout.bs.tooltip",MOUSEENTER:"mouseenter.bs.tooltip",MOUSELEAVE:"mouseleave.bs.tooltip"},$=function(){function t(t,e){if(void 0===n)throw new TypeError("Bootstrap's tooltips require Popper.js (https://popper.js.org/)");this._isEnabled=!0,this._timeout=0,this._hoverState="",this._activeTrigger={},this._popper=null,this.element=t,this.config=this._getConfig(e),this.tip=null,this._setListeners()}var i=t.prototype;return i.enable=function(){this._isEnabled=!0},i.disable=function(){this._isEnabled=!1},i.toggleEnabled=function(){this._isEnabled=!this._isEnabled},i.toggle=function(t){if(this._isEnabled)if(t){var n=this.constructor.DATA_KEY,i=e(t.currentTarget).data(n);i||(i=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(n,i)),i._activeTrigger.click=!i._activeTrigger.click,i._isWithActiveTrigger()?i._enter(null,i):i._leave(null,i)}else{if(e(this.getTipElement()).hasClass("show"))return void this._leave(null,this);this._enter(null,this)}},i.dispose=function(){clearTimeout(this._timeout),e.removeData(this.element,this.constructor.DATA_KEY),e(this.element).off(this.constructor.EVENT_KEY),e(this.element).closest(".modal").off("hide.bs.modal",this._hideModalHandler),this.tip&&e(this.tip).remove(),this._isEnabled=null,this._timeout=null,this._hoverState=null,this._activeTrigger=null,this._popper&&this._popper.destroy(),this._popper=null,this.element=null,this.config=null,this.tip=null},i.show=function(){var t=this;if("none"===e(this.element).css("display"))throw new Error("Please use show on visible elements");var i=e.Event(this.constructor.Event.SHOW);if(this.isWithContent()&&this._isEnabled){e(this.element).trigger(i);var o=c.findShadowRoot(this.element),r=e.contains(null!==o?o:this.element.ownerDocument.documentElement,this.element);if(i.isDefaultPrevented()||!r)return;var s=this.getTipElement(),a=c.getUID(this.constructor.NAME);s.setAttribute("id",a),this.element.setAttribute("aria-describedby",a),this.setContent(),this.config.animation&&e(s).addClass("fade");var l="function"==typeof this.config.placement?this.config.placement.call(this,s,this.element):this.config.placement,u=this._getAttachment(l);this.addAttachmentClass(u);var h=this._getContainer();e(s).data(this.constructor.DATA_KEY,this),e.contains(this.element.ownerDocument.documentElement,this.tip)||e(s).appendTo(h),e(this.element).trigger(this.constructor.Event.INSERTED),this._popper=new n(this.element,s,this._getPopperConfig(u)),e(s).addClass("show"),"ontouchstart"in document.documentElement&&e(document.body).children().on("mouseover",null,e.noop);var f=function(){t.config.animation&&t._fixTransition();var n=t._hoverState;t._hoverState=null,e(t.element).trigger(t.constructor.Event.SHOWN),"out"===n&&t._leave(null,t)};if(e(this.tip).hasClass("fade")){var d=c.getTransitionDurationFromElement(this.tip);e(this.tip).one(c.TRANSITION_END,f).emulateTransitionEnd(d)}else f()}},i.hide=function(t){var 
n=this,i=this.getTipElement(),o=e.Event(this.constructor.Event.HIDE),r=function(){"show"!==n._hoverState&&i.parentNode&&i.parentNode.removeChild(i),n._cleanTipClass(),n.element.removeAttribute("aria-describedby"),e(n.element).trigger(n.constructor.Event.HIDDEN),null!==n._popper&&n._popper.destroy(),t&&t()};if(e(this.element).trigger(o),!o.isDefaultPrevented()){if(e(i).removeClass("show"),"ontouchstart"in document.documentElement&&e(document.body).children().off("mouseover",null,e.noop),this._activeTrigger.click=!1,this._activeTrigger.focus=!1,this._activeTrigger.hover=!1,e(this.tip).hasClass("fade")){var s=c.getTransitionDurationFromElement(i);e(i).one(c.TRANSITION_END,r).emulateTransitionEnd(s)}else r();this._hoverState=""}},i.update=function(){null!==this._popper&&this._popper.scheduleUpdate()},i.isWithContent=function(){return Boolean(this.getTitle())},i.addAttachmentClass=function(t){e(this.getTipElement()).addClass("bs-tooltip-"+t)},i.getTipElement=function(){return this.tip=this.tip||e(this.config.template)[0],this.tip},i.setContent=function(){var t=this.getTipElement();this.setElementContent(e(t.querySelectorAll(".tooltip-inner")),this.getTitle()),e(t).removeClass("fade show")},i.setElementContent=function(t,n){"object"!=typeof n||!n.nodeType&&!n.jquery?this.config.html?(this.config.sanitize&&(n=q(n,this.config.whiteList,this.config.sanitizeFn)),t.html(n)):t.text(n):this.config.html?e(n).parent().is(t)||t.empty().append(n):t.text(e(n).text())},i.getTitle=function(){var t=this.element.getAttribute("data-original-title");return t||(t="function"==typeof this.config.title?this.config.title.call(this.element):this.config.title),t},i._getPopperConfig=function(t){var e=this;return a(a({},{placement:t,modifiers:{offset:this._getOffset(),flip:{behavior:this.config.fallbackPlacement},arrow:{element:".arrow"},preventOverflow:{boundariesElement:this.config.boundary}},onCreate:function(t){t.originalPlacement!==t.placement&&e._handlePopperPlacementChange(t)},onUpdate:function(t){return e._handlePopperPlacementChange(t)}}),this.config.popperConfig)},i._getOffset=function(){var t=this,e={};return"function"==typeof this.config.offset?e.fn=function(e){return e.offsets=a(a({},e.offsets),t.config.offset(e.offsets,t.element)||{}),e}:e.offset=this.config.offset,e},i._getContainer=function(){return!1===this.config.container?document.body:c.isElement(this.config.container)?e(this.config.container):e(document).find(this.config.container)},i._getAttachment=function(t){return z[t.toUpperCase()]},i._setListeners=function(){var t=this;this.config.trigger.split(" ").forEach((function(n){if("click"===n)e(t.element).on(t.constructor.Event.CLICK,t.config.selector,(function(e){return t.toggle(e)}));else if("manual"!==n){var i="hover"===n?t.constructor.Event.MOUSEENTER:t.constructor.Event.FOCUSIN,o="hover"===n?t.constructor.Event.MOUSELEAVE:t.constructor.Event.FOCUSOUT;e(t.element).on(i,t.config.selector,(function(e){return t._enter(e)})).on(o,t.config.selector,(function(e){return t._leave(e)}))}})),this._hideModalHandler=function(){t.element&&t.hide()},e(this.element).closest(".modal").on("hide.bs.modal",this._hideModalHandler),this.config.selector?this.config=a(a({},this.config),{},{trigger:"manual",selector:""}):this._fixTitle()},i._fixTitle=function(){var t=typeof 
this.element.getAttribute("data-original-title");(this.element.getAttribute("title")||"string"!==t)&&(this.element.setAttribute("data-original-title",this.element.getAttribute("title")||""),this.element.setAttribute("title",""))},i._enter=function(t,n){var i=this.constructor.DATA_KEY;(n=n||e(t.currentTarget).data(i))||(n=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(i,n)),t&&(n._activeTrigger["focusin"===t.type?"focus":"hover"]=!0),e(n.getTipElement()).hasClass("show")||"show"===n._hoverState?n._hoverState="show":(clearTimeout(n._timeout),n._hoverState="show",n.config.delay&&n.config.delay.show?n._timeout=setTimeout((function(){"show"===n._hoverState&&n.show()}),n.config.delay.show):n.show())},i._leave=function(t,n){var i=this.constructor.DATA_KEY;(n=n||e(t.currentTarget).data(i))||(n=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(i,n)),t&&(n._activeTrigger["focusout"===t.type?"focus":"hover"]=!1),n._isWithActiveTrigger()||(clearTimeout(n._timeout),n._hoverState="out",n.config.delay&&n.config.delay.hide?n._timeout=setTimeout((function(){"out"===n._hoverState&&n.hide()}),n.config.delay.hide):n.hide())},i._isWithActiveTrigger=function(){for(var t in this._activeTrigger)if(this._activeTrigger[t])return!0;return!1},i._getConfig=function(t){var n=e(this.element).data();return Object.keys(n).forEach((function(t){-1!==V.indexOf(t)&&delete n[t]})),"number"==typeof(t=a(a(a({},this.constructor.Default),n),"object"==typeof t&&t?t:{})).delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),c.typeCheckConfig(Q,t,this.constructor.DefaultType),t.sanitize&&(t.template=q(t.template,t.whiteList,t.sanitizeFn)),t},i._getDelegateConfig=function(){var t={};if(this.config)for(var e in this.config)this.constructor.Default[e]!==this.config[e]&&(t[e]=this.config[e]);return t},i._cleanTipClass=function(){var t=e(this.getTipElement()),n=t.attr("class").match(U);null!==n&&n.length&&t.removeClass(n.join(""))},i._handlePopperPlacementChange=function(t){this.tip=t.instance.popper,this._cleanTipClass(),this.addAttachmentClass(this._getAttachment(t.placement))},i._fixTransition=function(){var t=this.getTipElement(),n=this.config.animation;null===t.getAttribute("x-placement")&&(e(t).removeClass("fade"),this.config.animation=!1,this.hide(),this.show(),this.config.animation=n)},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.tooltip"),o="object"==typeof n&&n;if((i||!/dispose|hide/.test(n))&&(i||(i=new t(this,o),e(this).data("bs.tooltip",i)),"string"==typeof n)){if(void 0===i[n])throw new TypeError('No method named "'+n+'"');i[n]()}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return X}},{key:"NAME",get:function(){return Q}},{key:"DATA_KEY",get:function(){return"bs.tooltip"}},{key:"Event",get:function(){return K}},{key:"EVENT_KEY",get:function(){return".bs.tooltip"}},{key:"DefaultType",get:function(){return Y}}]),t}();e.fn[Q]=$._jQueryInterface,e.fn[Q].Constructor=$,e.fn[Q].noConflict=function(){return e.fn[Q]=W,$._jQueryInterface};var G="popover",J=e.fn[G],Z=new 
RegExp("(^|\\s)bs-popover\\S+","g"),tt=a(a({},$.Default),{},{placement:"right",trigger:"click",content:"",template:''}),et=a(a({},$.DefaultType),{},{content:"(string|element|function)"}),nt={HIDE:"hide.bs.popover",HIDDEN:"hidden.bs.popover",SHOW:"show.bs.popover",SHOWN:"shown.bs.popover",INSERTED:"inserted.bs.popover",CLICK:"click.bs.popover",FOCUSIN:"focusin.bs.popover",FOCUSOUT:"focusout.bs.popover",MOUSEENTER:"mouseenter.bs.popover",MOUSELEAVE:"mouseleave.bs.popover"},it=function(t){var n,i;function r(){return t.apply(this,arguments)||this}i=t,(n=r).prototype=Object.create(i.prototype),n.prototype.constructor=n,n.__proto__=i;var s=r.prototype;return s.isWithContent=function(){return this.getTitle()||this._getContent()},s.addAttachmentClass=function(t){e(this.getTipElement()).addClass("bs-popover-"+t)},s.getTipElement=function(){return this.tip=this.tip||e(this.config.template)[0],this.tip},s.setContent=function(){var t=e(this.getTipElement());this.setElementContent(t.find(".popover-header"),this.getTitle());var n=this._getContent();"function"==typeof n&&(n=n.call(this.element)),this.setElementContent(t.find(".popover-body"),n),t.removeClass("fade show")},s._getContent=function(){return this.element.getAttribute("data-content")||this.config.content},s._cleanTipClass=function(){var t=e(this.getTipElement()),n=t.attr("class").match(Z);null!==n&&n.length>0&&t.removeClass(n.join(""))},r._jQueryInterface=function(t){return this.each((function(){var n=e(this).data("bs.popover"),i="object"==typeof t?t:null;if((n||!/dispose|hide/.test(t))&&(n||(n=new r(this,i),e(this).data("bs.popover",n)),"string"==typeof t)){if(void 0===n[t])throw new TypeError('No method named "'+t+'"');n[t]()}}))},o(r,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return tt}},{key:"NAME",get:function(){return G}},{key:"DATA_KEY",get:function(){return"bs.popover"}},{key:"Event",get:function(){return nt}},{key:"EVENT_KEY",get:function(){return".bs.popover"}},{key:"DefaultType",get:function(){return et}}]),r}($);e.fn[G]=it._jQueryInterface,e.fn[G].Constructor=it,e.fn[G].noConflict=function(){return e.fn[G]=J,it._jQueryInterface};var ot="scrollspy",rt=e.fn[ot],st={offset:10,method:"auto",target:""},at={offset:"number",method:"string",target:"(string|element)"},lt=function(){function t(t,n){var i=this;this._element=t,this._scrollElement="BODY"===t.tagName?window:t,this._config=this._getConfig(n),this._selector=this._config.target+" .nav-link,"+this._config.target+" .list-group-item,"+this._config.target+" .dropdown-item",this._offsets=[],this._targets=[],this._activeTarget=null,this._scrollHeight=0,e(this._scrollElement).on("scroll.bs.scrollspy",(function(t){return i._process(t)})),this.refresh(),this._process()}var n=t.prototype;return n.refresh=function(){var t=this,n=this._scrollElement===this._scrollElement.window?"offset":"position",i="auto"===this._config.method?n:this._config.method,o="position"===i?this._getScrollTop():0;this._offsets=[],this._targets=[],this._scrollHeight=this._getScrollHeight(),[].slice.call(document.querySelectorAll(this._selector)).map((function(t){var n,r=c.getSelectorFromElement(t);if(r&&(n=document.querySelector(r)),n){var s=n.getBoundingClientRect();if(s.width||s.height)return[e(n)[i]().top+o,r]}return null})).filter((function(t){return t})).sort((function(t,e){return 
t[0]-e[0]})).forEach((function(e){t._offsets.push(e[0]),t._targets.push(e[1])}))},n.dispose=function(){e.removeData(this._element,"bs.scrollspy"),e(this._scrollElement).off(".bs.scrollspy"),this._element=null,this._scrollElement=null,this._config=null,this._selector=null,this._offsets=null,this._targets=null,this._activeTarget=null,this._scrollHeight=null},n._getConfig=function(t){if("string"!=typeof(t=a(a({},st),"object"==typeof t&&t?t:{})).target&&c.isElement(t.target)){var n=e(t.target).attr("id");n||(n=c.getUID(ot),e(t.target).attr("id",n)),t.target="#"+n}return c.typeCheckConfig(ot,t,at),t},n._getScrollTop=function(){return this._scrollElement===window?this._scrollElement.pageYOffset:this._scrollElement.scrollTop},n._getScrollHeight=function(){return this._scrollElement.scrollHeight||Math.max(document.body.scrollHeight,document.documentElement.scrollHeight)},n._getOffsetHeight=function(){return this._scrollElement===window?window.innerHeight:this._scrollElement.getBoundingClientRect().height},n._process=function(){var t=this._getScrollTop()+this._config.offset,e=this._getScrollHeight(),n=this._config.offset+e-this._getOffsetHeight();if(this._scrollHeight!==e&&this.refresh(),t>=n){var i=this._targets[this._targets.length-1];this._activeTarget!==i&&this._activate(i)}else{if(this._activeTarget&&t0)return this._activeTarget=null,void this._clear();for(var o=this._offsets.length;o--;)this._activeTarget!==this._targets[o]&&t>=this._offsets[o]&&(void 0===this._offsets[o+1]||t li > .active":".active";i=(i=e.makeArray(e(o).find(s)))[i.length-1]}var a=e.Event("hide.bs.tab",{relatedTarget:this._element}),l=e.Event("show.bs.tab",{relatedTarget:i});if(i&&e(i).trigger(a),e(this._element).trigger(l),!l.isDefaultPrevented()&&!a.isDefaultPrevented()){r&&(n=document.querySelector(r)),this._activate(this._element,o);var u=function(){var n=e.Event("hidden.bs.tab",{relatedTarget:t._element}),o=e.Event("shown.bs.tab",{relatedTarget:i});e(i).trigger(n),e(t._element).trigger(o)};n?this._activate(n,n.parentNode,u):u()}}},n.dispose=function(){e.removeData(this._element,"bs.tab"),this._element=null},n._activate=function(t,n,i){var o=this,r=(!n||"UL"!==n.nodeName&&"OL"!==n.nodeName?e(n).children(".active"):e(n).find("> li > .active"))[0],s=i&&r&&e(r).hasClass("fade"),a=function(){return o._transitionComplete(t,r,i)};if(r&&s){var l=c.getTransitionDurationFromElement(r);e(r).removeClass("show").one(c.TRANSITION_END,a).emulateTransitionEnd(l)}else a()},n._transitionComplete=function(t,n,i){if(n){e(n).removeClass("active");var o=e(n.parentNode).find("> .dropdown-menu .active")[0];o&&e(o).removeClass("active"),"tab"===n.getAttribute("role")&&n.setAttribute("aria-selected",!1)}if(e(t).addClass("active"),"tab"===t.getAttribute("role")&&t.setAttribute("aria-selected",!0),c.reflow(t),t.classList.contains("fade")&&t.classList.add("show"),t.parentNode&&e(t.parentNode).hasClass("dropdown-menu")){var r=e(t).closest(".dropdown")[0];if(r){var s=[].slice.call(r.querySelectorAll(".dropdown-toggle"));e(s).addClass("active")}t.setAttribute("aria-expanded",!0)}i&&i()},t._jQueryInterface=function(n){return this.each((function(){var i=e(this),o=i.data("bs.tab");if(o||(o=new t(this),i.data("bs.tab",o)),"string"==typeof n){if(void 0===o[n])throw new TypeError('No method named "'+n+'"');o[n]()}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}}]),t}();e(document).on("click.bs.tab.data-api",'[data-toggle="tab"], [data-toggle="pill"], 
[data-toggle="list"]',(function(t){t.preventDefault(),ut._jQueryInterface.call(e(this),"show")})),e.fn.tab=ut._jQueryInterface,e.fn.tab.Constructor=ut,e.fn.tab.noConflict=function(){return e.fn.tab=ct,ut._jQueryInterface};var ht=e.fn.toast,ft={animation:"boolean",autohide:"boolean",delay:"number"},dt={animation:!0,autohide:!0,delay:500},pt=function(){function t(t,e){this._element=t,this._config=this._getConfig(e),this._timeout=null,this._setListeners()}var n=t.prototype;return n.show=function(){var t=this,n=e.Event("show.bs.toast");if(e(this._element).trigger(n),!n.isDefaultPrevented()){this._config.animation&&this._element.classList.add("fade");var i=function(){t._element.classList.remove("showing"),t._element.classList.add("show"),e(t._element).trigger("shown.bs.toast"),t._config.autohide&&(t._timeout=setTimeout((function(){t.hide()}),t._config.delay))};if(this._element.classList.remove("hide"),c.reflow(this._element),this._element.classList.add("showing"),this._config.animation){var o=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,i).emulateTransitionEnd(o)}else i()}},n.hide=function(){if(this._element.classList.contains("show")){var t=e.Event("hide.bs.toast");e(this._element).trigger(t),t.isDefaultPrevented()||this._close()}},n.dispose=function(){clearTimeout(this._timeout),this._timeout=null,this._element.classList.contains("show")&&this._element.classList.remove("show"),e(this._element).off("click.dismiss.bs.toast"),e.removeData(this._element,"bs.toast"),this._element=null,this._config=null},n._getConfig=function(t){return t=a(a(a({},dt),e(this._element).data()),"object"==typeof t&&t?t:{}),c.typeCheckConfig("toast",t,this.constructor.DefaultType),t},n._setListeners=function(){var t=this;e(this._element).on("click.dismiss.bs.toast",'[data-dismiss="toast"]',(function(){return t.hide()}))},n._close=function(){var t=this,n=function(){t._element.classList.add("hide"),e(t._element).trigger("hidden.bs.toast")};if(this._element.classList.remove("show"),this._config.animation){var i=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,n).emulateTransitionEnd(i)}else n()},t._jQueryInterface=function(n){return this.each((function(){var i=e(this),o=i.data("bs.toast");if(o||(o=new t(this,"object"==typeof n&&n),i.data("bs.toast",o)),"string"==typeof n){if(void 0===o[n])throw new TypeError('No method named "'+n+'"');o[n](this)}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"DefaultType",get:function(){return ft}},{key:"Default",get:function(){return dt}}]),t}();e.fn.toast=pt._jQueryInterface,e.fn.toast.Constructor=pt,e.fn.toast.noConflict=function(){return e.fn.toast=ht,pt._jQueryInterface},t.Alert=f,t.Button=p,t.Carousel=w,t.Collapse=D,t.Dropdown=x,t.Modal=F,t.Popover=it,t.Scrollspy=lt,t.Tab=ut,t.Toast=pt,t.Tooltip=$,t.Util=c,Object.defineProperty(t,"__esModule",{value:!0})}(e,n(0),n(1))},function(t,e){var n;n=function(){return this}();try{n=n||new Function("return this")()}catch(t){"object"==typeof window&&(n=window)}t.exports=n},function(t,e,n){"use strict";n.r(e);n(0),n(3),n.p;$(document).ready(()=>{!function(){var t=document.getElementById("bd-docs-nav");let e=parseInt(sessionStorage.getItem("sidebar-scroll-top"),10);if(isNaN(e)){var n,i=t.querySelectorAll(".active"),o=0;for(n=i.length-1;n>0;n--){var r=i[n];void 0!==r&&(o+=r.offsetTop)}o-=t.offsetTop,void 0!==r&&o>.5*t.clientHeight&&(t.scrollTop=o-.2*t.clientHeight)}else 
t.scrollTop=e;window.addEventListener("beforeunload",()=>{sessionStorage.setItem("sidebar-scroll-top",t.scrollTop)})}(),$(window).on("activate.bs.scrollspy",(function(){document.querySelectorAll("#bd-toc-nav a").forEach(t=>{t.parentElement.classList.remove("active")}),document.querySelectorAll("#bd-toc-nav a.active").forEach(t=>{t.parentElement.classList.add("active")})}))})}]); \ No newline at end of file diff --git a/0.4/_static/language_data.js b/0.4/_static/language_data.js new file mode 100644 index 00000000..250f5665 --- /dev/null +++ b/0.4/_static/language_data.js @@ -0,0 +1,199 @@ +/* + * language_data.js + * ~~~~~~~~~~~~~~~~ + * + * This script contains the language-specific data used by searchtools.js, + * namely the list of stopwords, stemmer, scorer and splitter. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ + +var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"]; + + +/* Non-minified version is copied as a separate JS file, is available */ + +/** + * Porter Stemmer + */ +var Stemmer = function() { + + var step2list = { + ational: 'ate', + tional: 'tion', + enci: 'ence', + anci: 'ance', + izer: 'ize', + bli: 'ble', + alli: 'al', + entli: 'ent', + eli: 'e', + ousli: 'ous', + ization: 'ize', + ation: 'ate', + ator: 'ate', + alism: 'al', + iveness: 'ive', + fulness: 'ful', + ousness: 'ous', + aliti: 'al', + iviti: 'ive', + biliti: 'ble', + logi: 'log' + }; + + var step3list = { + icate: 'ic', + ative: '', + alize: 'al', + iciti: 'ic', + ical: 'ic', + ful: '', + ness: '' + }; + + var c = "[^aeiou]"; // consonant + var v = "[aeiouy]"; // vowel + var C = c + "[^aeiouy]*"; // consonant sequence + var V = v + "[aeiou]*"; // vowel sequence + + var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0 + var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 + var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1 + var s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + diff --git a/0.4/_static/minus.png b/0.4/_static/minus.png new file mode 100644 index 00000000..d96755fd Binary files /dev/null and b/0.4/_static/minus.png differ diff --git a/0.4/_static/nbsphinx-broken-thumbnail.svg b/0.4/_static/nbsphinx-broken-thumbnail.svg new file mode 100644 index 00000000..4919ca88 --- /dev/null +++ b/0.4/_static/nbsphinx-broken-thumbnail.svg @@ -0,0 +1,9 @@ + + + + diff --git a/0.4/_static/nbsphinx-code-cells.css b/0.4/_static/nbsphinx-code-cells.css new file mode 100644 index 00000000..a3fb27c3 --- /dev/null +++ b/0.4/_static/nbsphinx-code-cells.css @@ -0,0 +1,259 @@ +/* remove conflicting styling from Sphinx themes */ +div.nbinput.container div.prompt *, +div.nboutput.container div.prompt *, +div.nbinput.container div.input_area pre, +div.nboutput.container div.output_area pre, +div.nbinput.container div.input_area .highlight, +div.nboutput.container 
div.output_area .highlight { + border: none; + padding: 0; + margin: 0; + box-shadow: none; +} + +div.nbinput.container > div[class*=highlight], +div.nboutput.container > div[class*=highlight] { + margin: 0; +} + +div.nbinput.container div.prompt *, +div.nboutput.container div.prompt * { + background: none; +} + +div.nboutput.container div.output_area .highlight, +div.nboutput.container div.output_area pre { + background: unset; +} + +div.nboutput.container div.output_area div.highlight { + color: unset; /* override Pygments text color */ +} + +/* avoid gaps between output lines */ +div.nboutput.container div[class*=highlight] pre { + line-height: normal; +} + +/* input/output containers */ +div.nbinput.container, +div.nboutput.container { + display: -webkit-flex; + display: flex; + align-items: flex-start; + margin: 0; + width: 100%; +} +@media (max-width: 540px) { + div.nbinput.container, + div.nboutput.container { + flex-direction: column; + } +} + +/* input container */ +div.nbinput.container { + padding-top: 5px; +} + +/* last container */ +div.nblast.container { + padding-bottom: 5px; +} + +/* input prompt */ +div.nbinput.container div.prompt pre, +/* for sphinx_immaterial theme: */ +div.nbinput.container div.prompt pre > code { + color: #307FC1; +} + +/* output prompt */ +div.nboutput.container div.prompt pre, +/* for sphinx_immaterial theme: */ +div.nboutput.container div.prompt pre > code { + color: #BF5B3D; +} + +/* all prompts */ +div.nbinput.container div.prompt, +div.nboutput.container div.prompt { + width: 4.5ex; + padding-top: 5px; + position: relative; + user-select: none; +} + +div.nbinput.container div.prompt > div, +div.nboutput.container div.prompt > div { + position: absolute; + right: 0; + margin-right: 0.3ex; +} + +@media (max-width: 540px) { + div.nbinput.container div.prompt, + div.nboutput.container div.prompt { + width: unset; + text-align: left; + padding: 0.4em; + } + div.nboutput.container div.prompt.empty { + padding: 0; + } + + div.nbinput.container div.prompt > div, + div.nboutput.container div.prompt > div { + position: unset; + } +} + +/* disable scrollbars and line breaks on prompts */ +div.nbinput.container div.prompt pre, +div.nboutput.container div.prompt pre { + overflow: hidden; + white-space: pre; +} + +/* input/output area */ +div.nbinput.container div.input_area, +div.nboutput.container div.output_area { + -webkit-flex: 1; + flex: 1; + overflow: auto; +} +@media (max-width: 540px) { + div.nbinput.container div.input_area, + div.nboutput.container div.output_area { + width: 100%; + } +} + +/* input area */ +div.nbinput.container div.input_area { + border: 1px solid #e0e0e0; + border-radius: 2px; + /*background: #f5f5f5;*/ +} + +/* override MathJax center alignment in output cells */ +div.nboutput.container div[class*=MathJax] { + text-align: left !important; +} + +/* override sphinx.ext.imgmath center alignment in output cells */ +div.nboutput.container div.math p { + text-align: left; +} + +/* standard error */ +div.nboutput.container div.output_area.stderr { + background: #fdd; +} + +/* ANSI colors */ +.ansi-black-fg { color: #3E424D; } +.ansi-black-bg { background-color: #3E424D; } +.ansi-black-intense-fg { color: #282C36; } +.ansi-black-intense-bg { background-color: #282C36; } +.ansi-red-fg { color: #E75C58; } +.ansi-red-bg { background-color: #E75C58; } +.ansi-red-intense-fg { color: #B22B31; } +.ansi-red-intense-bg { background-color: #B22B31; } +.ansi-green-fg { color: #00A250; } +.ansi-green-bg { background-color: #00A250; } 
+.ansi-green-intense-fg { color: #007427; } +.ansi-green-intense-bg { background-color: #007427; } +.ansi-yellow-fg { color: #DDB62B; } +.ansi-yellow-bg { background-color: #DDB62B; } +.ansi-yellow-intense-fg { color: #B27D12; } +.ansi-yellow-intense-bg { background-color: #B27D12; } +.ansi-blue-fg { color: #208FFB; } +.ansi-blue-bg { background-color: #208FFB; } +.ansi-blue-intense-fg { color: #0065CA; } +.ansi-blue-intense-bg { background-color: #0065CA; } +.ansi-magenta-fg { color: #D160C4; } +.ansi-magenta-bg { background-color: #D160C4; } +.ansi-magenta-intense-fg { color: #A03196; } +.ansi-magenta-intense-bg { background-color: #A03196; } +.ansi-cyan-fg { color: #60C6C8; } +.ansi-cyan-bg { background-color: #60C6C8; } +.ansi-cyan-intense-fg { color: #258F8F; } +.ansi-cyan-intense-bg { background-color: #258F8F; } +.ansi-white-fg { color: #C5C1B4; } +.ansi-white-bg { background-color: #C5C1B4; } +.ansi-white-intense-fg { color: #A1A6B2; } +.ansi-white-intense-bg { background-color: #A1A6B2; } + +.ansi-default-inverse-fg { color: #FFFFFF; } +.ansi-default-inverse-bg { background-color: #000000; } + +.ansi-bold { font-weight: bold; } +.ansi-underline { text-decoration: underline; } + + +div.nbinput.container div.input_area div[class*=highlight] > pre, +div.nboutput.container div.output_area div[class*=highlight] > pre, +div.nboutput.container div.output_area div[class*=highlight].math, +div.nboutput.container div.output_area.rendered_html, +div.nboutput.container div.output_area > div.output_javascript, +div.nboutput.container div.output_area:not(.rendered_html) > img{ + padding: 5px; + margin: 0; +} + +/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */ +div.nbinput.container div.input_area > div[class^='highlight'], +div.nboutput.container div.output_area > div[class^='highlight']{ + overflow-y: hidden; +} + +/* hide copy button on prompts for 'sphinx_copybutton' extension ... */ +.prompt .copybtn, +/* ... 
and 'sphinx_immaterial' theme */ +.prompt .md-clipboard.md-icon { + display: none; +} + +/* Some additional styling taken form the Jupyter notebook CSS */ +.jp-RenderedHTMLCommon table, +div.rendered_html table { + border: none; + border-collapse: collapse; + border-spacing: 0; + color: black; + font-size: 12px; + table-layout: fixed; +} +.jp-RenderedHTMLCommon thead, +div.rendered_html thead { + border-bottom: 1px solid black; + vertical-align: bottom; +} +.jp-RenderedHTMLCommon tr, +.jp-RenderedHTMLCommon th, +.jp-RenderedHTMLCommon td, +div.rendered_html tr, +div.rendered_html th, +div.rendered_html td { + text-align: right; + vertical-align: middle; + padding: 0.5em 0.5em; + line-height: normal; + white-space: normal; + max-width: none; + border: none; +} +.jp-RenderedHTMLCommon th, +div.rendered_html th { + font-weight: bold; +} +.jp-RenderedHTMLCommon tbody tr:nth-child(odd), +div.rendered_html tbody tr:nth-child(odd) { + background: #f5f5f5; +} +.jp-RenderedHTMLCommon tbody tr:hover, +div.rendered_html tbody tr:hover { + background: rgba(66, 165, 245, 0.2); +} + diff --git a/0.4/_static/nbsphinx-gallery.css b/0.4/_static/nbsphinx-gallery.css new file mode 100644 index 00000000..365c27a9 --- /dev/null +++ b/0.4/_static/nbsphinx-gallery.css @@ -0,0 +1,31 @@ +.nbsphinx-gallery { + display: grid; + grid-template-columns: repeat(auto-fill, minmax(160px, 1fr)); + gap: 5px; + margin-top: 1em; + margin-bottom: 1em; +} + +.nbsphinx-gallery > a { + padding: 5px; + border: 1px dotted currentColor; + border-radius: 2px; + text-align: center; +} + +.nbsphinx-gallery > a:hover { + border-style: solid; +} + +.nbsphinx-gallery img { + max-width: 100%; + max-height: 100%; +} + +.nbsphinx-gallery > a > div:first-child { + display: flex; + align-items: start; + justify-content: center; + height: 120px; + margin-bottom: 5px; +} diff --git a/0.4/_static/nbsphinx-no-thumbnail.svg b/0.4/_static/nbsphinx-no-thumbnail.svg new file mode 100644 index 00000000..9dca7588 --- /dev/null +++ b/0.4/_static/nbsphinx-no-thumbnail.svg @@ -0,0 +1,9 @@ + + + + diff --git a/0.4/_static/plus.png b/0.4/_static/plus.png new file mode 100644 index 00000000..7107cec9 Binary files /dev/null and b/0.4/_static/plus.png differ diff --git a/0.4/_static/pygments.css b/0.4/_static/pygments.css new file mode 100644 index 00000000..f227e5c6 --- /dev/null +++ b/0.4/_static/pygments.css @@ -0,0 +1,83 @@ +pre { line-height: 125%; } +td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +.highlight .hll { background-color: #ffffcc } +.highlight { background: #f8f8f8; } +.highlight .c { color: #8f5902; font-style: italic } /* Comment */ +.highlight .err { color: #a40000; border: 1px solid #ef2929 } /* Error */ +.highlight .g { color: #000000 } /* Generic */ +.highlight .k { color: #204a87; font-weight: bold } /* Keyword */ +.highlight .l { color: #000000 } /* Literal */ +.highlight .n { color: #000000 } /* Name */ +.highlight .o { color: #ce5c00; font-weight: bold } /* Operator */ +.highlight .x { color: #000000 } /* Other */ +.highlight .p { color: #000000; font-weight: bold } /* Punctuation */ +.highlight .ch { color: #8f5902; font-style: italic } /* Comment.Hashbang */ 
+.highlight .cm { color: #8f5902; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #8f5902; font-style: italic } /* Comment.Preproc */ +.highlight .cpf { color: #8f5902; font-style: italic } /* Comment.PreprocFile */ +.highlight .c1 { color: #8f5902; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #8f5902; font-style: italic } /* Comment.Special */ +.highlight .gd { color: #a40000 } /* Generic.Deleted */ +.highlight .ge { color: #000000; font-style: italic } /* Generic.Emph */ +.highlight .gr { color: #ef2929 } /* Generic.Error */ +.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.highlight .gi { color: #00A000 } /* Generic.Inserted */ +.highlight .go { color: #000000; font-style: italic } /* Generic.Output */ +.highlight .gp { color: #8f5902 } /* Generic.Prompt */ +.highlight .gs { color: #000000; font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.highlight .gt { color: #a40000; font-weight: bold } /* Generic.Traceback */ +.highlight .kc { color: #204a87; font-weight: bold } /* Keyword.Constant */ +.highlight .kd { color: #204a87; font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { color: #204a87; font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { color: #204a87; font-weight: bold } /* Keyword.Pseudo */ +.highlight .kr { color: #204a87; font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #204a87; font-weight: bold } /* Keyword.Type */ +.highlight .ld { color: #000000 } /* Literal.Date */ +.highlight .m { color: #0000cf; font-weight: bold } /* Literal.Number */ +.highlight .s { color: #4e9a06 } /* Literal.String */ +.highlight .na { color: #c4a000 } /* Name.Attribute */ +.highlight .nb { color: #204a87 } /* Name.Builtin */ +.highlight .nc { color: #000000 } /* Name.Class */ +.highlight .no { color: #000000 } /* Name.Constant */ +.highlight .nd { color: #5c35cc; font-weight: bold } /* Name.Decorator */ +.highlight .ni { color: #ce5c00 } /* Name.Entity */ +.highlight .ne { color: #cc0000; font-weight: bold } /* Name.Exception */ +.highlight .nf { color: #000000 } /* Name.Function */ +.highlight .nl { color: #f57900 } /* Name.Label */ +.highlight .nn { color: #000000 } /* Name.Namespace */ +.highlight .nx { color: #000000 } /* Name.Other */ +.highlight .py { color: #000000 } /* Name.Property */ +.highlight .nt { color: #204a87; font-weight: bold } /* Name.Tag */ +.highlight .nv { color: #000000 } /* Name.Variable */ +.highlight .ow { color: #204a87; font-weight: bold } /* Operator.Word */ +.highlight .pm { color: #000000; font-weight: bold } /* Punctuation.Marker */ +.highlight .w { color: #f8f8f8 } /* Text.Whitespace */ +.highlight .mb { color: #0000cf; font-weight: bold } /* Literal.Number.Bin */ +.highlight .mf { color: #0000cf; font-weight: bold } /* Literal.Number.Float */ +.highlight .mh { color: #0000cf; font-weight: bold } /* Literal.Number.Hex */ +.highlight .mi { color: #0000cf; font-weight: bold } /* Literal.Number.Integer */ +.highlight .mo { color: #0000cf; font-weight: bold } /* Literal.Number.Oct */ +.highlight .sa { color: #4e9a06 } /* Literal.String.Affix */ +.highlight .sb { color: #4e9a06 } /* Literal.String.Backtick */ +.highlight .sc { color: #4e9a06 } /* Literal.String.Char */ +.highlight .dl { color: #4e9a06 } /* Literal.String.Delimiter */ +.highlight .sd { color: #8f5902; font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #4e9a06 } /* Literal.String.Double */ +.highlight .se 
{ color: #4e9a06 } /* Literal.String.Escape */ +.highlight .sh { color: #4e9a06 } /* Literal.String.Heredoc */ +.highlight .si { color: #4e9a06 } /* Literal.String.Interpol */ +.highlight .sx { color: #4e9a06 } /* Literal.String.Other */ +.highlight .sr { color: #4e9a06 } /* Literal.String.Regex */ +.highlight .s1 { color: #4e9a06 } /* Literal.String.Single */ +.highlight .ss { color: #4e9a06 } /* Literal.String.Symbol */ +.highlight .bp { color: #3465a4 } /* Name.Builtin.Pseudo */ +.highlight .fm { color: #000000 } /* Name.Function.Magic */ +.highlight .vc { color: #000000 } /* Name.Variable.Class */ +.highlight .vg { color: #000000 } /* Name.Variable.Global */ +.highlight .vi { color: #000000 } /* Name.Variable.Instance */ +.highlight .vm { color: #000000 } /* Name.Variable.Magic */ +.highlight .il { color: #0000cf; font-weight: bold } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/0.4/_static/searchtools.js b/0.4/_static/searchtools.js new file mode 100644 index 00000000..7918c3fa --- /dev/null +++ b/0.4/_static/searchtools.js @@ -0,0 +1,574 @@ +/* + * searchtools.js + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +/** + * Simple result scoring code. + */ +if (typeof Scorer === "undefined") { + var Scorer = { + // Implement the following function to further tweak the score for each result + // The function takes a result array [docname, title, anchor, descr, score, filename] + // and returns the new score. + /* + score: result => { + const [docname, title, anchor, descr, score, filename] = result + return score + }, + */ + + // query matches the full name of an object + objNameMatch: 11, + // or matches in the last dotted part of the object name + objPartialMatch: 6, + // Additive scores depending on the priority of the object + objPrio: { + 0: 15, // used to be importantResults + 1: 5, // used to be objectResults + 2: -5, // used to be unimportantResults + }, + // Used when the priority is not in the mapping. 
+ objPrioDefault: 0, + + // query found in title + title: 15, + partialTitle: 7, + // query found in terms + term: 5, + partialTerm: 2, + }; +} + +const _removeChildren = (element) => { + while (element && element.lastChild) element.removeChild(element.lastChild); +}; + +/** + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ +const _escapeRegExp = (string) => + string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + +const _displayItem = (item, searchTerms, highlightTerms) => { + const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; + const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; + const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; + const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + const contentRoot = document.documentElement.dataset.content_root; + + const [docName, title, anchor, descr, score, _filename] = item; + + let listItem = document.createElement("li"); + let requestUrl; + let linkUrl; + if (docBuilder === "dirhtml") { + // dirhtml builder + let dirname = docName + "/"; + if (dirname.match(/\/index\/$/)) + dirname = dirname.substring(0, dirname.length - 6); + else if (dirname === "index/") dirname = ""; + requestUrl = contentRoot + dirname; + linkUrl = requestUrl; + } else { + // normal html builders + requestUrl = contentRoot + docName + docFileSuffix; + linkUrl = docName + docLinkSuffix; + } + let linkEl = listItem.appendChild(document.createElement("a")); + linkEl.href = linkUrl + anchor; + linkEl.dataset.score = score; + linkEl.innerHTML = title; + if (descr) { + listItem.appendChild(document.createElement("span")).innerHTML = + " (" + descr + ")"; + // highlight search terms in the description + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + } + else if (showSearchSummary) + fetch(requestUrl) + .then((responseData) => responseData.text()) + .then((data) => { + if (data) + listItem.appendChild( + Search.makeSearchSummary(data, searchTerms) + ); + // highlight search terms in the summary + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + }); + Search.output.appendChild(listItem); +}; +const _finishSearch = (resultCount) => { + Search.stopPulse(); + Search.title.innerText = _("Search Results"); + if (!resultCount) + Search.status.innerText = Documentation.gettext( + "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." + ); + else + Search.status.innerText = _( + `Search finished, found ${resultCount} page(s) matching the search query.` + ); +}; +const _displayNextItem = ( + results, + resultCount, + searchTerms, + highlightTerms, +) => { + // results left, load the summary and display it + // this is intended to be dynamic (don't sub resultsCount) + if (results.length) { + _displayItem(results.pop(), searchTerms, highlightTerms); + setTimeout( + () => _displayNextItem(results, resultCount, searchTerms, highlightTerms), + 5 + ); + } + // search finished, update title and status message + else _finishSearch(resultCount); +}; + +/** + * Default splitQuery function. Can be overridden in ``sphinx.search`` with a + * custom function per language. 
+ * + * The regular expression works by splitting the string on consecutive characters + * that are not Unicode letters, numbers, underscores, or emoji characters. + * This is the same as ``\W+`` in Python, preserving the surrogate pair area. + */ +if (typeof splitQuery === "undefined") { + var splitQuery = (query) => query + .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu) + .filter(term => term) // remove remaining empty strings +} + +/** + * Search Module + */ +const Search = { + _index: null, + _queued_query: null, + _pulse_status: -1, + + htmlToText: (htmlString) => { + const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); + htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() }); + const docContent = htmlElement.querySelector('[role="main"]'); + if (docContent !== undefined) return docContent.textContent; + console.warn( + "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Could you check your theme or template." + ); + return ""; + }, + + init: () => { + const query = new URLSearchParams(window.location.search).get("q"); + document + .querySelectorAll('input[name="q"]') + .forEach((el) => (el.value = query)); + if (query) Search.performSearch(query); + }, + + loadIndex: (url) => + (document.body.appendChild(document.createElement("script")).src = url), + + setIndex: (index) => { + Search._index = index; + if (Search._queued_query !== null) { + const query = Search._queued_query; + Search._queued_query = null; + Search.query(query); + } + }, + + hasIndex: () => Search._index !== null, + + deferQuery: (query) => (Search._queued_query = query), + + stopPulse: () => (Search._pulse_status = -1), + + startPulse: () => { + if (Search._pulse_status >= 0) return; + + const pulse = () => { + Search._pulse_status = (Search._pulse_status + 1) % 4; + Search.dots.innerText = ".".repeat(Search._pulse_status); + if (Search._pulse_status >= 0) window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something (or wait until index is loaded) + */ + performSearch: (query) => { + // create the required interface elements + const searchText = document.createElement("h2"); + searchText.textContent = _("Searching"); + const searchSummary = document.createElement("p"); + searchSummary.classList.add("search-summary"); + searchSummary.innerText = ""; + const searchList = document.createElement("ul"); + searchList.classList.add("search"); + + const out = document.getElementById("search-results"); + Search.title = out.appendChild(searchText); + Search.dots = Search.title.appendChild(document.createElement("span")); + Search.status = out.appendChild(searchSummary); + Search.output = out.appendChild(searchList); + + const searchProgress = document.getElementById("search-progress"); + // Some themes don't use the search progress node + if (searchProgress) { + searchProgress.innerText = _("Preparing search..."); + } + Search.startPulse(); + + // index already loaded, the browser was quick! 
+ if (Search.hasIndex()) Search.query(query); + else Search.deferQuery(query); + }, + + /** + * execute search (requires search index to be loaded) + */ + query: (query) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // stem the search terms and add them to the correct list + const stemmer = new Stemmer(); + const searchTerms = new Set(); + const excludedTerms = new Set(); + const highlightTerms = new Set(); + const objectTerms = new Set(splitQuery(query.toLowerCase().trim())); + splitQuery(query.trim()).forEach((queryTerm) => { + const queryTermLower = queryTerm.toLowerCase(); + + // maybe skip this "word" + // stopwords array is from language_data.js + if ( + stopwords.indexOf(queryTermLower) !== -1 || + queryTerm.match(/^\d+$/) + ) + return; + + // stem the word + let word = stemmer.stemWord(queryTermLower); + // select the correct list + if (word[0] === "-") excludedTerms.add(word.substr(1)); + else { + searchTerms.add(word); + highlightTerms.add(queryTermLower); + } + }); + + if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js + localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" ")) + } + + // console.debug("SEARCH: searching for:"); + // console.info("required: ", [...searchTerms]); + // console.info("excluded: ", [...excludedTerms]); + + // array of [docname, title, anchor, descr, score, filename] + let results = []; + _removeChildren(document.getElementById("search-progress")); + + const queryLower = query.toLowerCase(); + for (const [title, foundTitles] of Object.entries(allTitles)) { + if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) { + for (const [file, id] of foundTitles) { + let score = Math.round(100 * queryLower.length / title.length) + results.push([ + docNames[file], + titles[file] !== title ? `${titles[file]} > ${title}` : title, + id !== null ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // search for explicit entries in index directives + for (const [entry, foundEntries] of Object.entries(indexEntries)) { + if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { + for (const [file, id] of foundEntries) { + let score = Math.round(100 * queryLower.length / entry.length) + results.push([ + docNames[file], + titles[file], + id ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // lookup as object + objectTerms.forEach((term) => + results.push(...Search.performObjectSearch(term, objectTerms)) + ); + + // lookup as search terms in fulltext + results.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + + // let the scorer override scores with a custom scoring function + if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item))); + + // now sort the results by score (in opposite order of appearance, since the + // display function below uses pop() to retrieve items) and then + // alphabetically + results.sort((a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 
1 : -1; + }); + + // remove duplicate search results + // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept + let seen = new Set(); + results = results.reverse().reduce((acc, result) => { + let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(','); + if (!seen.has(resultStr)) { + acc.push(result); + seen.add(resultStr); + } + return acc; + }, []); + + results = results.reverse(); + + // for debugging + //Search.lastresults = results.slice(); // a copy + // console.info("search results:", Search.lastresults); + + // print the results + _displayNextItem(results, results.length, searchTerms, highlightTerms); + }, + + /** + * search for object names + */ + performObjectSearch: (object, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const objects = Search._index.objects; + const objNames = Search._index.objnames; + const titles = Search._index.titles; + + const results = []; + + const objectSearchCallback = (prefix, match) => { + const name = match[4] + const fullname = (prefix ? prefix + "." : "") + name; + const fullnameLower = fullname.toLowerCase(); + if (fullnameLower.indexOf(object) < 0) return; + + let score = 0; + const parts = fullnameLower.split("."); + + // check for different match types: exact matches of full name or + // "last name" (i.e. last dotted part) + if (fullnameLower === object || parts.slice(-1)[0] === object) + score += Scorer.objNameMatch; + else if (parts.slice(-1)[0].indexOf(object) > -1) + score += Scorer.objPartialMatch; // matches in last name + + const objName = objNames[match[1]][2]; + const title = titles[match[0]]; + + // If more than one term searched for, we require other words to be + // found in the name/title/description + const otherTerms = new Set(objectTerms); + otherTerms.delete(object); + if (otherTerms.size > 0) { + const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase(); + if ( + [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0) + ) + return; + } + + let anchor = match[3]; + if (anchor === "") anchor = fullname; + else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname; + + const descr = objName + _(", in ") + title; + + // add custom score for some objects according to scorer + if (Scorer.objPrio.hasOwnProperty(match[2])) + score += Scorer.objPrio[match[2]]; + else score += Scorer.objPrioDefault; + + results.push([ + docNames[match[0]], + fullname, + "#" + anchor, + descr, + score, + filenames[match[0]], + ]); + }; + Object.keys(objects).forEach((prefix) => + objects[prefix].forEach((array) => + objectSearchCallback(prefix, array) + ) + ); + return results; + }, + + /** + * search for full-text terms in the index + */ + performTermsSearch: (searchTerms, excludedTerms) => { + // prepare search + const terms = Search._index.terms; + const titleTerms = Search._index.titleterms; + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + + const scoreMap = new Map(); + const fileMap = new Map(); + + // perform the search on the required terms + searchTerms.forEach((word) => { + const files = []; + const arr = [ + { files: terms[word], score: Scorer.term }, + { files: titleTerms[word], score: Scorer.title }, + ]; + // add support for partial matches + if (word.length > 2) { + const escapedWord = _escapeRegExp(word); + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord) && 
!terms[word]) + arr.push({ files: terms[term], score: Scorer.partialTerm }); + }); + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord) && !titleTerms[word]) + arr.push({ files: titleTerms[word], score: Scorer.partialTitle }); + }); + } + + // no match but word was a required one + if (arr.every((record) => record.files === undefined)) return; + + // found search word in contents + arr.forEach((record) => { + if (record.files === undefined) return; + + let recordFiles = record.files; + if (recordFiles.length === undefined) recordFiles = [recordFiles]; + files.push(...recordFiles); + + // set score for the word in each file + recordFiles.forEach((file) => { + if (!scoreMap.has(file)) scoreMap.set(file, {}); + scoreMap.get(file)[word] = record.score; + }); + }); + + // create the mapping + files.forEach((file) => { + if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1) + fileMap.get(file).push(word); + else fileMap.set(file, [word]); + }); + }); + + // now check if the files don't contain excluded terms + const results = []; + for (const [file, wordList] of fileMap) { + // check if all requirements are matched + + // as search terms with length < 3 are discarded + const filteredTermCount = [...searchTerms].filter( + (term) => term.length > 2 + ).length; + if ( + wordList.length !== searchTerms.size && + wordList.length !== filteredTermCount + ) + continue; + + // ensure that none of the excluded terms is in the search result + if ( + [...excludedTerms].some( + (term) => + terms[term] === file || + titleTerms[term] === file || + (terms[term] || []).includes(file) || + (titleTerms[term] || []).includes(file) + ) + ) + break; + + // select one (max) score for the file. + const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w])); + // add result to the result list + results.push([ + docNames[file], + titles[file], + "", + null, + score, + filenames[file], + ]); + } + return results; + }, + + /** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words. + */ + makeSearchSummary: (htmlText, keywords) => { + const text = Search.htmlToText(htmlText); + if (text === "") return null; + + const textLower = text.toLowerCase(); + const actualStartPosition = [...keywords] + .map((k) => textLower.indexOf(k.toLowerCase())) + .filter((i) => i > -1) + .slice(-1)[0]; + const startWithContext = Math.max(actualStartPosition - 120, 0); + + const top = startWithContext === 0 ? "" : "..."; + const tail = startWithContext + 240 < text.length ? "..." 
: ""; + + let summary = document.createElement("p"); + summary.classList.add("context"); + summary.textContent = top + text.substr(startWithContext, 240).trim() + tail; + + return summary; + }, +}; + +_ready(Search.init); diff --git a/0.4/_static/sphinx-book-theme.12a9622fbb08dcb3a2a40b2c02b83a57.js b/0.4/_static/sphinx-book-theme.12a9622fbb08dcb3a2a40b2c02b83a57.js new file mode 100644 index 00000000..b8b8704e --- /dev/null +++ b/0.4/_static/sphinx-book-theme.12a9622fbb08dcb3a2a40b2c02b83a57.js @@ -0,0 +1,18 @@ +var initTriggerNavBar=()=>{if($(window).width()<768){$("#navbar-toggler").trigger("click")}} +var scrollToActive=()=>{var navbar=document.getElementById('site-navigation') +var active_pages=navbar.querySelectorAll(".active") +var active_page=active_pages[active_pages.length-1] +if(active_page!==undefined&&active_page.offsetTop>($(window).height()*.5)){navbar.scrollTop=active_page.offsetTop-($(window).height()*.2)}} +var sbRunWhenDOMLoaded=cb=>{if(document.readyState!='loading'){cb()}else if(document.addEventListener){document.addEventListener('DOMContentLoaded',cb)}else{document.attachEvent('onreadystatechange',function(){if(document.readyState=='complete')cb()})}} +function toggleFullScreen(){var navToggler=$("#navbar-toggler");if(!document.fullscreenElement){document.documentElement.requestFullscreen();if(!navToggler.hasClass("collapsed")){navToggler.click();}}else{if(document.exitFullscreen){document.exitFullscreen();if(navToggler.hasClass("collapsed")){navToggler.click();}}}} +var initTooltips=()=>{$(document).ready(function(){$('[data-toggle="tooltip"]').tooltip();});} +var initTocHide=()=>{var scrollTimeout;var throttle=200;var tocHeight=$("#bd-toc-nav").outerHeight(true)+$(".bd-toc").outerHeight(true);var hideTocAfter=tocHeight+200;var checkTocScroll=function(){var margin_content=$(".margin, .tag_margin, .full-width, .full_width, .tag_full-width, .tag_full_width, .sidebar, .tag_sidebar, .popout, .tag_popout");margin_content.each((index,item)=>{var topOffset=$(item).offset().top-$(window).scrollTop();var bottomOffset=topOffset+$(item).outerHeight(true);var topOverlaps=((topOffset>=0)&&(topOffset=0)&&(bottomOffset20){$("div.bd-toc").removeClass("show") +return false}else{$("div.bd-toc").addClass("show")};})};var manageScrolledClassOnBody=function(){if(window.scrollY>0){document.body.classList.add("scrolled");}else{document.body.classList.remove("scrolled");}} +$(window).on('scroll',function(){if(!scrollTimeout){scrollTimeout=setTimeout(function(){checkTocScroll();manageScrolledClassOnBody();scrollTimeout=null;},throttle);}});} +var initThebeSBT=()=>{var title=$("div.section h1")[0] +if(!$(title).next().hasClass("thebe-launch-button")){$("").insertAfter($(title))} +initThebe();} +sbRunWhenDOMLoaded(initTooltips) +sbRunWhenDOMLoaded(initTriggerNavBar) +sbRunWhenDOMLoaded(scrollToActive) +sbRunWhenDOMLoaded(initTocHide) diff --git a/0.4/_static/sphinx-book-theme.acff12b8f9c144ce68a297486a2fa670.css b/0.4/_static/sphinx-book-theme.acff12b8f9c144ce68a297486a2fa670.css new file mode 100644 index 00000000..6c5887c0 --- /dev/null +++ b/0.4/_static/sphinx-book-theme.acff12b8f9c144ce68a297486a2fa670.css @@ -0,0 +1,5 @@ +/*! sphinx-book-theme CSS + * BSD 3-Clause License + * Copyright (c) 2020, EBP + * All rights reserved. 
+ */:root{--color-primary: 0, 123, 255;--color-info: 255, 193, 7;--color-warning: 253, 126, 20;--color-danger: 220, 53, 69}body{padding-top:0px !important}body img{max-width:100%}code{font-size:87.5% !important}main.bd-content{padding-top:3em !important;padding-bottom:0px !important}main.bd-content #main-content{padding-top:1.5em}main.bd-content #main-content a.headerlink{opacity:0;margin-left:.2em}main.bd-content #main-content a.headerlink:hover{background-color:transparent;color:#0071bc;opacity:1 !important}main.bd-content #main-content a,main.bd-content #main-content a:visited{color:#0071bc}main.bd-content #main-content h1,main.bd-content #main-content h2,main.bd-content #main-content h3,main.bd-content #main-content h4,main.bd-content #main-content h5{color:black}main.bd-content #main-content h1:hover a.headerlink,main.bd-content #main-content h2:hover a.headerlink,main.bd-content #main-content h3:hover a.headerlink,main.bd-content #main-content h4:hover a.headerlink,main.bd-content #main-content h5:hover a.headerlink{opacity:.5}main.bd-content #main-content h1 a.toc-backref,main.bd-content #main-content h2 a.toc-backref,main.bd-content #main-content h3 a.toc-backref,main.bd-content #main-content h4 a.toc-backref,main.bd-content #main-content h5 a.toc-backref{color:inherit}main.bd-content #main-content>div>div>div.section,main.bd-content #main-content .prev-next-bottom{padding-right:1em}main.bd-content #main-content div.section{overflow:visible !important}main.bd-content #main-content div.section ul p,main.bd-content #main-content div.section ol p{margin-bottom:0}main.bd-content #main-content span.eqno{float:right;font-size:1.2em}main.bd-content #main-content div.math{overflow-x:auto}main.bd-content #main-content img.align-center{margin-left:auto;margin-right:auto;display:block}main.bd-content #main-content img.align-left{clear:left;float:left;margin-right:1em}main.bd-content #main-content img.align-right{clear:right;float:right;margin-left:1em}main.bd-content #main-content div.figure{width:100%;margin-bottom:1em;text-align:center}main.bd-content #main-content div.figure.align-left{text-align:left}main.bd-content #main-content div.figure.align-left p.caption{margin-left:0}main.bd-content #main-content div.figure.align-right{text-align:right}main.bd-content #main-content div.figure.align-right p.caption{margin-right:0}main.bd-content #main-content div.figure p.caption{margin:.5em 10%}main.bd-content #main-content div.figure.margin p.caption,main.bd-content #main-content div.figure.margin-caption p.caption{margin:.5em 0}main.bd-content #main-content div.figure.margin-caption p.caption{text-align:left}main.bd-content #main-content div.figure span.caption-number{font-weight:bold}main.bd-content #main-content div.figure span{font-size:.9rem}main.bd-content #main-content div.contents{padding:1em}main.bd-content #main-content div.contents p.topic-title{font-size:1.5em;padding:.5em 0 0 1em}main.bd-content #main-content p.centered{text-align:center}main.bd-content #main-content div.sphinx-tabs>div.sphinx-menu{padding:0}main.bd-content #main-content div.sphinx-tabs>div.sphinx-menu>a.item{width:auto;margin:0px 0px -1px 0px}main.bd-content #main-content span.brackets:before,main.bd-content #main-content a.brackets:before{content:"["}main.bd-content #main-content span.brackets:after,main.bd-content #main-content a.brackets:after{content:"]"}main.bd-content #main-content .footnote-reference,main.bd-content #main-content a.bibtex.internal{font-size:1em}main.bd-content #main-content dl.simple 
dd,main.bd-content #main-content dl.field-list dd{margin-left:1.5em}main.bd-content #main-content dl.simple dd:not(:last-child),main.bd-content #main-content dl.field-list dd:not(:last-child){margin-bottom:0px}main.bd-content #main-content dl.simple dd:not(:last-child) p:last-child,main.bd-content #main-content dl.field-list dd:not(:last-child) p:last-child{margin-bottom:0px}main.bd-content #main-content dl.glossary dd{margin-left:1.5em}main.bd-content #main-content dl.footnote span.fn-backref{font-size:1em;padding-left:.1em}main.bd-content #main-content dl.footnote dd{font-size:.9em;margin-left:3em}main.bd-content #main-content dl.citation{margin-left:3em}main.bd-content #main-content dl.footnote dt.label{float:left}main.bd-content #main-content dl.footnote dd p{padding-left:1.5em}main.bd-content #main-content dl.module,main.bd-content #main-content dl.class,main.bd-content #main-content dl.exception,main.bd-content #main-content dl.function,main.bd-content #main-content dl.decorator,main.bd-content #main-content dl.data,main.bd-content #main-content dl.method,main.bd-content #main-content dl.attribute{margin-bottom:24px}main.bd-content #main-content dl.module dt,main.bd-content #main-content dl.class dt,main.bd-content #main-content dl.exception dt,main.bd-content #main-content dl.function dt,main.bd-content #main-content dl.decorator dt,main.bd-content #main-content dl.data dt,main.bd-content #main-content dl.method dt,main.bd-content #main-content dl.attribute dt{font-weight:bold}main.bd-content #main-content dl.module dt .headerlink,main.bd-content #main-content dl.class dt .headerlink,main.bd-content #main-content dl.exception dt .headerlink,main.bd-content #main-content dl.function dt .headerlink,main.bd-content #main-content dl.decorator dt .headerlink,main.bd-content #main-content dl.data dt .headerlink,main.bd-content #main-content dl.method dt .headerlink,main.bd-content #main-content dl.attribute dt .headerlink{display:inline-block;font:normal normal normal 14px/1 FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;font-family:inherit;visibility:hidden;font-size:14px}main.bd-content #main-content dl.module dt .headerlink:before,main.bd-content #main-content dl.class dt .headerlink:before,main.bd-content #main-content dl.exception dt .headerlink:before,main.bd-content #main-content dl.function dt .headerlink:before,main.bd-content #main-content dl.decorator dt .headerlink:before,main.bd-content #main-content dl.data dt .headerlink:before,main.bd-content #main-content dl.method dt .headerlink:before,main.bd-content #main-content dl.attribute dt .headerlink:before{-webkit-font-smoothing:antialiased;font-family:"FontAwesome";display:inline-block;font-style:normal;font-weight:normal;line-height:1;text-decoration:inherit}main.bd-content #main-content dl.module dt .headerlink:after,main.bd-content #main-content dl.class dt .headerlink:after,main.bd-content #main-content dl.exception dt .headerlink:after,main.bd-content #main-content dl.function dt .headerlink:after,main.bd-content #main-content dl.decorator dt .headerlink:after,main.bd-content #main-content dl.data dt .headerlink:after,main.bd-content #main-content dl.method dt .headerlink:after,main.bd-content #main-content dl.attribute dt .headerlink:after{content:"";font-family:FontAwesome}main.bd-content #main-content dl.module dt .fa-pull-left.headerlink,main.bd-content #main-content dl.class dt .fa-pull-left.headerlink,main.bd-content #main-content dl.exception 
dt .fa-pull-left.headerlink,main.bd-content #main-content dl.function dt .fa-pull-left.headerlink,main.bd-content #main-content dl.decorator dt .fa-pull-left.headerlink,main.bd-content #main-content dl.data dt .fa-pull-left.headerlink,main.bd-content #main-content dl.method dt .fa-pull-left.headerlink,main.bd-content #main-content dl.attribute dt .fa-pull-left.headerlink{margin-right:.3em}main.bd-content #main-content dl.module dt .fa-pull-right.headerlink,main.bd-content #main-content dl.class dt .fa-pull-right.headerlink,main.bd-content #main-content dl.exception dt .fa-pull-right.headerlink,main.bd-content #main-content dl.function dt .fa-pull-right.headerlink,main.bd-content #main-content dl.decorator dt .fa-pull-right.headerlink,main.bd-content #main-content dl.data dt .fa-pull-right.headerlink,main.bd-content #main-content dl.method dt .fa-pull-right.headerlink,main.bd-content #main-content dl.attribute dt .fa-pull-right.headerlink{margin-left:.3em}main.bd-content #main-content dl.module dt .pull-left.headerlink,main.bd-content #main-content dl.class dt .pull-left.headerlink,main.bd-content #main-content dl.exception dt .pull-left.headerlink,main.bd-content #main-content dl.function dt .pull-left.headerlink,main.bd-content #main-content dl.decorator dt .pull-left.headerlink,main.bd-content #main-content dl.data dt .pull-left.headerlink,main.bd-content #main-content dl.method dt .pull-left.headerlink,main.bd-content #main-content dl.attribute dt .pull-left.headerlink{margin-right:.3em}main.bd-content #main-content dl.module dt .pull-right.headerlink,main.bd-content #main-content dl.class dt .pull-right.headerlink,main.bd-content #main-content dl.exception dt .pull-right.headerlink,main.bd-content #main-content dl.function dt .pull-right.headerlink,main.bd-content #main-content dl.decorator dt .pull-right.headerlink,main.bd-content #main-content dl.data dt .pull-right.headerlink,main.bd-content #main-content dl.method dt .pull-right.headerlink,main.bd-content #main-content dl.attribute dt .pull-right.headerlink{margin-left:.3em}main.bd-content #main-content dl.module dt a .headerlink,main.bd-content #main-content dl.class dt a .headerlink,main.bd-content #main-content dl.exception dt a .headerlink,main.bd-content #main-content dl.function dt a .headerlink,main.bd-content #main-content dl.decorator dt a .headerlink,main.bd-content #main-content dl.data dt a .headerlink,main.bd-content #main-content dl.method dt a .headerlink,main.bd-content #main-content dl.attribute dt a .headerlink{display:inline-block;text-decoration:inherit}main.bd-content #main-content dl.module dt .btn .headerlink,main.bd-content #main-content dl.class dt .btn .headerlink,main.bd-content #main-content dl.exception dt .btn .headerlink,main.bd-content #main-content dl.function dt .btn .headerlink,main.bd-content #main-content dl.decorator dt .btn .headerlink,main.bd-content #main-content dl.data dt .btn .headerlink,main.bd-content #main-content dl.method dt .btn .headerlink,main.bd-content #main-content dl.attribute dt .btn .headerlink{display:inline}main.bd-content #main-content dl.module dt .btn .fa-large.headerlink,main.bd-content #main-content dl.class dt .btn .fa-large.headerlink,main.bd-content #main-content dl.exception dt .btn .fa-large.headerlink,main.bd-content #main-content dl.function dt .btn .fa-large.headerlink,main.bd-content #main-content dl.decorator dt .btn .fa-large.headerlink,main.bd-content #main-content dl.data dt .btn .fa-large.headerlink,main.bd-content #main-content dl.method dt .btn 
.fa-large.headerlink,main.bd-content #main-content dl.attribute dt .btn .fa-large.headerlink{line-height:.9em}main.bd-content #main-content dl.module dt .btn .fa-spin.headerlink,main.bd-content #main-content dl.class dt .btn .fa-spin.headerlink,main.bd-content #main-content dl.exception dt .btn .fa-spin.headerlink,main.bd-content #main-content dl.function dt .btn .fa-spin.headerlink,main.bd-content #main-content dl.decorator dt .btn .fa-spin.headerlink,main.bd-content #main-content dl.data dt .btn .fa-spin.headerlink,main.bd-content #main-content dl.method dt .btn .fa-spin.headerlink,main.bd-content #main-content dl.attribute dt .btn .fa-spin.headerlink{display:inline-block}main.bd-content #main-content dl.module dt .nav .headerlink,main.bd-content #main-content dl.class dt .nav .headerlink,main.bd-content #main-content dl.exception dt .nav .headerlink,main.bd-content #main-content dl.function dt .nav .headerlink,main.bd-content #main-content dl.decorator dt .nav .headerlink,main.bd-content #main-content dl.data dt .nav .headerlink,main.bd-content #main-content dl.method dt .nav .headerlink,main.bd-content #main-content dl.attribute dt .nav .headerlink{display:inline}main.bd-content #main-content dl.module dt .nav .fa-large.headerlink,main.bd-content #main-content dl.class dt .nav .fa-large.headerlink,main.bd-content #main-content dl.exception dt .nav .fa-large.headerlink,main.bd-content #main-content dl.function dt .nav .fa-large.headerlink,main.bd-content #main-content dl.decorator dt .nav .fa-large.headerlink,main.bd-content #main-content dl.data dt .nav .fa-large.headerlink,main.bd-content #main-content dl.method dt .nav .fa-large.headerlink,main.bd-content #main-content dl.attribute dt .nav .fa-large.headerlink{line-height:.9em}main.bd-content #main-content dl.module dt .nav .fa-spin.headerlink,main.bd-content #main-content dl.class dt .nav .fa-spin.headerlink,main.bd-content #main-content dl.exception dt .nav .fa-spin.headerlink,main.bd-content #main-content dl.function dt .nav .fa-spin.headerlink,main.bd-content #main-content dl.decorator dt .nav .fa-spin.headerlink,main.bd-content #main-content dl.data dt .nav .fa-spin.headerlink,main.bd-content #main-content dl.method dt .nav .fa-spin.headerlink,main.bd-content #main-content dl.attribute dt .nav .fa-spin.headerlink{display:inline-block}main.bd-content #main-content dl.module dt .btn.headerlink:before,main.bd-content #main-content dl.class dt .btn.headerlink:before,main.bd-content #main-content dl.exception dt .btn.headerlink:before,main.bd-content #main-content dl.function dt .btn.headerlink:before,main.bd-content #main-content dl.decorator dt .btn.headerlink:before,main.bd-content #main-content dl.data dt .btn.headerlink:before,main.bd-content #main-content dl.method dt .btn.headerlink:before,main.bd-content #main-content dl.attribute dt .btn.headerlink:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s ease-in}main.bd-content #main-content dl.module dt .btn.headerlink:hover:before,main.bd-content #main-content dl.class dt .btn.headerlink:hover:before,main.bd-content #main-content dl.exception dt .btn.headerlink:hover:before,main.bd-content #main-content dl.function dt .btn.headerlink:hover:before,main.bd-content #main-content dl.decorator dt .btn.headerlink:hover:before,main.bd-content #main-content dl.data dt .btn.headerlink:hover:before,main.bd-content #main-content dl.method dt .btn.headerlink:hover:before,main.bd-content #main-content dl.attribute dt 
.btn.headerlink:hover:before{opacity:1}main.bd-content #main-content dl.module dt .btn-mini .headerlink:before,main.bd-content #main-content dl.class dt .btn-mini .headerlink:before,main.bd-content #main-content dl.exception dt .btn-mini .headerlink:before,main.bd-content #main-content dl.function dt .btn-mini .headerlink:before,main.bd-content #main-content dl.decorator dt .btn-mini .headerlink:before,main.bd-content #main-content dl.data dt .btn-mini .headerlink:before,main.bd-content #main-content dl.method dt .btn-mini .headerlink:before,main.bd-content #main-content dl.attribute dt .btn-mini .headerlink:before{font-size:14px;vertical-align:-15%}main.bd-content #main-content dl.module dt .rst-versions .rst-current-version .headerlink,main.bd-content #main-content dl.class dt .rst-versions .rst-current-version .headerlink,main.bd-content #main-content dl.exception dt .rst-versions .rst-current-version .headerlink,main.bd-content #main-content dl.function dt .rst-versions .rst-current-version .headerlink,main.bd-content #main-content dl.decorator dt .rst-versions .rst-current-version .headerlink,main.bd-content #main-content dl.data dt .rst-versions .rst-current-version .headerlink,main.bd-content #main-content dl.method dt .rst-versions .rst-current-version .headerlink,main.bd-content #main-content dl.attribute dt .rst-versions .rst-current-version .headerlink{color:#fcfcfc}main.bd-content #main-content dl.module dt:hover .headerlink:after,main.bd-content #main-content dl.class dt:hover .headerlink:after,main.bd-content #main-content dl.exception dt:hover .headerlink:after,main.bd-content #main-content dl.function dt:hover .headerlink:after,main.bd-content #main-content dl.decorator dt:hover .headerlink:after,main.bd-content #main-content dl.data dt:hover .headerlink:after,main.bd-content #main-content dl.method dt:hover .headerlink:after,main.bd-content #main-content dl.attribute dt:hover .headerlink:after{visibility:visible}main.bd-content #main-content dl.module p,main.bd-content #main-content dl.class p,main.bd-content #main-content dl.exception p,main.bd-content #main-content dl.function p,main.bd-content #main-content dl.decorator p,main.bd-content #main-content dl.data p,main.bd-content #main-content dl.method p,main.bd-content #main-content dl.attribute p{margin-bottom:12px !important}main.bd-content #main-content dl.module table,main.bd-content #main-content dl.class table,main.bd-content #main-content dl.exception table,main.bd-content #main-content dl.function table,main.bd-content #main-content dl.decorator table,main.bd-content #main-content dl.data table,main.bd-content #main-content dl.method table,main.bd-content #main-content dl.attribute table{margin-bottom:12px !important}main.bd-content #main-content dl.module ul,main.bd-content #main-content dl.class ul,main.bd-content #main-content dl.exception ul,main.bd-content #main-content dl.function ul,main.bd-content #main-content dl.decorator ul,main.bd-content #main-content dl.data ul,main.bd-content #main-content dl.method ul,main.bd-content #main-content dl.attribute ul{margin-bottom:12px !important}main.bd-content #main-content dl.module ol,main.bd-content #main-content dl.class ol,main.bd-content #main-content dl.exception ol,main.bd-content #main-content dl.function ol,main.bd-content #main-content dl.decorator ol,main.bd-content #main-content dl.data ol,main.bd-content #main-content dl.method ol,main.bd-content #main-content dl.attribute ol{margin-bottom:12px !important}main.bd-content #main-content dl.module 
dd,main.bd-content #main-content dl.class dd,main.bd-content #main-content dl.exception dd,main.bd-content #main-content dl.function dd,main.bd-content #main-content dl.decorator dd,main.bd-content #main-content dl.data dd,main.bd-content #main-content dl.method dd,main.bd-content #main-content dl.attribute dd{margin:0 0 12px 24px}main.bd-content #main-content dl.module:not(.docutils),main.bd-content #main-content dl.class:not(.docutils),main.bd-content #main-content dl.exception:not(.docutils),main.bd-content #main-content dl.function:not(.docutils),main.bd-content #main-content dl.decorator:not(.docutils),main.bd-content #main-content dl.data:not(.docutils),main.bd-content #main-content dl.method:not(.docutils),main.bd-content #main-content dl.attribute:not(.docutils){margin-bottom:24px}main.bd-content #main-content dl.module:not(.docutils) dt,main.bd-content #main-content dl.class:not(.docutils) dt,main.bd-content #main-content dl.exception:not(.docutils) dt,main.bd-content #main-content dl.function:not(.docutils) dt,main.bd-content #main-content dl.decorator:not(.docutils) dt,main.bd-content #main-content dl.data:not(.docutils) dt,main.bd-content #main-content dl.method:not(.docutils) dt,main.bd-content #main-content dl.attribute:not(.docutils) dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980B9;border-top:solid 3px #6ab0de;padding:6px;position:relative}main.bd-content #main-content dl.module:not(.docutils) dt:before,main.bd-content #main-content dl.class:not(.docutils) dt:before,main.bd-content #main-content dl.exception:not(.docutils) dt:before,main.bd-content #main-content dl.function:not(.docutils) dt:before,main.bd-content #main-content dl.decorator:not(.docutils) dt:before,main.bd-content #main-content dl.data:not(.docutils) dt:before,main.bd-content #main-content dl.method:not(.docutils) dt:before,main.bd-content #main-content dl.attribute:not(.docutils) dt:before{color:#6ab0de}main.bd-content #main-content dl.module:not(.docutils) dt .headerlink,main.bd-content #main-content dl.class:not(.docutils) dt .headerlink,main.bd-content #main-content dl.exception:not(.docutils) dt .headerlink,main.bd-content #main-content dl.function:not(.docutils) dt .headerlink,main.bd-content #main-content dl.decorator:not(.docutils) dt .headerlink,main.bd-content #main-content dl.data:not(.docutils) dt .headerlink,main.bd-content #main-content dl.method:not(.docutils) dt .headerlink,main.bd-content #main-content dl.attribute:not(.docutils) dt .headerlink{color:#404040;font-size:100% !important}main.bd-content #main-content dl.module:not(.docutils) dt:first-child,main.bd-content #main-content dl.class:not(.docutils) dt:first-child,main.bd-content #main-content dl.exception:not(.docutils) dt:first-child,main.bd-content #main-content dl.function:not(.docutils) dt:first-child,main.bd-content #main-content dl.decorator:not(.docutils) dt:first-child,main.bd-content #main-content dl.data:not(.docutils) dt:first-child,main.bd-content #main-content dl.method:not(.docutils) dt:first-child,main.bd-content #main-content dl.attribute:not(.docutils) dt:first-child{margin-top:0}main.bd-content #main-content dl.module:not(.docutils) dl dt,main.bd-content #main-content dl.class:not(.docutils) dl dt,main.bd-content #main-content dl.exception:not(.docutils) dl dt,main.bd-content #main-content dl.function:not(.docutils) dl dt,main.bd-content #main-content dl.decorator:not(.docutils) dl dt,main.bd-content #main-content dl.data:not(.docutils) dl dt,main.bd-content 
#main-content dl.method:not(.docutils) dl dt,main.bd-content #main-content dl.attribute:not(.docutils) dl dt{margin-bottom:6px;border:none;border-left:solid 3px #ccc;background:#f0f0f0;color:#555}main.bd-content #main-content dl.module:not(.docutils) dl dt .headerlink,main.bd-content #main-content dl.class:not(.docutils) dl dt .headerlink,main.bd-content #main-content dl.exception:not(.docutils) dl dt .headerlink,main.bd-content #main-content dl.function:not(.docutils) dl dt .headerlink,main.bd-content #main-content dl.decorator:not(.docutils) dl dt .headerlink,main.bd-content #main-content dl.data:not(.docutils) dl dt .headerlink,main.bd-content #main-content dl.method:not(.docutils) dl dt .headerlink,main.bd-content #main-content dl.attribute:not(.docutils) dl dt .headerlink{color:#404040;font-size:100% !important}main.bd-content #main-content dl.module:not(.docutils) tt,main.bd-content #main-content dl.class:not(.docutils) tt,main.bd-content #main-content dl.exception:not(.docutils) tt,main.bd-content #main-content dl.function:not(.docutils) tt,main.bd-content #main-content dl.decorator:not(.docutils) tt,main.bd-content #main-content dl.data:not(.docutils) tt,main.bd-content #main-content dl.method:not(.docutils) tt,main.bd-content #main-content dl.attribute:not(.docutils) tt{font-weight:bold;font-weight:bold}main.bd-content #main-content dl.module:not(.docutils) code,main.bd-content #main-content dl.class:not(.docutils) code,main.bd-content #main-content dl.exception:not(.docutils) code,main.bd-content #main-content dl.function:not(.docutils) code,main.bd-content #main-content dl.decorator:not(.docutils) code,main.bd-content #main-content dl.data:not(.docutils) code,main.bd-content #main-content dl.method:not(.docutils) code,main.bd-content #main-content dl.attribute:not(.docutils) code{font-weight:bold}main.bd-content #main-content dl.module:not(.docutils) tt.descname,main.bd-content #main-content dl.class:not(.docutils) tt.descname,main.bd-content #main-content dl.exception:not(.docutils) tt.descname,main.bd-content #main-content dl.function:not(.docutils) tt.descname,main.bd-content #main-content dl.decorator:not(.docutils) tt.descname,main.bd-content #main-content dl.data:not(.docutils) tt.descname,main.bd-content #main-content dl.method:not(.docutils) tt.descname,main.bd-content #main-content dl.attribute:not(.docutils) tt.descname{background-color:transparent;background-color:transparent;border:none;border:none;padding:0;padding:0;font-size:100% !important;font-size:100% !important;font-weight:bold;font-weight:bold}main.bd-content #main-content dl.module:not(.docutils) tt.descclassname,main.bd-content #main-content dl.class:not(.docutils) tt.descclassname,main.bd-content #main-content dl.exception:not(.docutils) tt.descclassname,main.bd-content #main-content dl.function:not(.docutils) tt.descclassname,main.bd-content #main-content dl.decorator:not(.docutils) tt.descclassname,main.bd-content #main-content dl.data:not(.docutils) tt.descclassname,main.bd-content #main-content dl.method:not(.docutils) tt.descclassname,main.bd-content #main-content dl.attribute:not(.docutils) tt.descclassname{background-color:transparent;background-color:transparent;border:none;border:none;padding:0;padding:0;font-size:100% !important;font-size:100% !important}main.bd-content #main-content dl.module:not(.docutils) code.descname,main.bd-content #main-content dl.class:not(.docutils) code.descname,main.bd-content #main-content dl.exception:not(.docutils) code.descname,main.bd-content #main-content 
dl.function:not(.docutils) code.descname,main.bd-content #main-content dl.decorator:not(.docutils) code.descname,main.bd-content #main-content dl.data:not(.docutils) code.descname,main.bd-content #main-content dl.method:not(.docutils) code.descname,main.bd-content #main-content dl.attribute:not(.docutils) code.descname{background-color:transparent;border:none;padding:0;font-size:100% !important;font-weight:bold}main.bd-content #main-content dl.module:not(.docutils) code.descclassname,main.bd-content #main-content dl.class:not(.docutils) code.descclassname,main.bd-content #main-content dl.exception:not(.docutils) code.descclassname,main.bd-content #main-content dl.function:not(.docutils) code.descclassname,main.bd-content #main-content dl.decorator:not(.docutils) code.descclassname,main.bd-content #main-content dl.data:not(.docutils) code.descclassname,main.bd-content #main-content dl.method:not(.docutils) code.descclassname,main.bd-content #main-content dl.attribute:not(.docutils) code.descclassname{background-color:transparent;border:none;padding:0;font-size:100% !important}main.bd-content #main-content dl.module:not(.docutils) .optional,main.bd-content #main-content dl.class:not(.docutils) .optional,main.bd-content #main-content dl.exception:not(.docutils) .optional,main.bd-content #main-content dl.function:not(.docutils) .optional,main.bd-content #main-content dl.decorator:not(.docutils) .optional,main.bd-content #main-content dl.data:not(.docutils) .optional,main.bd-content #main-content dl.method:not(.docutils) .optional,main.bd-content #main-content dl.attribute:not(.docutils) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:bold}main.bd-content #main-content dl.module:not(.docutils) .property,main.bd-content #main-content dl.class:not(.docutils) .property,main.bd-content #main-content dl.exception:not(.docutils) .property,main.bd-content #main-content dl.function:not(.docutils) .property,main.bd-content #main-content dl.decorator:not(.docutils) .property,main.bd-content #main-content dl.data:not(.docutils) .property,main.bd-content #main-content dl.method:not(.docutils) .property,main.bd-content #main-content dl.attribute:not(.docutils) .property{display:inline-block;padding-right:8px}main.bd-content #main-content dl.module .viewcode-link,main.bd-content #main-content dl.class .viewcode-link,main.bd-content #main-content dl.exception .viewcode-link,main.bd-content #main-content dl.function .viewcode-link,main.bd-content #main-content dl.decorator .viewcode-link,main.bd-content #main-content dl.data .viewcode-link,main.bd-content #main-content dl.method .viewcode-link,main.bd-content #main-content dl.attribute .viewcode-link{display:inline-block;color:#27AE60;font-size:80%;padding-left:24px}div.cell div.cell_output{padding-right:0}div.cell.tag_output_scroll div.cell_output,div.cell.tag_scroll-output div.cell_output{max-height:24em;overflow-y:auto}div.cell.tag_scroll-input div.cell_input{max-height:24em;overflow-y:auto}.highlighttable .linenos{vertical-align:baseline}.toggle.admonition button.toggle-button{top:0.5em !important}.admonition.seealso{border-color:#28a7464b}.admonition.seealso .admonition-title{background-color:rgba(40,167,70,0.1)}.admonition.seealso .admonition-title:before{color:#28a745;content:"\f064"}button.toggle-button-hidden:before{bottom:0.2em !important}div.sidebar,div.margin,div.margin-caption p.caption,.cell.tag_popout,.cell.tag_margin{width:40%;float:right;border-left:1px #a4a6a7 solid;margin-left:0.5em;padding:.2em 0 .2em 1em}div.sidebar 
p,div.margin p,div.margin-caption p.caption p,.cell.tag_popout p,.cell.tag_margin p{margin-bottom:0}div.sidebar p.sidebar-title,div.margin p.sidebar-title,div.margin-caption p.caption p.sidebar-title,.cell.tag_popout p.sidebar-title,.cell.tag_margin p.sidebar-title{font-weight:bold;font-size:1.2em}@media (min-width: 768px){div.cell.tag_popout,div.cell.tag_margin,div.margin,div.margin-caption p.caption{border:none;clear:right;width:31% !important;margin:0 -35% 0 0 !important;padding:0 !important;font-size:0.9rem;line-height:1.3;vertical-align:baseline;position:relative}div.cell.tag_popout p,div.cell.tag_margin p,div.margin p,div.margin-caption p.caption p{margin-bottom:.5em}div.cell.tag_popout p.sidebar-title,div.cell.tag_margin p.sidebar-title,div.margin p.sidebar-title,div.margin-caption p.caption p.sidebar-title{font-size:1em}div.cell.tag_margin .cell_output{padding-left:0}div.sidebar:not(.margin){width:60%;margin-left:1.5em;margin-right:-28%}}@media (min-width: 768px){div.cell.tag_full-width,div.cell.tag_full_width,div.full_width,div.full-width{width:136% !important}}blockquote{margin:1em;padding:.2em 1.5em;border-left:4px solid #ccc}blockquote.pull-quote,blockquote.epigraph,blockquote.highlights{font-size:1.25em;border-left:none}blockquote div>p{margin-bottom:.5em}blockquote div>p+p.attribution{font-style:normal;font-size:.9em;text-align:right;color:#6c757d;padding-right:2em}div.highlight{background:none;margin-bottom:1em}div.cell div.highlight{margin-bottom:0em}.thebelab-cell{border:none !important}button.thebe-launch-button{height:2.5em;font-size:1em}div.tableofcontents-wrapper p.caption{font-weight:600 !important;margin-bottom:0em !important}.topbar,.topbar-contents,.topbar-main{height:3em}.topbar{background-color:white;transition:left .2s}.scrolled .topbar{box-shadow:0 6px 6px -6px rgba(0,0,0,0.3)}.topbar .topbar-main{padding-top:0.25rem;padding-bottom:0.25rem;padding-right:0}.topbar .topbar-main>button,.topbar .topbar-main>div,.topbar .topbar-main>a{float:left;height:100%}.topbar .topbar-main button.topbarbtn{margin:0 .1em;background-color:white;color:#5a5a5a;border:none;padding-top:.1rem;padding-bottom:.1rem;font-size:1.4em}.topbar .topbar-main button.topbarbtn i.fab{vertical-align:baseline;line-height:1}.topbar .topbar-main div.dropdown-buttons-trigger,.topbar .topbar-main a.edit-button,.topbar .topbar-main a.full-screen-button{float:right}.bd-topbar-whitespace{width:275px;flex:auto;transition:flex 0.2s ease 0s}@media (max-width: 768px){.bd-topbar-whitespace{border-bottom:1px solid transparent}.bd-topbar-whitespace.show,.bd-topbar-whitespace.collapsing,body.scrolled .bd-topbar-whitespace{border-color:rgba(0,0,0,0.1);position:absolute;bottom:0;width:100%;display:block}}span.topbar-button-text{margin-left:0.4em}@media (max-width: 768px){span.topbar-button-text{display:none}}div.dropdown-buttons-trigger div.dropdown-buttons{display:none;position:absolute;max-width:130px;margin-top:.2em;z-index:1000}div.dropdown-buttons-trigger div.dropdown-buttons.sourcebuttons .topbarbtn i{padding-right:6px;margin-left:-5px;font-size:.9em !important}div.dropdown-buttons-trigger div.dropdown-buttons button.topbarbtn{padding-top:.35rem;padding-bottom:.35rem;min-width:120px !important;border:1px white solid !important;background-color:#5a5a5a;color:white;font-size:1em}div.dropdown-buttons-trigger:hover div.dropdown-buttons{display:block}a.dropdown-buttons i{margin-right:.5em}button.topbarbtn 
img{height:1.15em;padding-right:6px;margin-left:-5px}#navbar-toggler{position:relative;margin-right:1em;margin-left:.5em;color:#5a5a5a}#navbar-toggler i{transition:opacity .3s, transform .3s;position:absolute;top:16%;left:0;display:block;font-size:1.2em}#navbar-toggler i.fa-bars{opacity:0;transform:rotate(180deg) scale(0.5)}#navbar-toggler i.fa-arrow-left,#navbar-toggler i.fa-arrow-up{opacity:1}#navbar-toggler.collapsed i.fa-bars{opacity:1;transform:rotate(0) scale(1)}#navbar-toggler.collapsed i.fa-arrow-left,#navbar-toggler.collapsed i.fa-arrow-up{opacity:0;transform:rotate(-180deg) scale(0.5)}@media (max-width: 768px){#navbar-toggler i.fa-arrow-up{display:none}}@media (min-width: 768px){#navbar-toggler i.fa-arrow-up{display:none}#navbar-toggler i.fa-arrow-left{display:inherit}}@media (min-width: 768px){.bd-topbar-whitespace{max-width:275px}}.bd-toc{padding:0px !important;right:-1em;z-index:999;height:auto}.bd-toc div.onthispage,.bd-toc .toc-entry a{color:#5a5a5a}.bd-toc nav{opacity:0;max-height:0;transition:opacity 0.2s ease, max-height .7s ease;overflow-y:hidden;background:white;scrollbar-width:thin}.bd-toc nav::-webkit-scrollbar{width:5px}.bd-toc nav::-webkit-scrollbar{background:#f1f1f1}.bd-toc nav::-webkit-scrollbar-thumb{background:#c1c1c1}.bd-toc nav::-webkit-scrollbar-thumb:hover{background:#a0a0a0}@media (min-width: 992px){.bd-toc nav:not(:hover){-ms-overflow-style:none}.bd-toc nav:not(:hover)::-webkit-scrollbar{background:#FFFFFF}.bd-toc nav:not(:hover)::-webkit-scrollbar-thumb{background:#FFFFFF}}.bd-toc nav a:hover,.bd-toc nav li.active>a.active{color:#0071bc}.bd-toc nav li.active>a.active{border-left:2px solid #0071bc}.bd-toc nav>.nav{border-left:1px solid #eee}.bd-toc nav>.nav .nav{border-left:none}.bd-toc:hover nav,.bd-toc.show nav{max-height:100vh;opacity:1;overflow-y:auto}.bd-toc:hover .tocsection:after,.bd-toc.show .tocsection:after{opacity:0}.bd-toc .tocsection{padding:.5rem 0 .5rem 1rem !important}.bd-toc .tocsection:after{content:"\f107";font-family:"Font Awesome 5 Free";font-weight:900;padding-left:.5em;transition:opacity .3s ease}.bd-toc .toc-entry a{padding:.125rem 1rem !important}.bd-toc div.editthispage{display:none}#site-navigation{height:100vh !important;width:275px;flex:auto;top:0px !important;margin-left:0;overflow-y:auto;background:white;transition:margin-left .2s ease 0s, opacity .2s ease 0s, visibility .2s ease 0s;z-index:2000 !important;scrollbar-width:thin}#site-navigation::-webkit-scrollbar{width:5px}#site-navigation::-webkit-scrollbar{background:#f1f1f1}#site-navigation::-webkit-scrollbar-thumb{background:#c1c1c1}#site-navigation::-webkit-scrollbar-thumb:hover{background:#a0a0a0}@media (min-width: 992px){#site-navigation:not(:hover){-ms-overflow-style:none}#site-navigation:not(:hover)::-webkit-scrollbar{background:#FFFFFF}#site-navigation:not(:hover)::-webkit-scrollbar-thumb{background:#FFFFFF}}@media (max-width: 768px){#site-navigation{position:fixed;margin-top:3em;border-right:1px solid rgba(0,0,0,0.1)}#site-navigation.single-page{display:none}}#site-navigation nav ul.nav li a,#site-navigation nav ul.nav ul li a{color:#5a5a5a}#site-navigation nav ul.nav a:hover,#site-navigation nav ul.nav li.active>a,#site-navigation nav ul.nav li.active>a:hover{color:#0071bc}#site-navigation h1.site-logo{margin:.5em 0 0 0;font-size:1.1em;color:black;text-align:center}#site-navigation div.navbar_extra_footer{text-align:center;font-size:.9em;color:#5a5a5a;margin-bottom:3em}#site-navigation.single-page{border-right:0}@media (min-width: 
768px){div.navbar-brand-box{padding-top:2em}}div.navbar-brand-box a.navbar-brand{width:100%;height:auto}div.navbar-brand-box a.navbar-brand img{display:block;height:auto;width:auto;max-height:10vh;max-width:100%;margin:0 auto}@media (min-width: 768px){div.navbar-brand-box a.navbar-brand img{max-height:15vh !important}}nav.bd-links{margin-left:0px;overflow-y:visible;max-height:none}nav.bd-links p.caption,nav.bd-links .toctree-l1 a{padding-left:0em}@media (min-width: 768px){.bd-sidebar{max-width:275px}}.prev-next-bottom{height:3em}ul.ablog-archive{padding-left:0px}ul.postlist{padding-left:0}ul.postlist>li>p:first-child{font-size:1.5em}ul.postlist li+li{margin-top:2em}ul.postlist li>p>a{font-style:normal;font-size:1.3em}div.bd-sidebar h2{font-size:1.5em}div.bd-sidebar h3{font-size:1.4em}div.bd-sidebar>ul{list-style:none;padding-left:0}@media print{.tag_popout,div.margin{float:right;clear:right;width:50%;margin-right:-56%;margin-top:0;margin-bottom:0;padding-right:1em;font-size:0.9rem;line-height:1.3;vertical-align:baseline;position:relative;border-left:none;padding-left:0}.bd-content div#main-content>div{flex:0 0 75%;max-width:75%}h1,h2,h3,h4{break-after:avoid}table{break-inside:avoid}pre{word-wrap:break-word}a.copybtn,a.headerlink{display:none}.tag-fullwidth{width:145%;clear:both}div.toggle-hidden{visibility:inherit;opacity:1;height:auto}button.toggle-button{display:none}blockquote.epigraph{border:none}div.container{min-width:50% !important}div.bd-sidebar,div.prev-next-bottom{display:none}div.topbar{height:0;padding:0;position:inherit}div.topbar div.topbar-main{opacity:0}div.topbar div.bd-toc{flex:0 0 25%;max-width:25%;height:auto !important}div.topbar div.bd-toc nav,div.topbar div.bd-toc nav>ul.nav,div.topbar div.bd-toc nav>ul.nav>li>ul.nav{opacity:1;display:block}div.topbar div.bd-toc .nav-link.active{font-weight:inherit;color:inherit;background-color:inherit;border-left:inherit}} diff --git a/0.4/_static/sphinx-book-theme.css b/0.4/_static/sphinx-book-theme.css new file mode 100644 index 00000000..6db3a762 --- /dev/null +++ b/0.4/_static/sphinx-book-theme.css @@ -0,0 +1 @@ +body{padding-top:0px !important}body img{max-width:100%}code{font-size:87.5% !important}main.bd-content{padding-top:3em !important;padding-bottom:0px !important}main.bd-content #main-content a.headerlink{opacity:0;margin-left:.2em}main.bd-content #main-content a.headerlink:hover{background-color:transparent;color:#0071bc;opacity:1 !important}main.bd-content #main-content a,main.bd-content #main-content a:visited{color:#0071bc}main.bd-content #main-content h1,main.bd-content #main-content h2,main.bd-content #main-content h3,main.bd-content #main-content h4,main.bd-content #main-content h5{color:black}main.bd-content #main-content h1:hover a.headerlink,main.bd-content #main-content h2:hover a.headerlink,main.bd-content #main-content h3:hover a.headerlink,main.bd-content #main-content h4:hover a.headerlink,main.bd-content #main-content h5:hover a.headerlink{opacity:.5}main.bd-content #main-content h1 a.toc-backref,main.bd-content #main-content h2 a.toc-backref,main.bd-content #main-content h3 a.toc-backref,main.bd-content #main-content h4 a.toc-backref,main.bd-content #main-content h5 a.toc-backref{color:inherit}main.bd-content #main-content div.section{padding-right:1em;overflow:visible !important}main.bd-content #main-content div.section ul p,main.bd-content #main-content div.section ol p{margin-bottom:0}main.bd-content #main-content span.eqno{float:right;font-size:1.2em}main.bd-content #main-content 
div.figure{width:100%;margin-bottom:1em;text-align:center}main.bd-content #main-content div.figure.align-left{text-align:left}main.bd-content #main-content div.figure.align-left p.caption{margin-left:0}main.bd-content #main-content div.figure.align-right{text-align:right}main.bd-content #main-content div.figure.align-right p.caption{margin-right:0}main.bd-content #main-content div.figure p.caption{margin:.5em 10%}main.bd-content #main-content div.figure.margin p.caption,main.bd-content #main-content div.figure.margin-caption p.caption{margin:.5em 0}main.bd-content #main-content div.figure.margin-caption p.caption{text-align:left}main.bd-content #main-content div.figure span.caption-number{font-weight:bold}main.bd-content #main-content div.figure span{font-size:.9rem}main.bd-content #main-content dl.glossary dd{margin-left:1.5em}main.bd-content #main-content div.contents{padding:1em}main.bd-content #main-content div.contents p.topic-title{font-size:1.5em;padding:.5em 0 0 1em}main.bd-content #main-content p.centered{text-align:center}main.bd-content #main-content div.sphinx-tabs>div.sphinx-menu{padding:0}main.bd-content #main-content div.sphinx-tabs>div.sphinx-menu>a.item{width:auto;margin:0px 0px -1px 0px}main.bd-content #main-content span.brackets:before,main.bd-content #main-content a.brackets:before{content:"["}main.bd-content #main-content span.brackets:after,main.bd-content #main-content a.brackets:after{content:"]"}main.bd-content #main-content .footnote-reference,main.bd-content #main-content a.bibtex.internal{font-size:1em}main.bd-content #main-content dl.footnote span.fn-backref{font-size:1em;padding-left:.1em}main.bd-content #main-content dl.footnote dd{font-size:.9em;margin-left:3em}main.bd-content #main-content dl.citation{margin-left:3em}main.bd-content #main-content dl.footnote dt.label{float:left}main.bd-content #main-content dl.footnote dd p{padding-left:1.5em}div.cell div.cell_output{padding-right:0}div.cell.tag_output_scroll div.cell_output{max-height:24em;overflow-y:auto}.toggle.admonition button.toggle-button{top:0.5em !important}button.toggle-button-hidden:before{bottom:0.2em !important}div.sidebar,div.margin,div.margin-caption p.caption,.cell.tag_popout,.cell.tag_margin{width:40%;float:right;border-left:1px #a4a6a7 solid;margin-left:0.5em;padding:.2em 0 .2em 1em}div.sidebar p,div.margin p,div.margin-caption p.caption p,.cell.tag_popout p,.cell.tag_margin p{margin-bottom:0}div.sidebar p.sidebar-title,div.margin p.sidebar-title,div.margin-caption p.caption p.sidebar-title,.cell.tag_popout p.sidebar-title,.cell.tag_margin p.sidebar-title{font-weight:bold;font-size:1.2em}@media (min-width: 768px){div.cell.tag_popout,div.cell.tag_margin,div.margin,div.margin-caption p.caption{border:none;clear:right;width:31% !important;margin:0 -35% 0 0 !important;padding:0 !important;font-size:0.9rem;line-height:1.3;vertical-align:baseline;position:relative}div.cell.tag_popout p,div.cell.tag_margin p,div.margin p,div.margin-caption p.caption p{margin-bottom:.5em}div.cell.tag_popout p.sidebar-title,div.cell.tag_margin p.sidebar-title,div.margin p.sidebar-title,div.margin-caption p.caption p.sidebar-title{font-size:1em}div.cell.tag_margin .cell_output{padding-left:0}div.sidebar:not(.margin){width:60%;margin-left:1.5em;margin-right:-28%}}@media (min-width: 768px){div.cell.tag_full-width,div.cell.tag_full_width,div.full_width,div.full-width{width:136% !important}}blockquote{margin:1em;padding:.2em 1.5em;border-left:4px solid 
#ccc}blockquote.pull-quote,blockquote.epigraph,blockquote.highlights{font-size:1.25em;border-left:none}blockquote div>p{margin-bottom:.5em}blockquote div>p+p.attribution{font-style:normal;font-size:.9em;text-align:right;color:#6c757d;padding-right:2em}div.highlight{background:none}.thebelab-cell{border:none !important}button.thebe-launch-button{height:2.5em;font-size:1em}div.tableofcontents-wrapper p.caption{font-weight:600 !important;margin-bottom:0em !important}.topbar{margin:0em auto 1em auto !important;padding-top:.25em;padding-bottom:.25em;background-color:white;height:3em;transition:left .2s}.topbar>div{height:2.5em;top:0px}.topbar .topbar-main>button,.topbar .topbar-main>div,.topbar .topbar-main>a{float:left;height:100%}.topbar .topbar-main button.topbarbtn{margin:0 .1em;background-color:white;color:#5a5a5a;border:none;padding-top:.1rem;padding-bottom:.1rem;font-size:1.4em}.topbar .topbar-main button.topbarbtn i.fab{vertical-align:baseline;line-height:1}.topbar .topbar-main div.dropdown-buttons-trigger,.topbar .topbar-main a.edit-button,.topbar .topbar-main a.full-screen-button{float:right}.bd-topbar-whitespace{padding-right:none}@media (max-width: 768px){.bd-topbar-whitespace{display:none}}span.topbar-button-text{margin-left:0.4em}@media (max-width: 768px){span.topbar-button-text{display:none}}div.dropdown-buttons-trigger div.dropdown-buttons{display:none;position:absolute;max-width:130px;margin-top:.2em;z-index:1000}div.dropdown-buttons-trigger div.dropdown-buttons.sourcebuttons .topbarbtn i{padding-right:6px;margin-left:-5px;font-size:.9em !important}div.dropdown-buttons-trigger div.dropdown-buttons button.topbarbtn{padding-top:.35rem;padding-bottom:.35rem;min-width:120px !important;border:1px white solid !important;background-color:#5a5a5a;color:white;font-size:1em}div.dropdown-buttons-trigger:hover div.dropdown-buttons{display:block}a.dropdown-buttons i{margin-right:.5em}button.topbarbtn img{height:1.15em;padding-right:6px;margin-left:-5px}#navbar-toggler{position:relative;margin-right:1em;margin-left:.5em;color:#5a5a5a}#navbar-toggler i{transition:opacity .3s, transform .3s;position:absolute;top:16%;left:0;display:block;font-size:1.2em}#navbar-toggler i.fa-bars{opacity:0;transform:rotate(180deg) scale(0.5)}#navbar-toggler i.fa-arrow-left,#navbar-toggler i.fa-arrow-up{opacity:1}#navbar-toggler.collapsed i.fa-bars{opacity:1;transform:rotate(0) scale(1)}#navbar-toggler.collapsed i.fa-arrow-left,#navbar-toggler.collapsed i.fa-arrow-up{opacity:0;transform:rotate(-180deg) scale(0.5)}@media (max-width: 768px){#navbar-toggler i.fa-arrow-up{display:inherit}#navbar-toggler i.fa-arrow-left{display:none}}@media (min-width: 768px){#navbar-toggler i.fa-arrow-up{display:none}#navbar-toggler i.fa-arrow-left{display:inherit}}.bd-toc{padding:0px !important;overflow-y:visible;background:white;right:0;z-index:999;transition:height .35s ease}.bd-toc div.onthispage,.bd-toc .toc-entry a{color:#5a5a5a}.bd-toc nav{opacity:0;max-height:0;transition:opacity 0.2s ease, max-height .7s ease;overflow-y:hidden;background:white}.bd-toc nav a:hover,.bd-toc nav li.active>a.active{color:#0071bc}.bd-toc nav li.active>a.active{border-left:2px solid #0071bc}.bd-toc:hover nav,.bd-toc.show nav{max-height:100vh;opacity:1}.bd-toc:hover .tocsection:after,.bd-toc.show .tocsection:after{opacity:0}.bd-toc .tocsection{padding:.5rem 0 .5rem 1rem !important}.bd-toc .tocsection:after{content:"\f107";font-family:"Font Awesome 5 Free";font-weight:900;padding-left:.5em;transition:opacity .3s ease}.bd-toc .toc-entry 
a{padding:.125rem 1rem !important}.bd-toc div.editthispage{display:none}.bd-sidebar{top:0px !important;overflow-y:auto;height:100vh !important;scrollbar-width:thin}.bd-sidebar nav ul.nav li a,.bd-sidebar nav ul.nav ul li a{color:#5a5a5a}.bd-sidebar nav ul.nav a:hover,.bd-sidebar nav ul.nav li.active>a,.bd-sidebar nav ul.nav li.active>a:hover{color:#0071bc}.bd-sidebar::-webkit-scrollbar{width:5px}.bd-sidebar::-webkit-scrollbar{background:#f1f1f1}.bd-sidebar::-webkit-scrollbar-thumb{background:#c1c1c1}@media (min-width: 992px){.bd-sidebar:not(:hover){-ms-overflow-style:none}.bd-sidebar:not(:hover)::-webkit-scrollbar{background:#FFFFFF}.bd-sidebar:not(:hover)::-webkit-scrollbar-thumb{background:#FFFFFF}}.bd-sidebar h1.site-logo{margin:.5em 0 0 0;font-size:1.1em;color:black;text-align:center}.bd-sidebar div.navbar_extra_footer{text-align:center;font-size:.9em;color:#5a5a5a;margin-bottom:3em}.bd-sidebar.collapsing{border:none;overflow:hidden;position:relative;padding-top:0}.bd-sidebar p.caption{margin-top:1.25em;margin-bottom:0;font-size:1.2em}.bd-sidebar li>a>i{font-size:.8em;margin-left:0.3em}.toc-h1,.toc-h2,.toc-h3,.toc-h4{font-size:1em}.toc-h1>a,.toc-h2>a,.toc-h3>a,.toc-h4>a{font-size:0.9em}.site-navigation,.site-navigation.collapsing{transition:flex .2s ease 0s, height .35s ease, opacity 0.2s ease}@media (max-width: 768px){#site-navigation{position:fixed;margin-top:3em;z-index:2000;background:white}}@media (max-width: 768px){.bd-sidebar{height:60vh !important;border-bottom:3px solid #c3c3c3}.bd-sidebar.collapsing{height:0px !important}.bd-sidebar.single-page{display:none}}@media (min-width: 768px){.bd-sidebar{z-index:2000 !important}.site-navigation.collapsing{flex:0 0 0px;padding:0}}@media (min-width: 768px){div.navbar-brand-box{padding-top:2em}}div.navbar-brand-box a.navbar-brand{width:100%;height:auto}div.navbar-brand-box a.navbar-brand img{display:block;height:auto;width:auto;max-height:10vh;max-width:100%;margin:0 auto}@media (min-width: 768px){div.navbar-brand-box a.navbar-brand img{max-height:15vh !important}}nav.bd-links{margin-left:0px;max-height:none !important}nav.bd-links p.caption{font-size:0.9em;text-transform:uppercase;font-weight:bold}nav.bd-links p.caption:first-child{margin-top:0}nav.bd-links ul{list-style:none}nav.bd-links li{width:100%}nav.bd-links li.toctree-l1,nav.bd-links li.toctree-l2,nav.bd-links li.toctree-l3,nav.bd-links li.toctree-l4,nav.bd-links li.toctree-l5{font-size:1em}nav.bd-links li.toctree-l1>a,nav.bd-links li.toctree-l2>a,nav.bd-links li.toctree-l3>a,nav.bd-links li.toctree-l4>a,nav.bd-links li.toctree-l5>a{font-size:0.9em}nav.bd-links>ul.nav{padding-left:0}nav.bd-links>ul.nav ul{padding:0 0 0 1rem}nav.bd-links>ul.nav a{padding:.25rem 0 !important}@media (min-width: 768px){.bd-sidebar,.bd-topbar-whitespace{max-width:275px}}.prev-next-bottom{height:3em}@media print{.tag_popout,div.margin{float:right;clear:right;width:50%;margin-right:-56%;margin-top:0;margin-bottom:0;padding-right:1em;font-size:0.9rem;line-height:1.3;vertical-align:baseline;position:relative;border-left:none;padding-left:0}.bd-content div#main-content>div{flex:0 0 75%;max-width:75%}h1,h2,h3,h4{break-after:avoid}table{break-inside:avoid}pre{word-wrap:break-word}a.copybtn,a.headerlink{display:none}.tag-fullwidth{width:145%;clear:both}div.toggle-hidden{visibility:inherit;opacity:1;height:auto}button.toggle-button{display:none}blockquote.epigraph{border:none}div.container{min-width:50% 
!important}div.bd-sidebar,div.prev-next-bottom{display:none}div.topbar{height:0;padding:0;position:inherit}div.topbar div.topbar-main{opacity:0}div.topbar div.bd-toc{flex:0 0 25%;max-width:25%;height:auto !important}div.topbar div.bd-toc nav,div.topbar div.bd-toc nav>ul.nav,div.topbar div.bd-toc nav>ul.nav>li>ul.nav{opacity:1;display:block}div.topbar div.bd-toc .nav-link.active{font-weight:inherit;color:inherit;background-color:inherit;border-left:inherit}} diff --git a/0.4/_static/sphinx_highlight.js b/0.4/_static/sphinx_highlight.js new file mode 100644 index 00000000..8a96c69a --- /dev/null +++ b/0.4/_static/sphinx_highlight.js @@ -0,0 +1,154 @@ +/* Highlighting utilities for Sphinx HTML documentation. */ +"use strict"; + +const SPHINX_HIGHLIGHT_ENABLED = true + +/** + * highlight a given string on a node by wrapping it in + * span elements with the given class name. + */ +const _highlight = (node, addItems, text, className) => { + if (node.nodeType === Node.TEXT_NODE) { + const val = node.nodeValue; + const parent = node.parentNode; + const pos = val.toLowerCase().indexOf(text); + if ( + pos >= 0 && + !parent.classList.contains(className) && + !parent.classList.contains("nohighlight") + ) { + let span; + + const closestNode = parent.closest("body, svg, foreignObject"); + const isInSVG = closestNode && closestNode.matches("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.classList.add(className); + } + + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + const rest = document.createTextNode(val.substr(pos + text.length)); + parent.insertBefore( + span, + parent.insertBefore( + rest, + node.nextSibling + ) + ); + node.nodeValue = val.substr(0, pos); + /* There may be more occurrences of search term in this node. So call this + * function recursively on the remaining fragment. + */ + _highlight(rest, addItems, text, className); + + if (isInSVG) { + const rect = document.createElementNS( + "http://www.w3.org/2000/svg", + "rect" + ); + const bbox = parent.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute("class", className); + addItems.push({ parent: parent, target: rect }); + } + } + } else if (node.matches && !node.matches("button, select, textarea")) { + node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); + } +}; +const _highlightText = (thisNode, text, className) => { + let addItems = []; + _highlight(thisNode, addItems, text, className); + addItems.forEach((obj) => + obj.parent.insertAdjacentElement("beforebegin", obj.target) + ); +}; + +/** + * Small JavaScript module for the documentation. 
+ */ +const SphinxHighlight = { + + /** + * highlight the search words provided in localstorage in the text + */ + highlightSearchWords: () => { + if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight + + // get and clear terms from localstorage + const url = new URL(window.location); + const highlight = + localStorage.getItem("sphinx_highlight_terms") + || url.searchParams.get("highlight") + || ""; + localStorage.removeItem("sphinx_highlight_terms") + url.searchParams.delete("highlight"); + window.history.replaceState({}, "", url); + + // get individual terms from highlight string + const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); + if (terms.length === 0) return; // nothing to do + + // There should never be more than one element matching "div.body" + const divBody = document.querySelectorAll("div.body"); + const body = divBody.length ? divBody[0] : document.querySelector("body"); + window.setTimeout(() => { + terms.forEach((term) => _highlightText(body, term, "highlighted")); + }, 10); + + const searchBox = document.getElementById("searchbox"); + if (searchBox === null) return; + searchBox.appendChild( + document + .createRange() + .createContextualFragment( + '<p class="highlight-link">' + + '<a href="javascript:SphinxHighlight.hideSearchWords()">' + + _("Hide Search Matches") + + "</a></p>" + ) + ); + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords: () => { + document + .querySelectorAll("#searchbox .highlight-link") + .forEach((el) => el.remove()); + document + .querySelectorAll("span.highlighted") + .forEach((el) => el.classList.remove("highlighted")); + localStorage.removeItem("sphinx_highlight_terms") + }, + + initEscapeListener: () => { + // only install a listener if it is really needed + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; + if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { + SphinxHighlight.hideSearchWords(); + event.preventDefault(); + } + }); + }, +}; + +_ready(() => { + /* Do not call highlightSearchWords() when we are on the search page. + * It will highlight words from the *previous* search query. + */ + if (typeof Search === "undefined") SphinxHighlight.highlightSearchWords(); + SphinxHighlight.initEscapeListener(); +}); diff --git a/0.4/_static/vendor/fontawesome/5.13.0/LICENSE.txt b/0.4/_static/vendor/fontawesome/5.13.0/LICENSE.txt new file mode 100644 index 00000000..f31bef92 --- /dev/null +++ b/0.4/_static/vendor/fontawesome/5.13.0/LICENSE.txt @@ -0,0 +1,34 @@ +Font Awesome Free License +------------------------- + +Font Awesome Free is free, open source, and GPL friendly. You can use it for +commercial projects, open source projects, or really almost whatever you want. +Full Font Awesome Free license: https://fontawesome.com/license/free. + +# Icons: CC BY 4.0 License (https://creativecommons.org/licenses/by/4.0/) +In the Font Awesome Free download, the CC BY 4.0 license applies to all icons +packaged as SVG and JS file types. + +# Fonts: SIL OFL 1.1 License (https://scripts.sil.org/OFL) +In the Font Awesome Free download, the SIL OFL license applies to all icons +packaged as web and desktop font files. + +# Code: MIT License (https://opensource.org/licenses/MIT) +In the Font Awesome Free download, the MIT license applies to all non-font and +non-icon files. 
+ +# Attribution +Attribution is required by MIT, SIL OFL, and CC BY licenses. Downloaded Font +Awesome Free files already contain embedded comments with sufficient +attribution, so you shouldn't need to do anything additional when using these +files normally. + +We've kept attribution comments terse, so we ask that you do not actively work +to remove them from files, especially code. They're a great way for folks to +learn about Font Awesome. + +# Brand Icons +All brand icons are trademarks of their respective owners. The use of these +trademarks does not indicate endorsement of the trademark holder by Font +Awesome, nor vice versa. **Please do not use brand logos for any purpose except +to represent the company, product, or service to which they refer.** diff --git a/0.4/_static/vendor/fontawesome/5.13.0/css/all.min.css b/0.4/_static/vendor/fontawesome/5.13.0/css/all.min.css new file mode 100644 index 00000000..3d28ab20 --- /dev/null +++ b/0.4/_static/vendor/fontawesome/5.13.0/css/all.min.css @@ -0,0 +1,5 @@ +/*! + * Font Awesome Free 5.13.0 by @fontawesome - https://fontawesome.com + * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) + */ +.fa,.fab,.fad,.fal,.far,.fas{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;display:inline-block;font-style:normal;font-variant:normal;text-rendering:auto;line-height:1}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-.0667em}.fa-xs{font-size:.75em}.fa-sm{font-size:.875em}.fa-1x{font-size:1em}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-6x{font-size:6em}.fa-7x{font-size:7em}.fa-8x{font-size:8em}.fa-9x{font-size:9em}.fa-10x{font-size:10em}.fa-fw{text-align:center;width:1.25em}.fa-ul{list-style-type:none;margin-left:2.5em;padding-left:0}.fa-ul>li{position:relative}.fa-li{left:-2em;position:absolute;text-align:center;width:2em;line-height:inherit}.fa-border{border:.08em solid #eee;border-radius:.1em;padding:.2em .25em .15em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa.fa-pull-left,.fab.fa-pull-left,.fal.fa-pull-left,.far.fa-pull-left,.fas.fa-pull-left{margin-right:.3em}.fa.fa-pull-right,.fab.fa-pull-right,.fal.fa-pull-right,.far.fa-pull-right,.fas.fa-pull-right{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(1turn);transform:rotate(1turn)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(1turn);transform:rotate(1turn)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-webkit-transform:scaleY(-1);transform:scaleY(-1)}.fa-flip-both,.fa-flip-horizontal.fa-flip-vertical,.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, 
mirror=1)"}.fa-flip-both,.fa-flip-horizontal.fa-flip-vertical{-webkit-transform:scale(-1);transform:scale(-1)}:root .fa-flip-both,:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270{-webkit-filter:none;filter:none}.fa-stack{display:inline-block;height:2em;line-height:2em;position:relative;vertical-align:middle;width:2.5em}.fa-stack-1x,.fa-stack-2x{left:0;position:absolute;text-align:center;width:100%}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-500px:before{content:"\f26e"}.fa-accessible-icon:before{content:"\f368"}.fa-accusoft:before{content:"\f369"}.fa-acquisitions-incorporated:before{content:"\f6af"}.fa-ad:before{content:"\f641"}.fa-address-book:before{content:"\f2b9"}.fa-address-card:before{content:"\f2bb"}.fa-adjust:before{content:"\f042"}.fa-adn:before{content:"\f170"}.fa-adobe:before{content:"\f778"}.fa-adversal:before{content:"\f36a"}.fa-affiliatetheme:before{content:"\f36b"}.fa-air-freshener:before{content:"\f5d0"}.fa-airbnb:before{content:"\f834"}.fa-algolia:before{content:"\f36c"}.fa-align-center:before{content:"\f037"}.fa-align-justify:before{content:"\f039"}.fa-align-left:before{content:"\f036"}.fa-align-right:before{content:"\f038"}.fa-alipay:before{content:"\f642"}.fa-allergies:before{content:"\f461"}.fa-amazon:before{content:"\f270"}.fa-amazon-pay:before{content:"\f42c"}.fa-ambulance:before{content:"\f0f9"}.fa-american-sign-language-interpreting:before{content:"\f2a3"}.fa-amilia:before{content:"\f36d"}.fa-anchor:before{content:"\f13d"}.fa-android:before{content:"\f17b"}.fa-angellist:before{content:"\f209"}.fa-angle-double-down:before{content:"\f103"}.fa-angle-double-left:before{content:"\f100"}.fa-angle-double-right:before{content:"\f101"}.fa-angle-double-up:before{content:"\f102"}.fa-angle-down:before{content:"\f107"}.fa-angle-left:before{content:"\f104"}.fa-angle-right:before{content:"\f105"}.fa-angle-up:before{content:"\f106"}.fa-angry:before{content:"\f556"}.fa-angrycreative:before{content:"\f36e"}.fa-angular:before{content:"\f420"}.fa-ankh:before{content:"\f644"}.fa-app-store:before{content:"\f36f"}.fa-app-store-ios:before{content:"\f370"}.fa-apper:before{content:"\f371"}.fa-apple:before{content:"\f179"}.fa-apple-alt:before{content:"\f5d1"}.fa-apple-pay:before{content:"\f415"}.fa-archive:before{content:"\f187"}.fa-archway:before{content:"\f557"}.fa-arrow-alt-circle-down:before{content:"\f358"}.fa-arrow-alt-circle-left:before{content:"\f359"}.fa-arrow-alt-circle-right:before{content:"\f35a"}.fa-arrow-alt-circle-up:before{content:"\f35b"}.fa-arrow-circle-down:before{content:"\f0ab"}.fa-arrow-circle-left:before{content:"\f0a8"}.fa-arrow-circle-right:before{content:"\f0a9"}.fa-arrow-circle-up:before{content:"\f0aa"}.fa-arrow-down:before{content:"\f063"}.fa-arrow-left:before{content:"\f060"}.fa-arrow-right:before{content:"\f061"}.fa-arrow-up:before{content:"\f062"}.fa-arrows-alt:before{content:"\f0b2"}.fa-arrows-alt-h:before{content:"\f337"}.fa-arrows-alt-v:before{content:"\f338"}.fa-artstation:before{content:"\f77a"}.fa-assistive-listening-systems:before{content:"\f2a2"}.fa-asterisk:before{content:"\f069"}.fa-asymmetrik:before{content:"\f372"}.fa-at:before{content:"\f1fa"}.fa-atlas:before{content:"\f558"}.fa-atlassian:before{content:"\f77b"}.fa-atom:before{content:"\f5d2"}.fa-audible:before{content:"\f373"}.fa-audio-description:before{content:"\f29e"}.fa-autoprefixer:before{content:"\f41c"}.fa-avianex:before{content:"\f374"}.fa-aviato:before{content:"\f421"}.fa-award:before
{content:"\f559"}.fa-aws:before{content:"\f375"}.fa-baby:before{content:"\f77c"}.fa-baby-carriage:before{content:"\f77d"}.fa-backspace:before{content:"\f55a"}.fa-backward:before{content:"\f04a"}.fa-bacon:before{content:"\f7e5"}.fa-bahai:before{content:"\f666"}.fa-balance-scale:before{content:"\f24e"}.fa-balance-scale-left:before{content:"\f515"}.fa-balance-scale-right:before{content:"\f516"}.fa-ban:before{content:"\f05e"}.fa-band-aid:before{content:"\f462"}.fa-bandcamp:before{content:"\f2d5"}.fa-barcode:before{content:"\f02a"}.fa-bars:before{content:"\f0c9"}.fa-baseball-ball:before{content:"\f433"}.fa-basketball-ball:before{content:"\f434"}.fa-bath:before{content:"\f2cd"}.fa-battery-empty:before{content:"\f244"}.fa-battery-full:before{content:"\f240"}.fa-battery-half:before{content:"\f242"}.fa-battery-quarter:before{content:"\f243"}.fa-battery-three-quarters:before{content:"\f241"}.fa-battle-net:before{content:"\f835"}.fa-bed:before{content:"\f236"}.fa-beer:before{content:"\f0fc"}.fa-behance:before{content:"\f1b4"}.fa-behance-square:before{content:"\f1b5"}.fa-bell:before{content:"\f0f3"}.fa-bell-slash:before{content:"\f1f6"}.fa-bezier-curve:before{content:"\f55b"}.fa-bible:before{content:"\f647"}.fa-bicycle:before{content:"\f206"}.fa-biking:before{content:"\f84a"}.fa-bimobject:before{content:"\f378"}.fa-binoculars:before{content:"\f1e5"}.fa-biohazard:before{content:"\f780"}.fa-birthday-cake:before{content:"\f1fd"}.fa-bitbucket:before{content:"\f171"}.fa-bitcoin:before{content:"\f379"}.fa-bity:before{content:"\f37a"}.fa-black-tie:before{content:"\f27e"}.fa-blackberry:before{content:"\f37b"}.fa-blender:before{content:"\f517"}.fa-blender-phone:before{content:"\f6b6"}.fa-blind:before{content:"\f29d"}.fa-blog:before{content:"\f781"}.fa-blogger:before{content:"\f37c"}.fa-blogger-b:before{content:"\f37d"}.fa-bluetooth:before{content:"\f293"}.fa-bluetooth-b:before{content:"\f294"}.fa-bold:before{content:"\f032"}.fa-bolt:before{content:"\f0e7"}.fa-bomb:before{content:"\f1e2"}.fa-bone:before{content:"\f5d7"}.fa-bong:before{content:"\f55c"}.fa-book:before{content:"\f02d"}.fa-book-dead:before{content:"\f6b7"}.fa-book-medical:before{content:"\f7e6"}.fa-book-open:before{content:"\f518"}.fa-book-reader:before{content:"\f5da"}.fa-bookmark:before{content:"\f02e"}.fa-bootstrap:before{content:"\f836"}.fa-border-all:before{content:"\f84c"}.fa-border-none:before{content:"\f850"}.fa-border-style:before{content:"\f853"}.fa-bowling-ball:before{content:"\f436"}.fa-box:before{content:"\f466"}.fa-box-open:before{content:"\f49e"}.fa-box-tissue:before{content:"\f95b"}.fa-boxes:before{content:"\f468"}.fa-braille:before{content:"\f2a1"}.fa-brain:before{content:"\f5dc"}.fa-bread-slice:before{content:"\f7ec"}.fa-briefcase:before{content:"\f0b1"}.fa-briefcase-medical:before{content:"\f469"}.fa-broadcast-tower:before{content:"\f519"}.fa-broom:before{content:"\f51a"}.fa-brush:before{content:"\f55d"}.fa-btc:before{content:"\f15a"}.fa-buffer:before{content:"\f837"}.fa-bug:before{content:"\f188"}.fa-building:before{content:"\f1ad"}.fa-bullhorn:before{content:"\f0a1"}.fa-bullseye:before{content:"\f140"}.fa-burn:before{content:"\f46a"}.fa-buromobelexperte:before{content:"\f37f"}.fa-bus:before{content:"\f207"}.fa-bus-alt:before{content:"\f55e"}.fa-business-time:before{content:"\f64a"}.fa-buy-n-large:before{content:"\f8a6"}.fa-buysellads:before{content:"\f20d"}.fa-calculator:before{content:"\f1ec"}.fa-calendar:before{content:"\f133"}.fa-calendar-alt:before{content:"\f073"}.fa-calendar-check:before{content:"\f274"}.fa-calendar-day:be
fore{content:"\f783"}.fa-calendar-minus:before{content:"\f272"}.fa-calendar-plus:before{content:"\f271"}.fa-calendar-times:before{content:"\f273"}.fa-calendar-week:before{content:"\f784"}.fa-camera:before{content:"\f030"}.fa-camera-retro:before{content:"\f083"}.fa-campground:before{content:"\f6bb"}.fa-canadian-maple-leaf:before{content:"\f785"}.fa-candy-cane:before{content:"\f786"}.fa-cannabis:before{content:"\f55f"}.fa-capsules:before{content:"\f46b"}.fa-car:before{content:"\f1b9"}.fa-car-alt:before{content:"\f5de"}.fa-car-battery:before{content:"\f5df"}.fa-car-crash:before{content:"\f5e1"}.fa-car-side:before{content:"\f5e4"}.fa-caravan:before{content:"\f8ff"}.fa-caret-down:before{content:"\f0d7"}.fa-caret-left:before{content:"\f0d9"}.fa-caret-right:before{content:"\f0da"}.fa-caret-square-down:before{content:"\f150"}.fa-caret-square-left:before{content:"\f191"}.fa-caret-square-right:before{content:"\f152"}.fa-caret-square-up:before{content:"\f151"}.fa-caret-up:before{content:"\f0d8"}.fa-carrot:before{content:"\f787"}.fa-cart-arrow-down:before{content:"\f218"}.fa-cart-plus:before{content:"\f217"}.fa-cash-register:before{content:"\f788"}.fa-cat:before{content:"\f6be"}.fa-cc-amazon-pay:before{content:"\f42d"}.fa-cc-amex:before{content:"\f1f3"}.fa-cc-apple-pay:before{content:"\f416"}.fa-cc-diners-club:before{content:"\f24c"}.fa-cc-discover:before{content:"\f1f2"}.fa-cc-jcb:before{content:"\f24b"}.fa-cc-mastercard:before{content:"\f1f1"}.fa-cc-paypal:before{content:"\f1f4"}.fa-cc-stripe:before{content:"\f1f5"}.fa-cc-visa:before{content:"\f1f0"}.fa-centercode:before{content:"\f380"}.fa-centos:before{content:"\f789"}.fa-certificate:before{content:"\f0a3"}.fa-chair:before{content:"\f6c0"}.fa-chalkboard:before{content:"\f51b"}.fa-chalkboard-teacher:before{content:"\f51c"}.fa-charging-station:before{content:"\f5e7"}.fa-chart-area:before{content:"\f1fe"}.fa-chart-bar:before{content:"\f080"}.fa-chart-line:before{content:"\f201"}.fa-chart-pie:before{content:"\f200"}.fa-check:before{content:"\f00c"}.fa-check-circle:before{content:"\f058"}.fa-check-double:before{content:"\f560"}.fa-check-square:before{content:"\f14a"}.fa-cheese:before{content:"\f7ef"}.fa-chess:before{content:"\f439"}.fa-chess-bishop:before{content:"\f43a"}.fa-chess-board:before{content:"\f43c"}.fa-chess-king:before{content:"\f43f"}.fa-chess-knight:before{content:"\f441"}.fa-chess-pawn:before{content:"\f443"}.fa-chess-queen:before{content:"\f445"}.fa-chess-rook:before{content:"\f447"}.fa-chevron-circle-down:before{content:"\f13a"}.fa-chevron-circle-left:before{content:"\f137"}.fa-chevron-circle-right:before{content:"\f138"}.fa-chevron-circle-up:before{content:"\f139"}.fa-chevron-down:before{content:"\f078"}.fa-chevron-left:before{content:"\f053"}.fa-chevron-right:before{content:"\f054"}.fa-chevron-up:before{content:"\f077"}.fa-child:before{content:"\f1ae"}.fa-chrome:before{content:"\f268"}.fa-chromecast:before{content:"\f838"}.fa-church:before{content:"\f51d"}.fa-circle:before{content:"\f111"}.fa-circle-notch:before{content:"\f1ce"}.fa-city:before{content:"\f64f"}.fa-clinic-medical:before{content:"\f7f2"}.fa-clipboard:before{content:"\f328"}.fa-clipboard-check:before{content:"\f46c"}.fa-clipboard-list:before{content:"\f46d"}.fa-clock:before{content:"\f017"}.fa-clone:before{content:"\f24d"}.fa-closed-captioning:before{content:"\f20a"}.fa-cloud:before{content:"\f0c2"}.fa-cloud-download-alt:before{content:"\f381"}.fa-cloud-meatball:before{content:"\f73b"}.fa-cloud-moon:before{content:"\f6c3"}.fa-cloud-moon-rain:before{content:"\f73c"}.fa-clo
ud-rain:before{content:"\f73d"}.fa-cloud-showers-heavy:before{content:"\f740"}.fa-cloud-sun:before{content:"\f6c4"}.fa-cloud-sun-rain:before{content:"\f743"}.fa-cloud-upload-alt:before{content:"\f382"}.fa-cloudscale:before{content:"\f383"}.fa-cloudsmith:before{content:"\f384"}.fa-cloudversify:before{content:"\f385"}.fa-cocktail:before{content:"\f561"}.fa-code:before{content:"\f121"}.fa-code-branch:before{content:"\f126"}.fa-codepen:before{content:"\f1cb"}.fa-codiepie:before{content:"\f284"}.fa-coffee:before{content:"\f0f4"}.fa-cog:before{content:"\f013"}.fa-cogs:before{content:"\f085"}.fa-coins:before{content:"\f51e"}.fa-columns:before{content:"\f0db"}.fa-comment:before{content:"\f075"}.fa-comment-alt:before{content:"\f27a"}.fa-comment-dollar:before{content:"\f651"}.fa-comment-dots:before{content:"\f4ad"}.fa-comment-medical:before{content:"\f7f5"}.fa-comment-slash:before{content:"\f4b3"}.fa-comments:before{content:"\f086"}.fa-comments-dollar:before{content:"\f653"}.fa-compact-disc:before{content:"\f51f"}.fa-compass:before{content:"\f14e"}.fa-compress:before{content:"\f066"}.fa-compress-alt:before{content:"\f422"}.fa-compress-arrows-alt:before{content:"\f78c"}.fa-concierge-bell:before{content:"\f562"}.fa-confluence:before{content:"\f78d"}.fa-connectdevelop:before{content:"\f20e"}.fa-contao:before{content:"\f26d"}.fa-cookie:before{content:"\f563"}.fa-cookie-bite:before{content:"\f564"}.fa-copy:before{content:"\f0c5"}.fa-copyright:before{content:"\f1f9"}.fa-cotton-bureau:before{content:"\f89e"}.fa-couch:before{content:"\f4b8"}.fa-cpanel:before{content:"\f388"}.fa-creative-commons:before{content:"\f25e"}.fa-creative-commons-by:before{content:"\f4e7"}.fa-creative-commons-nc:before{content:"\f4e8"}.fa-creative-commons-nc-eu:before{content:"\f4e9"}.fa-creative-commons-nc-jp:before{content:"\f4ea"}.fa-creative-commons-nd:before{content:"\f4eb"}.fa-creative-commons-pd:before{content:"\f4ec"}.fa-creative-commons-pd-alt:before{content:"\f4ed"}.fa-creative-commons-remix:before{content:"\f4ee"}.fa-creative-commons-sa:before{content:"\f4ef"}.fa-creative-commons-sampling:before{content:"\f4f0"}.fa-creative-commons-sampling-plus:before{content:"\f4f1"}.fa-creative-commons-share:before{content:"\f4f2"}.fa-creative-commons-zero:before{content:"\f4f3"}.fa-credit-card:before{content:"\f09d"}.fa-critical-role:before{content:"\f6c9"}.fa-crop:before{content:"\f125"}.fa-crop-alt:before{content:"\f565"}.fa-cross:before{content:"\f654"}.fa-crosshairs:before{content:"\f05b"}.fa-crow:before{content:"\f520"}.fa-crown:before{content:"\f521"}.fa-crutch:before{content:"\f7f7"}.fa-css3:before{content:"\f13c"}.fa-css3-alt:before{content:"\f38b"}.fa-cube:before{content:"\f1b2"}.fa-cubes:before{content:"\f1b3"}.fa-cut:before{content:"\f0c4"}.fa-cuttlefish:before{content:"\f38c"}.fa-d-and-d:before{content:"\f38d"}.fa-d-and-d-beyond:before{content:"\f6ca"}.fa-dailymotion:before{content:"\f952"}.fa-dashcube:before{content:"\f210"}.fa-database:before{content:"\f1c0"}.fa-deaf:before{content:"\f2a4"}.fa-delicious:before{content:"\f1a5"}.fa-democrat:before{content:"\f747"}.fa-deploydog:before{content:"\f38e"}.fa-deskpro:before{content:"\f38f"}.fa-desktop:before{content:"\f108"}.fa-dev:before{content:"\f6cc"}.fa-deviantart:before{content:"\f1bd"}.fa-dharmachakra:before{content:"\f655"}.fa-dhl:before{content:"\f790"}.fa-diagnoses:before{content:"\f470"}.fa-diaspora:before{content:"\f791"}.fa-dice:before{content:"\f522"}.fa-dice-d20:before{content:"\f6cf"}.fa-dice-d6:before{content:"\f6d1"}.fa-dice-five:before{content:"\f523"}.fa-dice-
four:before{content:"\f524"}.fa-dice-one:before{content:"\f525"}.fa-dice-six:before{content:"\f526"}.fa-dice-three:before{content:"\f527"}.fa-dice-two:before{content:"\f528"}.fa-digg:before{content:"\f1a6"}.fa-digital-ocean:before{content:"\f391"}.fa-digital-tachograph:before{content:"\f566"}.fa-directions:before{content:"\f5eb"}.fa-discord:before{content:"\f392"}.fa-discourse:before{content:"\f393"}.fa-disease:before{content:"\f7fa"}.fa-divide:before{content:"\f529"}.fa-dizzy:before{content:"\f567"}.fa-dna:before{content:"\f471"}.fa-dochub:before{content:"\f394"}.fa-docker:before{content:"\f395"}.fa-dog:before{content:"\f6d3"}.fa-dollar-sign:before{content:"\f155"}.fa-dolly:before{content:"\f472"}.fa-dolly-flatbed:before{content:"\f474"}.fa-donate:before{content:"\f4b9"}.fa-door-closed:before{content:"\f52a"}.fa-door-open:before{content:"\f52b"}.fa-dot-circle:before{content:"\f192"}.fa-dove:before{content:"\f4ba"}.fa-download:before{content:"\f019"}.fa-draft2digital:before{content:"\f396"}.fa-drafting-compass:before{content:"\f568"}.fa-dragon:before{content:"\f6d5"}.fa-draw-polygon:before{content:"\f5ee"}.fa-dribbble:before{content:"\f17d"}.fa-dribbble-square:before{content:"\f397"}.fa-dropbox:before{content:"\f16b"}.fa-drum:before{content:"\f569"}.fa-drum-steelpan:before{content:"\f56a"}.fa-drumstick-bite:before{content:"\f6d7"}.fa-drupal:before{content:"\f1a9"}.fa-dumbbell:before{content:"\f44b"}.fa-dumpster:before{content:"\f793"}.fa-dumpster-fire:before{content:"\f794"}.fa-dungeon:before{content:"\f6d9"}.fa-dyalog:before{content:"\f399"}.fa-earlybirds:before{content:"\f39a"}.fa-ebay:before{content:"\f4f4"}.fa-edge:before{content:"\f282"}.fa-edit:before{content:"\f044"}.fa-egg:before{content:"\f7fb"}.fa-eject:before{content:"\f052"}.fa-elementor:before{content:"\f430"}.fa-ellipsis-h:before{content:"\f141"}.fa-ellipsis-v:before{content:"\f142"}.fa-ello:before{content:"\f5f1"}.fa-ember:before{content:"\f423"}.fa-empire:before{content:"\f1d1"}.fa-envelope:before{content:"\f0e0"}.fa-envelope-open:before{content:"\f2b6"}.fa-envelope-open-text:before{content:"\f658"}.fa-envelope-square:before{content:"\f199"}.fa-envira:before{content:"\f299"}.fa-equals:before{content:"\f52c"}.fa-eraser:before{content:"\f12d"}.fa-erlang:before{content:"\f39d"}.fa-ethereum:before{content:"\f42e"}.fa-ethernet:before{content:"\f796"}.fa-etsy:before{content:"\f2d7"}.fa-euro-sign:before{content:"\f153"}.fa-evernote:before{content:"\f839"}.fa-exchange-alt:before{content:"\f362"}.fa-exclamation:before{content:"\f12a"}.fa-exclamation-circle:before{content:"\f06a"}.fa-exclamation-triangle:before{content:"\f071"}.fa-expand:before{content:"\f065"}.fa-expand-alt:before{content:"\f424"}.fa-expand-arrows-alt:before{content:"\f31e"}.fa-expeditedssl:before{content:"\f23e"}.fa-external-link-alt:before{content:"\f35d"}.fa-external-link-square-alt:before{content:"\f360"}.fa-eye:before{content:"\f06e"}.fa-eye-dropper:before{content:"\f1fb"}.fa-eye-slash:before{content:"\f070"}.fa-facebook:before{content:"\f09a"}.fa-facebook-f:before{content:"\f39e"}.fa-facebook-messenger:before{content:"\f39f"}.fa-facebook-square:before{content:"\f082"}.fa-fan:before{content:"\f863"}.fa-fantasy-flight-games:before{content:"\f6dc"}.fa-fast-backward:before{content:"\f049"}.fa-fast-forward:before{content:"\f050"}.fa-faucet:before{content:"\f905"}.fa-fax:before{content:"\f1ac"}.fa-feather:before{content:"\f52d"}.fa-feather-alt:before{content:"\f56b"}.fa-fedex:before{content:"\f797"}.fa-fedora:before{content:"\f798"}.fa-female:before{content:"\f182"}.
fa-fighter-jet:before{content:"\f0fb"}.fa-figma:before{content:"\f799"}.fa-file:before{content:"\f15b"}.fa-file-alt:before{content:"\f15c"}.fa-file-archive:before{content:"\f1c6"}.fa-file-audio:before{content:"\f1c7"}.fa-file-code:before{content:"\f1c9"}.fa-file-contract:before{content:"\f56c"}.fa-file-csv:before{content:"\f6dd"}.fa-file-download:before{content:"\f56d"}.fa-file-excel:before{content:"\f1c3"}.fa-file-export:before{content:"\f56e"}.fa-file-image:before{content:"\f1c5"}.fa-file-import:before{content:"\f56f"}.fa-file-invoice:before{content:"\f570"}.fa-file-invoice-dollar:before{content:"\f571"}.fa-file-medical:before{content:"\f477"}.fa-file-medical-alt:before{content:"\f478"}.fa-file-pdf:before{content:"\f1c1"}.fa-file-powerpoint:before{content:"\f1c4"}.fa-file-prescription:before{content:"\f572"}.fa-file-signature:before{content:"\f573"}.fa-file-upload:before{content:"\f574"}.fa-file-video:before{content:"\f1c8"}.fa-file-word:before{content:"\f1c2"}.fa-fill:before{content:"\f575"}.fa-fill-drip:before{content:"\f576"}.fa-film:before{content:"\f008"}.fa-filter:before{content:"\f0b0"}.fa-fingerprint:before{content:"\f577"}.fa-fire:before{content:"\f06d"}.fa-fire-alt:before{content:"\f7e4"}.fa-fire-extinguisher:before{content:"\f134"}.fa-firefox:before{content:"\f269"}.fa-firefox-browser:before{content:"\f907"}.fa-first-aid:before{content:"\f479"}.fa-first-order:before{content:"\f2b0"}.fa-first-order-alt:before{content:"\f50a"}.fa-firstdraft:before{content:"\f3a1"}.fa-fish:before{content:"\f578"}.fa-fist-raised:before{content:"\f6de"}.fa-flag:before{content:"\f024"}.fa-flag-checkered:before{content:"\f11e"}.fa-flag-usa:before{content:"\f74d"}.fa-flask:before{content:"\f0c3"}.fa-flickr:before{content:"\f16e"}.fa-flipboard:before{content:"\f44d"}.fa-flushed:before{content:"\f579"}.fa-fly:before{content:"\f417"}.fa-folder:before{content:"\f07b"}.fa-folder-minus:before{content:"\f65d"}.fa-folder-open:before{content:"\f07c"}.fa-folder-plus:before{content:"\f65e"}.fa-font:before{content:"\f031"}.fa-font-awesome:before{content:"\f2b4"}.fa-font-awesome-alt:before{content:"\f35c"}.fa-font-awesome-flag:before{content:"\f425"}.fa-font-awesome-logo-full:before{content:"\f4e6"}.fa-fonticons:before{content:"\f280"}.fa-fonticons-fi:before{content:"\f3a2"}.fa-football-ball:before{content:"\f44e"}.fa-fort-awesome:before{content:"\f286"}.fa-fort-awesome-alt:before{content:"\f3a3"}.fa-forumbee:before{content:"\f211"}.fa-forward:before{content:"\f04e"}.fa-foursquare:before{content:"\f180"}.fa-free-code-camp:before{content:"\f2c5"}.fa-freebsd:before{content:"\f3a4"}.fa-frog:before{content:"\f52e"}.fa-frown:before{content:"\f119"}.fa-frown-open:before{content:"\f57a"}.fa-fulcrum:before{content:"\f50b"}.fa-funnel-dollar:before{content:"\f662"}.fa-futbol:before{content:"\f1e3"}.fa-galactic-republic:before{content:"\f50c"}.fa-galactic-senate:before{content:"\f50d"}.fa-gamepad:before{content:"\f11b"}.fa-gas-pump:before{content:"\f52f"}.fa-gavel:before{content:"\f0e3"}.fa-gem:before{content:"\f3a5"}.fa-genderless:before{content:"\f22d"}.fa-get-pocket:before{content:"\f265"}.fa-gg:before{content:"\f260"}.fa-gg-circle:before{content:"\f261"}.fa-ghost:before{content:"\f6e2"}.fa-gift:before{content:"\f06b"}.fa-gifts:before{content:"\f79c"}.fa-git:before{content:"\f1d3"}.fa-git-alt:before{content:"\f841"}.fa-git-square:before{content:"\f1d2"}.fa-github:before{content:"\f09b"}.fa-github-alt:before{content:"\f113"}.fa-github-square:before{content:"\f092"}.fa-gitkraken:before{content:"\f3a6"}.fa-gitlab:before{conte
nt:"\f296"}.fa-gitter:before{content:"\f426"}.fa-glass-cheers:before{content:"\f79f"}.fa-glass-martini:before{content:"\f000"}.fa-glass-martini-alt:before{content:"\f57b"}.fa-glass-whiskey:before{content:"\f7a0"}.fa-glasses:before{content:"\f530"}.fa-glide:before{content:"\f2a5"}.fa-glide-g:before{content:"\f2a6"}.fa-globe:before{content:"\f0ac"}.fa-globe-africa:before{content:"\f57c"}.fa-globe-americas:before{content:"\f57d"}.fa-globe-asia:before{content:"\f57e"}.fa-globe-europe:before{content:"\f7a2"}.fa-gofore:before{content:"\f3a7"}.fa-golf-ball:before{content:"\f450"}.fa-goodreads:before{content:"\f3a8"}.fa-goodreads-g:before{content:"\f3a9"}.fa-google:before{content:"\f1a0"}.fa-google-drive:before{content:"\f3aa"}.fa-google-play:before{content:"\f3ab"}.fa-google-plus:before{content:"\f2b3"}.fa-google-plus-g:before{content:"\f0d5"}.fa-google-plus-square:before{content:"\f0d4"}.fa-google-wallet:before{content:"\f1ee"}.fa-gopuram:before{content:"\f664"}.fa-graduation-cap:before{content:"\f19d"}.fa-gratipay:before{content:"\f184"}.fa-grav:before{content:"\f2d6"}.fa-greater-than:before{content:"\f531"}.fa-greater-than-equal:before{content:"\f532"}.fa-grimace:before{content:"\f57f"}.fa-grin:before{content:"\f580"}.fa-grin-alt:before{content:"\f581"}.fa-grin-beam:before{content:"\f582"}.fa-grin-beam-sweat:before{content:"\f583"}.fa-grin-hearts:before{content:"\f584"}.fa-grin-squint:before{content:"\f585"}.fa-grin-squint-tears:before{content:"\f586"}.fa-grin-stars:before{content:"\f587"}.fa-grin-tears:before{content:"\f588"}.fa-grin-tongue:before{content:"\f589"}.fa-grin-tongue-squint:before{content:"\f58a"}.fa-grin-tongue-wink:before{content:"\f58b"}.fa-grin-wink:before{content:"\f58c"}.fa-grip-horizontal:before{content:"\f58d"}.fa-grip-lines:before{content:"\f7a4"}.fa-grip-lines-vertical:before{content:"\f7a5"}.fa-grip-vertical:before{content:"\f58e"}.fa-gripfire:before{content:"\f3ac"}.fa-grunt:before{content:"\f3ad"}.fa-guitar:before{content:"\f7a6"}.fa-gulp:before{content:"\f3ae"}.fa-h-square:before{content:"\f0fd"}.fa-hacker-news:before{content:"\f1d4"}.fa-hacker-news-square:before{content:"\f3af"}.fa-hackerrank:before{content:"\f5f7"}.fa-hamburger:before{content:"\f805"}.fa-hammer:before{content:"\f6e3"}.fa-hamsa:before{content:"\f665"}.fa-hand-holding:before{content:"\f4bd"}.fa-hand-holding-heart:before{content:"\f4be"}.fa-hand-holding-medical:before{content:"\f95c"}.fa-hand-holding-usd:before{content:"\f4c0"}.fa-hand-holding-water:before{content:"\f4c1"}.fa-hand-lizard:before{content:"\f258"}.fa-hand-middle-finger:before{content:"\f806"}.fa-hand-paper:before{content:"\f256"}.fa-hand-peace:before{content:"\f25b"}.fa-hand-point-down:before{content:"\f0a7"}.fa-hand-point-left:before{content:"\f0a5"}.fa-hand-point-right:before{content:"\f0a4"}.fa-hand-point-up:before{content:"\f0a6"}.fa-hand-pointer:before{content:"\f25a"}.fa-hand-rock:before{content:"\f255"}.fa-hand-scissors:before{content:"\f257"}.fa-hand-sparkles:before{content:"\f95d"}.fa-hand-spock:before{content:"\f259"}.fa-hands:before{content:"\f4c2"}.fa-hands-helping:before{content:"\f4c4"}.fa-hands-wash:before{content:"\f95e"}.fa-handshake:before{content:"\f2b5"}.fa-handshake-alt-slash:before{content:"\f95f"}.fa-handshake-slash:before{content:"\f960"}.fa-hanukiah:before{content:"\f6e6"}.fa-hard-hat:before{content:"\f807"}.fa-hashtag:before{content:"\f292"}.fa-hat-cowboy:before{content:"\f8c0"}.fa-hat-cowboy-side:before{content:"\f8c1"}.fa-hat-wizard:before{content:"\f6e8"}.fa-hdd:before{content:"\f0a0"}.fa-head-side-cough:befor
e{content:"\f961"}.fa-head-side-cough-slash:before{content:"\f962"}.fa-head-side-mask:before{content:"\f963"}.fa-head-side-virus:before{content:"\f964"}.fa-heading:before{content:"\f1dc"}.fa-headphones:before{content:"\f025"}.fa-headphones-alt:before{content:"\f58f"}.fa-headset:before{content:"\f590"}.fa-heart:before{content:"\f004"}.fa-heart-broken:before{content:"\f7a9"}.fa-heartbeat:before{content:"\f21e"}.fa-helicopter:before{content:"\f533"}.fa-highlighter:before{content:"\f591"}.fa-hiking:before{content:"\f6ec"}.fa-hippo:before{content:"\f6ed"}.fa-hips:before{content:"\f452"}.fa-hire-a-helper:before{content:"\f3b0"}.fa-history:before{content:"\f1da"}.fa-hockey-puck:before{content:"\f453"}.fa-holly-berry:before{content:"\f7aa"}.fa-home:before{content:"\f015"}.fa-hooli:before{content:"\f427"}.fa-hornbill:before{content:"\f592"}.fa-horse:before{content:"\f6f0"}.fa-horse-head:before{content:"\f7ab"}.fa-hospital:before{content:"\f0f8"}.fa-hospital-alt:before{content:"\f47d"}.fa-hospital-symbol:before{content:"\f47e"}.fa-hospital-user:before{content:"\f80d"}.fa-hot-tub:before{content:"\f593"}.fa-hotdog:before{content:"\f80f"}.fa-hotel:before{content:"\f594"}.fa-hotjar:before{content:"\f3b1"}.fa-hourglass:before{content:"\f254"}.fa-hourglass-end:before{content:"\f253"}.fa-hourglass-half:before{content:"\f252"}.fa-hourglass-start:before{content:"\f251"}.fa-house-damage:before{content:"\f6f1"}.fa-house-user:before{content:"\f965"}.fa-houzz:before{content:"\f27c"}.fa-hryvnia:before{content:"\f6f2"}.fa-html5:before{content:"\f13b"}.fa-hubspot:before{content:"\f3b2"}.fa-i-cursor:before{content:"\f246"}.fa-ice-cream:before{content:"\f810"}.fa-icicles:before{content:"\f7ad"}.fa-icons:before{content:"\f86d"}.fa-id-badge:before{content:"\f2c1"}.fa-id-card:before{content:"\f2c2"}.fa-id-card-alt:before{content:"\f47f"}.fa-ideal:before{content:"\f913"}.fa-igloo:before{content:"\f7ae"}.fa-image:before{content:"\f03e"}.fa-images:before{content:"\f302"}.fa-imdb:before{content:"\f2d8"}.fa-inbox:before{content:"\f01c"}.fa-indent:before{content:"\f03c"}.fa-industry:before{content:"\f275"}.fa-infinity:before{content:"\f534"}.fa-info:before{content:"\f129"}.fa-info-circle:before{content:"\f05a"}.fa-instagram:before{content:"\f16d"}.fa-instagram-square:before{content:"\f955"}.fa-intercom:before{content:"\f7af"}.fa-internet-explorer:before{content:"\f26b"}.fa-invision:before{content:"\f7b0"}.fa-ioxhost:before{content:"\f208"}.fa-italic:before{content:"\f033"}.fa-itch-io:before{content:"\f83a"}.fa-itunes:before{content:"\f3b4"}.fa-itunes-note:before{content:"\f3b5"}.fa-java:before{content:"\f4e4"}.fa-jedi:before{content:"\f669"}.fa-jedi-order:before{content:"\f50e"}.fa-jenkins:before{content:"\f3b6"}.fa-jira:before{content:"\f7b1"}.fa-joget:before{content:"\f3b7"}.fa-joint:before{content:"\f595"}.fa-joomla:before{content:"\f1aa"}.fa-journal-whills:before{content:"\f66a"}.fa-js:before{content:"\f3b8"}.fa-js-square:before{content:"\f3b9"}.fa-jsfiddle:before{content:"\f1cc"}.fa-kaaba:before{content:"\f66b"}.fa-kaggle:before{content:"\f5fa"}.fa-key:before{content:"\f084"}.fa-keybase:before{content:"\f4f5"}.fa-keyboard:before{content:"\f11c"}.fa-keycdn:before{content:"\f3ba"}.fa-khanda:before{content:"\f66d"}.fa-kickstarter:before{content:"\f3bb"}.fa-kickstarter-k:before{content:"\f3bc"}.fa-kiss:before{content:"\f596"}.fa-kiss-beam:before{content:"\f597"}.fa-kiss-wink-heart:before{content:"\f598"}.fa-kiwi-bird:before{content:"\f535"}.fa-korvue:before{content:"\f42f"}.fa-landmark:before{content:"\f66f"}.fa-language:befo
re{content:"\f1ab"}.fa-laptop:before{content:"\f109"}.fa-laptop-code:before{content:"\f5fc"}.fa-laptop-house:before{content:"\f966"}.fa-laptop-medical:before{content:"\f812"}.fa-laravel:before{content:"\f3bd"}.fa-lastfm:before{content:"\f202"}.fa-lastfm-square:before{content:"\f203"}.fa-laugh:before{content:"\f599"}.fa-laugh-beam:before{content:"\f59a"}.fa-laugh-squint:before{content:"\f59b"}.fa-laugh-wink:before{content:"\f59c"}.fa-layer-group:before{content:"\f5fd"}.fa-leaf:before{content:"\f06c"}.fa-leanpub:before{content:"\f212"}.fa-lemon:before{content:"\f094"}.fa-less:before{content:"\f41d"}.fa-less-than:before{content:"\f536"}.fa-less-than-equal:before{content:"\f537"}.fa-level-down-alt:before{content:"\f3be"}.fa-level-up-alt:before{content:"\f3bf"}.fa-life-ring:before{content:"\f1cd"}.fa-lightbulb:before{content:"\f0eb"}.fa-line:before{content:"\f3c0"}.fa-link:before{content:"\f0c1"}.fa-linkedin:before{content:"\f08c"}.fa-linkedin-in:before{content:"\f0e1"}.fa-linode:before{content:"\f2b8"}.fa-linux:before{content:"\f17c"}.fa-lira-sign:before{content:"\f195"}.fa-list:before{content:"\f03a"}.fa-list-alt:before{content:"\f022"}.fa-list-ol:before{content:"\f0cb"}.fa-list-ul:before{content:"\f0ca"}.fa-location-arrow:before{content:"\f124"}.fa-lock:before{content:"\f023"}.fa-lock-open:before{content:"\f3c1"}.fa-long-arrow-alt-down:before{content:"\f309"}.fa-long-arrow-alt-left:before{content:"\f30a"}.fa-long-arrow-alt-right:before{content:"\f30b"}.fa-long-arrow-alt-up:before{content:"\f30c"}.fa-low-vision:before{content:"\f2a8"}.fa-luggage-cart:before{content:"\f59d"}.fa-lungs:before{content:"\f604"}.fa-lungs-virus:before{content:"\f967"}.fa-lyft:before{content:"\f3c3"}.fa-magento:before{content:"\f3c4"}.fa-magic:before{content:"\f0d0"}.fa-magnet:before{content:"\f076"}.fa-mail-bulk:before{content:"\f674"}.fa-mailchimp:before{content:"\f59e"}.fa-male:before{content:"\f183"}.fa-mandalorian:before{content:"\f50f"}.fa-map:before{content:"\f279"}.fa-map-marked:before{content:"\f59f"}.fa-map-marked-alt:before{content:"\f5a0"}.fa-map-marker:before{content:"\f041"}.fa-map-marker-alt:before{content:"\f3c5"}.fa-map-pin:before{content:"\f276"}.fa-map-signs:before{content:"\f277"}.fa-markdown:before{content:"\f60f"}.fa-marker:before{content:"\f5a1"}.fa-mars:before{content:"\f222"}.fa-mars-double:before{content:"\f227"}.fa-mars-stroke:before{content:"\f229"}.fa-mars-stroke-h:before{content:"\f22b"}.fa-mars-stroke-v:before{content:"\f22a"}.fa-mask:before{content:"\f6fa"}.fa-mastodon:before{content:"\f4f6"}.fa-maxcdn:before{content:"\f136"}.fa-mdb:before{content:"\f8ca"}.fa-medal:before{content:"\f5a2"}.fa-medapps:before{content:"\f3c6"}.fa-medium:before{content:"\f23a"}.fa-medium-m:before{content:"\f3c7"}.fa-medkit:before{content:"\f0fa"}.fa-medrt:before{content:"\f3c8"}.fa-meetup:before{content:"\f2e0"}.fa-megaport:before{content:"\f5a3"}.fa-meh:before{content:"\f11a"}.fa-meh-blank:before{content:"\f5a4"}.fa-meh-rolling-eyes:before{content:"\f5a5"}.fa-memory:before{content:"\f538"}.fa-mendeley:before{content:"\f7b3"}.fa-menorah:before{content:"\f676"}.fa-mercury:before{content:"\f223"}.fa-meteor:before{content:"\f753"}.fa-microblog:before{content:"\f91a"}.fa-microchip:before{content:"\f2db"}.fa-microphone:before{content:"\f130"}.fa-microphone-alt:before{content:"\f3c9"}.fa-microphone-alt-slash:before{content:"\f539"}.fa-microphone-slash:before{content:"\f131"}.fa-microscope:before{content:"\f610"}.fa-microsoft:before{content:"\f3ca"}.fa-minus:before{content:"\f068"}.fa-minus-circle:before{content:"\
f056"}.fa-minus-square:before{content:"\f146"}.fa-mitten:before{content:"\f7b5"}.fa-mix:before{content:"\f3cb"}.fa-mixcloud:before{content:"\f289"}.fa-mixer:before{content:"\f956"}.fa-mizuni:before{content:"\f3cc"}.fa-mobile:before{content:"\f10b"}.fa-mobile-alt:before{content:"\f3cd"}.fa-modx:before{content:"\f285"}.fa-monero:before{content:"\f3d0"}.fa-money-bill:before{content:"\f0d6"}.fa-money-bill-alt:before{content:"\f3d1"}.fa-money-bill-wave:before{content:"\f53a"}.fa-money-bill-wave-alt:before{content:"\f53b"}.fa-money-check:before{content:"\f53c"}.fa-money-check-alt:before{content:"\f53d"}.fa-monument:before{content:"\f5a6"}.fa-moon:before{content:"\f186"}.fa-mortar-pestle:before{content:"\f5a7"}.fa-mosque:before{content:"\f678"}.fa-motorcycle:before{content:"\f21c"}.fa-mountain:before{content:"\f6fc"}.fa-mouse:before{content:"\f8cc"}.fa-mouse-pointer:before{content:"\f245"}.fa-mug-hot:before{content:"\f7b6"}.fa-music:before{content:"\f001"}.fa-napster:before{content:"\f3d2"}.fa-neos:before{content:"\f612"}.fa-network-wired:before{content:"\f6ff"}.fa-neuter:before{content:"\f22c"}.fa-newspaper:before{content:"\f1ea"}.fa-nimblr:before{content:"\f5a8"}.fa-node:before{content:"\f419"}.fa-node-js:before{content:"\f3d3"}.fa-not-equal:before{content:"\f53e"}.fa-notes-medical:before{content:"\f481"}.fa-npm:before{content:"\f3d4"}.fa-ns8:before{content:"\f3d5"}.fa-nutritionix:before{content:"\f3d6"}.fa-object-group:before{content:"\f247"}.fa-object-ungroup:before{content:"\f248"}.fa-odnoklassniki:before{content:"\f263"}.fa-odnoklassniki-square:before{content:"\f264"}.fa-oil-can:before{content:"\f613"}.fa-old-republic:before{content:"\f510"}.fa-om:before{content:"\f679"}.fa-opencart:before{content:"\f23d"}.fa-openid:before{content:"\f19b"}.fa-opera:before{content:"\f26a"}.fa-optin-monster:before{content:"\f23c"}.fa-orcid:before{content:"\f8d2"}.fa-osi:before{content:"\f41a"}.fa-otter:before{content:"\f700"}.fa-outdent:before{content:"\f03b"}.fa-page4:before{content:"\f3d7"}.fa-pagelines:before{content:"\f18c"}.fa-pager:before{content:"\f815"}.fa-paint-brush:before{content:"\f1fc"}.fa-paint-roller:before{content:"\f5aa"}.fa-palette:before{content:"\f53f"}.fa-palfed:before{content:"\f3d8"}.fa-pallet:before{content:"\f482"}.fa-paper-plane:before{content:"\f1d8"}.fa-paperclip:before{content:"\f0c6"}.fa-parachute-box:before{content:"\f4cd"}.fa-paragraph:before{content:"\f1dd"}.fa-parking:before{content:"\f540"}.fa-passport:before{content:"\f5ab"}.fa-pastafarianism:before{content:"\f67b"}.fa-paste:before{content:"\f0ea"}.fa-patreon:before{content:"\f3d9"}.fa-pause:before{content:"\f04c"}.fa-pause-circle:before{content:"\f28b"}.fa-paw:before{content:"\f1b0"}.fa-paypal:before{content:"\f1ed"}.fa-peace:before{content:"\f67c"}.fa-pen:before{content:"\f304"}.fa-pen-alt:before{content:"\f305"}.fa-pen-fancy:before{content:"\f5ac"}.fa-pen-nib:before{content:"\f5ad"}.fa-pen-square:before{content:"\f14b"}.fa-pencil-alt:before{content:"\f303"}.fa-pencil-ruler:before{content:"\f5ae"}.fa-penny-arcade:before{content:"\f704"}.fa-people-arrows:before{content:"\f968"}.fa-people-carry:before{content:"\f4ce"}.fa-pepper-hot:before{content:"\f816"}.fa-percent:before{content:"\f295"}.fa-percentage:before{content:"\f541"}.fa-periscope:before{content:"\f3da"}.fa-person-booth:before{content:"\f756"}.fa-phabricator:before{content:"\f3db"}.fa-phoenix-framework:before{content:"\f3dc"}.fa-phoenix-squadron:before{content:"\f511"}.fa-phone:before{content:"\f095"}.fa-phone-alt:before{content:"\f879"}.fa-phone-slash:before{conten
t:"\f3dd"}.fa-phone-square:before{content:"\f098"}.fa-phone-square-alt:before{content:"\f87b"}.fa-phone-volume:before{content:"\f2a0"}.fa-photo-video:before{content:"\f87c"}.fa-php:before{content:"\f457"}.fa-pied-piper:before{content:"\f2ae"}.fa-pied-piper-alt:before{content:"\f1a8"}.fa-pied-piper-hat:before{content:"\f4e5"}.fa-pied-piper-pp:before{content:"\f1a7"}.fa-pied-piper-square:before{content:"\f91e"}.fa-piggy-bank:before{content:"\f4d3"}.fa-pills:before{content:"\f484"}.fa-pinterest:before{content:"\f0d2"}.fa-pinterest-p:before{content:"\f231"}.fa-pinterest-square:before{content:"\f0d3"}.fa-pizza-slice:before{content:"\f818"}.fa-place-of-worship:before{content:"\f67f"}.fa-plane:before{content:"\f072"}.fa-plane-arrival:before{content:"\f5af"}.fa-plane-departure:before{content:"\f5b0"}.fa-plane-slash:before{content:"\f969"}.fa-play:before{content:"\f04b"}.fa-play-circle:before{content:"\f144"}.fa-playstation:before{content:"\f3df"}.fa-plug:before{content:"\f1e6"}.fa-plus:before{content:"\f067"}.fa-plus-circle:before{content:"\f055"}.fa-plus-square:before{content:"\f0fe"}.fa-podcast:before{content:"\f2ce"}.fa-poll:before{content:"\f681"}.fa-poll-h:before{content:"\f682"}.fa-poo:before{content:"\f2fe"}.fa-poo-storm:before{content:"\f75a"}.fa-poop:before{content:"\f619"}.fa-portrait:before{content:"\f3e0"}.fa-pound-sign:before{content:"\f154"}.fa-power-off:before{content:"\f011"}.fa-pray:before{content:"\f683"}.fa-praying-hands:before{content:"\f684"}.fa-prescription:before{content:"\f5b1"}.fa-prescription-bottle:before{content:"\f485"}.fa-prescription-bottle-alt:before{content:"\f486"}.fa-print:before{content:"\f02f"}.fa-procedures:before{content:"\f487"}.fa-product-hunt:before{content:"\f288"}.fa-project-diagram:before{content:"\f542"}.fa-pump-medical:before{content:"\f96a"}.fa-pump-soap:before{content:"\f96b"}.fa-pushed:before{content:"\f3e1"}.fa-puzzle-piece:before{content:"\f12e"}.fa-python:before{content:"\f3e2"}.fa-qq:before{content:"\f1d6"}.fa-qrcode:before{content:"\f029"}.fa-question:before{content:"\f128"}.fa-question-circle:before{content:"\f059"}.fa-quidditch:before{content:"\f458"}.fa-quinscape:before{content:"\f459"}.fa-quora:before{content:"\f2c4"}.fa-quote-left:before{content:"\f10d"}.fa-quote-right:before{content:"\f10e"}.fa-quran:before{content:"\f687"}.fa-r-project:before{content:"\f4f7"}.fa-radiation:before{content:"\f7b9"}.fa-radiation-alt:before{content:"\f7ba"}.fa-rainbow:before{content:"\f75b"}.fa-random:before{content:"\f074"}.fa-raspberry-pi:before{content:"\f7bb"}.fa-ravelry:before{content:"\f2d9"}.fa-react:before{content:"\f41b"}.fa-reacteurope:before{content:"\f75d"}.fa-readme:before{content:"\f4d5"}.fa-rebel:before{content:"\f1d0"}.fa-receipt:before{content:"\f543"}.fa-record-vinyl:before{content:"\f8d9"}.fa-recycle:before{content:"\f1b8"}.fa-red-river:before{content:"\f3e3"}.fa-reddit:before{content:"\f1a1"}.fa-reddit-alien:before{content:"\f281"}.fa-reddit-square:before{content:"\f1a2"}.fa-redhat:before{content:"\f7bc"}.fa-redo:before{content:"\f01e"}.fa-redo-alt:before{content:"\f2f9"}.fa-registered:before{content:"\f25d"}.fa-remove-format:before{content:"\f87d"}.fa-renren:before{content:"\f18b"}.fa-reply:before{content:"\f3e5"}.fa-reply-all:before{content:"\f122"}.fa-replyd:before{content:"\f3e6"}.fa-republican:before{content:"\f75e"}.fa-researchgate:before{content:"\f4f8"}.fa-resolving:before{content:"\f3e7"}.fa-restroom:before{content:"\f7bd"}.fa-retweet:before{content:"\f079"}.fa-rev:before{content:"\f5b2"}.fa-ribbon:before{content:"\f4d6"}.fa-ring:
before{content:"\f70b"}.fa-road:before{content:"\f018"}.fa-robot:before{content:"\f544"}.fa-rocket:before{content:"\f135"}.fa-rocketchat:before{content:"\f3e8"}.fa-rockrms:before{content:"\f3e9"}.fa-route:before{content:"\f4d7"}.fa-rss:before{content:"\f09e"}.fa-rss-square:before{content:"\f143"}.fa-ruble-sign:before{content:"\f158"}.fa-ruler:before{content:"\f545"}.fa-ruler-combined:before{content:"\f546"}.fa-ruler-horizontal:before{content:"\f547"}.fa-ruler-vertical:before{content:"\f548"}.fa-running:before{content:"\f70c"}.fa-rupee-sign:before{content:"\f156"}.fa-sad-cry:before{content:"\f5b3"}.fa-sad-tear:before{content:"\f5b4"}.fa-safari:before{content:"\f267"}.fa-salesforce:before{content:"\f83b"}.fa-sass:before{content:"\f41e"}.fa-satellite:before{content:"\f7bf"}.fa-satellite-dish:before{content:"\f7c0"}.fa-save:before{content:"\f0c7"}.fa-schlix:before{content:"\f3ea"}.fa-school:before{content:"\f549"}.fa-screwdriver:before{content:"\f54a"}.fa-scribd:before{content:"\f28a"}.fa-scroll:before{content:"\f70e"}.fa-sd-card:before{content:"\f7c2"}.fa-search:before{content:"\f002"}.fa-search-dollar:before{content:"\f688"}.fa-search-location:before{content:"\f689"}.fa-search-minus:before{content:"\f010"}.fa-search-plus:before{content:"\f00e"}.fa-searchengin:before{content:"\f3eb"}.fa-seedling:before{content:"\f4d8"}.fa-sellcast:before{content:"\f2da"}.fa-sellsy:before{content:"\f213"}.fa-server:before{content:"\f233"}.fa-servicestack:before{content:"\f3ec"}.fa-shapes:before{content:"\f61f"}.fa-share:before{content:"\f064"}.fa-share-alt:before{content:"\f1e0"}.fa-share-alt-square:before{content:"\f1e1"}.fa-share-square:before{content:"\f14d"}.fa-shekel-sign:before{content:"\f20b"}.fa-shield-alt:before{content:"\f3ed"}.fa-shield-virus:before{content:"\f96c"}.fa-ship:before{content:"\f21a"}.fa-shipping-fast:before{content:"\f48b"}.fa-shirtsinbulk:before{content:"\f214"}.fa-shoe-prints:before{content:"\f54b"}.fa-shopify:before{content:"\f957"}.fa-shopping-bag:before{content:"\f290"}.fa-shopping-basket:before{content:"\f291"}.fa-shopping-cart:before{content:"\f07a"}.fa-shopware:before{content:"\f5b5"}.fa-shower:before{content:"\f2cc"}.fa-shuttle-van:before{content:"\f5b6"}.fa-sign:before{content:"\f4d9"}.fa-sign-in-alt:before{content:"\f2f6"}.fa-sign-language:before{content:"\f2a7"}.fa-sign-out-alt:before{content:"\f2f5"}.fa-signal:before{content:"\f012"}.fa-signature:before{content:"\f5b7"}.fa-sim-card:before{content:"\f7c4"}.fa-simplybuilt:before{content:"\f215"}.fa-sistrix:before{content:"\f3ee"}.fa-sitemap:before{content:"\f0e8"}.fa-sith:before{content:"\f512"}.fa-skating:before{content:"\f7c5"}.fa-sketch:before{content:"\f7c6"}.fa-skiing:before{content:"\f7c9"}.fa-skiing-nordic:before{content:"\f7ca"}.fa-skull:before{content:"\f54c"}.fa-skull-crossbones:before{content:"\f714"}.fa-skyatlas:before{content:"\f216"}.fa-skype:before{content:"\f17e"}.fa-slack:before{content:"\f198"}.fa-slack-hash:before{content:"\f3ef"}.fa-slash:before{content:"\f715"}.fa-sleigh:before{content:"\f7cc"}.fa-sliders-h:before{content:"\f1de"}.fa-slideshare:before{content:"\f1e7"}.fa-smile:before{content:"\f118"}.fa-smile-beam:before{content:"\f5b8"}.fa-smile-wink:before{content:"\f4da"}.fa-smog:before{content:"\f75f"}.fa-smoking:before{content:"\f48d"}.fa-smoking-ban:before{content:"\f54d"}.fa-sms:before{content:"\f7cd"}.fa-snapchat:before{content:"\f2ab"}.fa-snapchat-ghost:before{content:"\f2ac"}.fa-snapchat-square:before{content:"\f2ad"}.fa-snowboarding:before{content:"\f7ce"}.fa-snowflake:before{content:"\f2dc"}.f
a-snowman:before{content:"\f7d0"}.fa-snowplow:before{content:"\f7d2"}.fa-soap:before{content:"\f96e"}.fa-socks:before{content:"\f696"}.fa-solar-panel:before{content:"\f5ba"}.fa-sort:before{content:"\f0dc"}.fa-sort-alpha-down:before{content:"\f15d"}.fa-sort-alpha-down-alt:before{content:"\f881"}.fa-sort-alpha-up:before{content:"\f15e"}.fa-sort-alpha-up-alt:before{content:"\f882"}.fa-sort-amount-down:before{content:"\f160"}.fa-sort-amount-down-alt:before{content:"\f884"}.fa-sort-amount-up:before{content:"\f161"}.fa-sort-amount-up-alt:before{content:"\f885"}.fa-sort-down:before{content:"\f0dd"}.fa-sort-numeric-down:before{content:"\f162"}.fa-sort-numeric-down-alt:before{content:"\f886"}.fa-sort-numeric-up:before{content:"\f163"}.fa-sort-numeric-up-alt:before{content:"\f887"}.fa-sort-up:before{content:"\f0de"}.fa-soundcloud:before{content:"\f1be"}.fa-sourcetree:before{content:"\f7d3"}.fa-spa:before{content:"\f5bb"}.fa-space-shuttle:before{content:"\f197"}.fa-speakap:before{content:"\f3f3"}.fa-speaker-deck:before{content:"\f83c"}.fa-spell-check:before{content:"\f891"}.fa-spider:before{content:"\f717"}.fa-spinner:before{content:"\f110"}.fa-splotch:before{content:"\f5bc"}.fa-spotify:before{content:"\f1bc"}.fa-spray-can:before{content:"\f5bd"}.fa-square:before{content:"\f0c8"}.fa-square-full:before{content:"\f45c"}.fa-square-root-alt:before{content:"\f698"}.fa-squarespace:before{content:"\f5be"}.fa-stack-exchange:before{content:"\f18d"}.fa-stack-overflow:before{content:"\f16c"}.fa-stackpath:before{content:"\f842"}.fa-stamp:before{content:"\f5bf"}.fa-star:before{content:"\f005"}.fa-star-and-crescent:before{content:"\f699"}.fa-star-half:before{content:"\f089"}.fa-star-half-alt:before{content:"\f5c0"}.fa-star-of-david:before{content:"\f69a"}.fa-star-of-life:before{content:"\f621"}.fa-staylinked:before{content:"\f3f5"}.fa-steam:before{content:"\f1b6"}.fa-steam-square:before{content:"\f1b7"}.fa-steam-symbol:before{content:"\f3f6"}.fa-step-backward:before{content:"\f048"}.fa-step-forward:before{content:"\f051"}.fa-stethoscope:before{content:"\f0f1"}.fa-sticker-mule:before{content:"\f3f7"}.fa-sticky-note:before{content:"\f249"}.fa-stop:before{content:"\f04d"}.fa-stop-circle:before{content:"\f28d"}.fa-stopwatch:before{content:"\f2f2"}.fa-stopwatch-20:before{content:"\f96f"}.fa-store:before{content:"\f54e"}.fa-store-alt:before{content:"\f54f"}.fa-store-alt-slash:before{content:"\f970"}.fa-store-slash:before{content:"\f971"}.fa-strava:before{content:"\f428"}.fa-stream:before{content:"\f550"}.fa-street-view:before{content:"\f21d"}.fa-strikethrough:before{content:"\f0cc"}.fa-stripe:before{content:"\f429"}.fa-stripe-s:before{content:"\f42a"}.fa-stroopwafel:before{content:"\f551"}.fa-studiovinari:before{content:"\f3f8"}.fa-stumbleupon:before{content:"\f1a4"}.fa-stumbleupon-circle:before{content:"\f1a3"}.fa-subscript:before{content:"\f12c"}.fa-subway:before{content:"\f239"}.fa-suitcase:before{content:"\f0f2"}.fa-suitcase-rolling:before{content:"\f5c1"}.fa-sun:before{content:"\f185"}.fa-superpowers:before{content:"\f2dd"}.fa-superscript:before{content:"\f12b"}.fa-supple:before{content:"\f3f9"}.fa-surprise:before{content:"\f5c2"}.fa-suse:before{content:"\f7d6"}.fa-swatchbook:before{content:"\f5c3"}.fa-swift:before{content:"\f8e1"}.fa-swimmer:before{content:"\f5c4"}.fa-swimming-pool:before{content:"\f5c5"}.fa-symfony:before{content:"\f83d"}.fa-synagogue:before{content:"\f69b"}.fa-sync:before{content:"\f021"}.fa-sync-alt:before{content:"\f2f1"}.fa-syringe:before{content:"\f48e"}.fa-table:before{content:"\f0ce"}.fa-ta
ble-tennis:before{content:"\f45d"}.fa-tablet:before{content:"\f10a"}.fa-tablet-alt:before{content:"\f3fa"}.fa-tablets:before{content:"\f490"}.fa-tachometer-alt:before{content:"\f3fd"}.fa-tag:before{content:"\f02b"}.fa-tags:before{content:"\f02c"}.fa-tape:before{content:"\f4db"}.fa-tasks:before{content:"\f0ae"}.fa-taxi:before{content:"\f1ba"}.fa-teamspeak:before{content:"\f4f9"}.fa-teeth:before{content:"\f62e"}.fa-teeth-open:before{content:"\f62f"}.fa-telegram:before{content:"\f2c6"}.fa-telegram-plane:before{content:"\f3fe"}.fa-temperature-high:before{content:"\f769"}.fa-temperature-low:before{content:"\f76b"}.fa-tencent-weibo:before{content:"\f1d5"}.fa-tenge:before{content:"\f7d7"}.fa-terminal:before{content:"\f120"}.fa-text-height:before{content:"\f034"}.fa-text-width:before{content:"\f035"}.fa-th:before{content:"\f00a"}.fa-th-large:before{content:"\f009"}.fa-th-list:before{content:"\f00b"}.fa-the-red-yeti:before{content:"\f69d"}.fa-theater-masks:before{content:"\f630"}.fa-themeco:before{content:"\f5c6"}.fa-themeisle:before{content:"\f2b2"}.fa-thermometer:before{content:"\f491"}.fa-thermometer-empty:before{content:"\f2cb"}.fa-thermometer-full:before{content:"\f2c7"}.fa-thermometer-half:before{content:"\f2c9"}.fa-thermometer-quarter:before{content:"\f2ca"}.fa-thermometer-three-quarters:before{content:"\f2c8"}.fa-think-peaks:before{content:"\f731"}.fa-thumbs-down:before{content:"\f165"}.fa-thumbs-up:before{content:"\f164"}.fa-thumbtack:before{content:"\f08d"}.fa-ticket-alt:before{content:"\f3ff"}.fa-times:before{content:"\f00d"}.fa-times-circle:before{content:"\f057"}.fa-tint:before{content:"\f043"}.fa-tint-slash:before{content:"\f5c7"}.fa-tired:before{content:"\f5c8"}.fa-toggle-off:before{content:"\f204"}.fa-toggle-on:before{content:"\f205"}.fa-toilet:before{content:"\f7d8"}.fa-toilet-paper:before{content:"\f71e"}.fa-toilet-paper-slash:before{content:"\f972"}.fa-toolbox:before{content:"\f552"}.fa-tools:before{content:"\f7d9"}.fa-tooth:before{content:"\f5c9"}.fa-torah:before{content:"\f6a0"}.fa-torii-gate:before{content:"\f6a1"}.fa-tractor:before{content:"\f722"}.fa-trade-federation:before{content:"\f513"}.fa-trademark:before{content:"\f25c"}.fa-traffic-light:before{content:"\f637"}.fa-trailer:before{content:"\f941"}.fa-train:before{content:"\f238"}.fa-tram:before{content:"\f7da"}.fa-transgender:before{content:"\f224"}.fa-transgender-alt:before{content:"\f225"}.fa-trash:before{content:"\f1f8"}.fa-trash-alt:before{content:"\f2ed"}.fa-trash-restore:before{content:"\f829"}.fa-trash-restore-alt:before{content:"\f82a"}.fa-tree:before{content:"\f1bb"}.fa-trello:before{content:"\f181"}.fa-tripadvisor:before{content:"\f262"}.fa-trophy:before{content:"\f091"}.fa-truck:before{content:"\f0d1"}.fa-truck-loading:before{content:"\f4de"}.fa-truck-monster:before{content:"\f63b"}.fa-truck-moving:before{content:"\f4df"}.fa-truck-pickup:before{content:"\f63c"}.fa-tshirt:before{content:"\f553"}.fa-tty:before{content:"\f1e4"}.fa-tumblr:before{content:"\f173"}.fa-tumblr-square:before{content:"\f174"}.fa-tv:before{content:"\f26c"}.fa-twitch:before{content:"\f1e8"}.fa-twitter:before{content:"\f099"}.fa-twitter-square:before{content:"\f081"}.fa-typo3:before{content:"\f42b"}.fa-uber:before{content:"\f402"}.fa-ubuntu:before{content:"\f7df"}.fa-uikit:before{content:"\f403"}.fa-umbraco:before{content:"\f8e8"}.fa-umbrella:before{content:"\f0e9"}.fa-umbrella-beach:before{content:"\f5ca"}.fa-underline:before{content:"\f0cd"}.fa-undo:before{content:"\f0e2"}.fa-undo-alt:before{content:"\f2ea"}.fa-uniregistry:before{content:"
\f404"}.fa-unity:before{content:"\f949"}.fa-universal-access:before{content:"\f29a"}.fa-university:before{content:"\f19c"}.fa-unlink:before{content:"\f127"}.fa-unlock:before{content:"\f09c"}.fa-unlock-alt:before{content:"\f13e"}.fa-untappd:before{content:"\f405"}.fa-upload:before{content:"\f093"}.fa-ups:before{content:"\f7e0"}.fa-usb:before{content:"\f287"}.fa-user:before{content:"\f007"}.fa-user-alt:before{content:"\f406"}.fa-user-alt-slash:before{content:"\f4fa"}.fa-user-astronaut:before{content:"\f4fb"}.fa-user-check:before{content:"\f4fc"}.fa-user-circle:before{content:"\f2bd"}.fa-user-clock:before{content:"\f4fd"}.fa-user-cog:before{content:"\f4fe"}.fa-user-edit:before{content:"\f4ff"}.fa-user-friends:before{content:"\f500"}.fa-user-graduate:before{content:"\f501"}.fa-user-injured:before{content:"\f728"}.fa-user-lock:before{content:"\f502"}.fa-user-md:before{content:"\f0f0"}.fa-user-minus:before{content:"\f503"}.fa-user-ninja:before{content:"\f504"}.fa-user-nurse:before{content:"\f82f"}.fa-user-plus:before{content:"\f234"}.fa-user-secret:before{content:"\f21b"}.fa-user-shield:before{content:"\f505"}.fa-user-slash:before{content:"\f506"}.fa-user-tag:before{content:"\f507"}.fa-user-tie:before{content:"\f508"}.fa-user-times:before{content:"\f235"}.fa-users:before{content:"\f0c0"}.fa-users-cog:before{content:"\f509"}.fa-usps:before{content:"\f7e1"}.fa-ussunnah:before{content:"\f407"}.fa-utensil-spoon:before{content:"\f2e5"}.fa-utensils:before{content:"\f2e7"}.fa-vaadin:before{content:"\f408"}.fa-vector-square:before{content:"\f5cb"}.fa-venus:before{content:"\f221"}.fa-venus-double:before{content:"\f226"}.fa-venus-mars:before{content:"\f228"}.fa-viacoin:before{content:"\f237"}.fa-viadeo:before{content:"\f2a9"}.fa-viadeo-square:before{content:"\f2aa"}.fa-vial:before{content:"\f492"}.fa-vials:before{content:"\f493"}.fa-viber:before{content:"\f409"}.fa-video:before{content:"\f03d"}.fa-video-slash:before{content:"\f4e2"}.fa-vihara:before{content:"\f6a7"}.fa-vimeo:before{content:"\f40a"}.fa-vimeo-square:before{content:"\f194"}.fa-vimeo-v:before{content:"\f27d"}.fa-vine:before{content:"\f1ca"}.fa-virus:before{content:"\f974"}.fa-virus-slash:before{content:"\f975"}.fa-viruses:before{content:"\f976"}.fa-vk:before{content:"\f189"}.fa-vnv:before{content:"\f40b"}.fa-voicemail:before{content:"\f897"}.fa-volleyball-ball:before{content:"\f45f"}.fa-volume-down:before{content:"\f027"}.fa-volume-mute:before{content:"\f6a9"}.fa-volume-off:before{content:"\f026"}.fa-volume-up:before{content:"\f028"}.fa-vote-yea:before{content:"\f772"}.fa-vr-cardboard:before{content:"\f729"}.fa-vuejs:before{content:"\f41f"}.fa-walking:before{content:"\f554"}.fa-wallet:before{content:"\f555"}.fa-warehouse:before{content:"\f494"}.fa-water:before{content:"\f773"}.fa-wave-square:before{content:"\f83e"}.fa-waze:before{content:"\f83f"}.fa-weebly:before{content:"\f5cc"}.fa-weibo:before{content:"\f18a"}.fa-weight:before{content:"\f496"}.fa-weight-hanging:before{content:"\f5cd"}.fa-weixin:before{content:"\f1d7"}.fa-whatsapp:before{content:"\f232"}.fa-whatsapp-square:before{content:"\f40c"}.fa-wheelchair:before{content:"\f193"}.fa-whmcs:before{content:"\f40d"}.fa-wifi:before{content:"\f1eb"}.fa-wikipedia-w:before{content:"\f266"}.fa-wind:before{content:"\f72e"}.fa-window-close:before{content:"\f410"}.fa-window-maximize:before{content:"\f2d0"}.fa-window-minimize:before{content:"\f2d1"}.fa-window-restore:before{content:"\f2d2"}.fa-windows:before{content:"\f17a"}.fa-wine-bottle:before{content:"\f72f"}.fa-wine-glass:before{content:"\f4e3"}.
fa-wine-glass-alt:before{content:"\f5ce"}.fa-wix:before{content:"\f5cf"}.fa-wizards-of-the-coast:before{content:"\f730"}.fa-wolf-pack-battalion:before{content:"\f514"}.fa-won-sign:before{content:"\f159"}.fa-wordpress:before{content:"\f19a"}.fa-wordpress-simple:before{content:"\f411"}.fa-wpbeginner:before{content:"\f297"}.fa-wpexplorer:before{content:"\f2de"}.fa-wpforms:before{content:"\f298"}.fa-wpressr:before{content:"\f3e4"}.fa-wrench:before{content:"\f0ad"}.fa-x-ray:before{content:"\f497"}.fa-xbox:before{content:"\f412"}.fa-xing:before{content:"\f168"}.fa-xing-square:before{content:"\f169"}.fa-y-combinator:before{content:"\f23b"}.fa-yahoo:before{content:"\f19e"}.fa-yammer:before{content:"\f840"}.fa-yandex:before{content:"\f413"}.fa-yandex-international:before{content:"\f414"}.fa-yarn:before{content:"\f7e3"}.fa-yelp:before{content:"\f1e9"}.fa-yen-sign:before{content:"\f157"}.fa-yin-yang:before{content:"\f6ad"}.fa-yoast:before{content:"\f2b1"}.fa-youtube:before{content:"\f167"}.fa-youtube-square:before{content:"\f431"}.fa-zhihu:before{content:"\f63f"}.sr-only{border:0;clip:rect(0,0,0,0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.sr-only-focusable:active,.sr-only-focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}@font-face{font-family:"Font Awesome 5 Brands";font-style:normal;font-weight:400;font-display:block;src:url(../webfonts/fa-brands-400.eot);src:url(../webfonts/fa-brands-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-brands-400.woff2) format("woff2"),url(../webfonts/fa-brands-400.woff) format("woff"),url(../webfonts/fa-brands-400.ttf) format("truetype"),url(../webfonts/fa-brands-400.svg#fontawesome) format("svg")}.fab{font-family:"Font Awesome 5 Brands"}@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:400;font-display:block;src:url(../webfonts/fa-regular-400.eot);src:url(../webfonts/fa-regular-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-regular-400.woff2) format("woff2"),url(../webfonts/fa-regular-400.woff) format("woff"),url(../webfonts/fa-regular-400.ttf) format("truetype"),url(../webfonts/fa-regular-400.svg#fontawesome) format("svg")}.fab,.far{font-weight:400}@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:900;font-display:block;src:url(../webfonts/fa-solid-900.eot);src:url(../webfonts/fa-solid-900.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-solid-900.woff2) format("woff2"),url(../webfonts/fa-solid-900.woff) format("woff"),url(../webfonts/fa-solid-900.ttf) format("truetype"),url(../webfonts/fa-solid-900.svg#fontawesome) format("svg")}.fa,.far,.fas{font-family:"Font Awesome 5 Free"}.fa,.fas{font-weight:900} \ No newline at end of file diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.eot b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.eot new file mode 100644 index 00000000..a1bc094a Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.eot differ diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.svg b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.svg new file mode 100644 index 00000000..46ad237a --- /dev/null +++ b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.svg @@ -0,0 +1,3570 @@ + + + + + +Created by FontForge 20190801 at Mon Mar 23 10:45:51 2020 + By Robert Madole +Copyright (c) Font Awesome + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
[SVG glyph outlines from fa-brands-400.svg omitted]
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.ttf b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.ttf new file mode 100644 index 00000000..948a2a6c Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.ttf differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff new file mode 100644 index 00000000..2a89d521 Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2 b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2 new file mode 100644 index 00000000..141a90a9 Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2 differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.eot b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.eot new file mode 100644 index 00000000..38cf2517 Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.eot differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.svg b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.svg new file mode 100644 index 00000000..48634a9a --- /dev/null +++ b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.svg @@ -0,0 +1,803 @@
+Created by FontForge 20190801 at Mon Mar 23 10:45:51 2020
+By Robert Madole
+Copyright (c) Font Awesome
[SVG glyph outlines from fa-regular-400.svg omitted]
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.ttf b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.ttf new file mode 100644 index 00000000..abe99e20 Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.ttf differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.woff b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.woff new file mode 100644 index 00000000..24de566a Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.woff differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.woff2 b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.woff2 new file mode 100644 index 00000000..7e0118e5 Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-regular-400.woff2 differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.eot b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.eot new file mode 100644 index 00000000..d3b77c22 Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.eot differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.svg b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.svg new file mode 100644 index 00000000..7742838b --- /dev/null +++ b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.svg @@ -0,0 +1,4938 @@
+Created by FontForge 20190801 at Mon Mar 23 10:45:51 2020
+By Robert Madole
+Copyright (c) Font Awesome
[SVG glyph outlines from fa-solid-900.svg omitted]
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.ttf b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.ttf new file mode 100644 index 00000000..5b979039 Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.ttf differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff new file mode 100644 index 00000000..beec7917 Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff differ
diff --git a/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2 b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2 new file mode 100644 index 00000000..978a681a Binary files /dev/null and b/0.4/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2 differ
diff --git a/0.4/_static/webpack-macros.html b/0.4/_static/webpack-macros.html new file mode 100644 index 00000000..144f1885 --- /dev/null +++ b/0.4/_static/webpack-macros.html @@ -0,0 +1,25 @@
+{% macro head_pre_icons() %}
+{% endmacro %}
+{% macro head_pre_fonts() %}
+{% endmacro %}
+{% macro head_pre_bootstrap() %}
+{% endmacro %}
+{% macro head_js_preload() %}
+{% endmacro %}
+{% macro body_post() %}
+{% endmacro %}
\ No newline at end of file
diff --git a/0.4/api/collectors/index.html b/0.4/api/collectors/index.html new file mode 100644 index 00000000..6d91abda --- /dev/null +++ b/0.4/api/collectors/index.html @@ -0,0 +1,771 @@
[HTML page head and navigation markup omitted]
+11. Collectors & Extractors — MIPLearn 0.4
11. Collectors & Extractors

+
+

11.1. miplearn.classifiers.minprob

+
+
+class miplearn.classifiers.minprob.MinProbabilityClassifier(base_clf: ~typing.Any, thresholds: ~typing.List[float], clone_fn: ~typing.Callable[[~typing.Any], ~typing.Any] = <function clone>)
+

Bases: BaseEstimator

+

Meta-classifier that returns NaN for predictions made by a base classifier that +have probability below a given threshold. More specifically, this meta-classifier +calls base_clf.predict_proba and compares the result against the provided +thresholds. If the probability for one of the classes is above its threshold, +the meta-classifier returns that prediction. Otherwise, it returns NaN.
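A minimal usage sketch is shown below; the toy dataset, the 0.75 thresholds and the choice of LogisticRegression as base classifier are illustrative assumptions, not part of the API:

import numpy as np
from sklearn.linear_model import LogisticRegression
from miplearn.classifiers.minprob import MinProbabilityClassifier

# Toy binary dataset (illustrative only)
x = np.array([[0.0], [0.1], [0.9], [1.0]])
y = np.array([0, 0, 1, 1])

# Keep a prediction only if its class probability is at least 0.75;
# anything less confident is returned as NaN.
clf = MinProbabilityClassifier(
    base_clf=LogisticRegression(),
    thresholds=[0.75, 0.75],
)
clf.fit(x, y)
print(clf.predict(x))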

+
+
+fit(x: ndarray, y: ndarray) None
+
+ +
+
+predict(x: ndarray) ndarray
+
+ +
+
+set_fit_request(*, x: bool | None | str = '$UNCHANGED$') MinProbabilityClassifier
+

Request metadata passed to the fit method.

+

Note that this method is only relevant if +enable_metadata_routing=True (see sklearn.set_config()). +Please see User Guide on how the routing +mechanism works.

+

The options for each parameter are:

+
    +
  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • +
  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • +
  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • +
  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

  • +
+

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the +existing request. This allows you to change the request for some +parameters and not others.

+
+

New in version 1.3.

+
+
+

Note

+

This method is only relevant if this estimator is used as a +sub-estimator of a meta-estimator, e.g. used inside a +Pipeline. Otherwise it has no effect.

+
+
+
Parameters:
+

x (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for x parameter in fit.

+
+
Returns:
+

self – The updated object.

+
+
Return type:
+

object

+
+
+
+ +
+
+set_predict_request(*, x: bool | None | str = '$UNCHANGED$') MinProbabilityClassifier
+

Request metadata passed to the predict method.

+

Note that this method is only relevant if +enable_metadata_routing=True (see sklearn.set_config()). +Please see User Guide on how the routing +mechanism works.

+

The options for each parameter are:

+
    +
  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • +
  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • +
  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • +
  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

  • +
+

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the +existing request. This allows you to change the request for some +parameters and not others.

+
+

New in version 1.3.

+
+
+

Note

+

This method is only relevant if this estimator is used as a +sub-estimator of a meta-estimator, e.g. used inside a +Pipeline. Otherwise it has no effect.

+
+
+
Parameters:
+

x (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for x parameter in predict.

+
+
Returns:
+

self – The updated object.

+
+
Return type:
+

object

+
+
+
+ +
+ +
+
+

11.2. miplearn.classifiers.singleclass

+
+
+class miplearn.classifiers.singleclass.SingleClassFix(base_clf: ~sklearn.base.BaseEstimator, clone_fn: ~typing.Callable = <function clone>)
+

Bases: BaseEstimator

+

Some sklearn classifiers, such as logistic regression, have issues with datasets +that contain a single class. This meta-classifier fixes the issue. If the +training data contains a single class, this meta-classifier always returns that +class as a prediction. Otherwise, it fits the provided base classifier, +and returns its predictions instead.
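The sketch below illustrates the intended behavior on a single-class dataset; the toy data and the choice of LogisticRegression are assumptions made for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression
from miplearn.classifiers.singleclass import SingleClassFix

x = np.random.rand(6, 3)
y = np.zeros(6)  # every training sample belongs to class 0

# A plain LogisticRegression would fail to fit a single-class dataset;
# SingleClassFix memorizes the class and always predicts it instead.
clf = SingleClassFix(base_clf=LogisticRegression())
clf.fit(x, y)
print(clf.predict(x))  # expected: array of zeros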

+
+
+fit(x: ndarray, y: ndarray) None
+
+ +
+
+predict(x: ndarray) ndarray
+
+ +
+
+set_fit_request(*, x: bool | None | str = '$UNCHANGED$') SingleClassFix
+

Request metadata passed to the fit method.

+

Note that this method is only relevant if +enable_metadata_routing=True (see sklearn.set_config()). +Please see User Guide on how the routing +mechanism works.

+

The options for each parameter are:

+
    +
  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • +
  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • +
  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • +
  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

  • +
+

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the +existing request. This allows you to change the request for some +parameters and not others.

+
+

New in version 1.3.

+
+
+

Note

+

This method is only relevant if this estimator is used as a +sub-estimator of a meta-estimator, e.g. used inside a +Pipeline. Otherwise it has no effect.

+
+
+
Parameters:
+

x (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for x parameter in fit.

+
+
Returns:
+

self – The updated object.

+
+
Return type:
+

object

+
+
+
+ +
+
+set_predict_request(*, x: bool | None | str = '$UNCHANGED$') SingleClassFix
+

Request metadata passed to the predict method.

+

Note that this method is only relevant if +enable_metadata_routing=True (see sklearn.set_config()). +Please see User Guide on how the routing +mechanism works.

+

The options for each parameter are:

+
    +
  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • +
  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • +
  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • +
  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

  • +
+

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the +existing request. This allows you to change the request for some +parameters and not others.

+
+

New in version 1.3.

+
+
+

Note

+

This method is only relevant if this estimator is used as a +sub-estimator of a meta-estimator, e.g. used inside a +Pipeline. Otherwise it has no effect.

+
+
+
Parameters:
+

x (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for x parameter in predict.

+
+
Returns:
+

self – The updated object.

+
+
Return type:
+

object

+
+
+
+ +
+ +
+
+

11.3. miplearn.collectors.basic

+
+
+class miplearn.collectors.basic.BasicCollector(skip_lp: bool = False, write_mps: bool = True)
+

Bases: object

+
+
+collect(filenames: List[str], build_model: Callable, n_jobs: int = 1, progress: bool = False, verbose: bool = False) None
+
+ +
+ +
+
+

11.4. miplearn.extractors.fields

+
+
+class miplearn.extractors.fields.H5FieldsExtractor(instance_fields: List[str] | None = None, var_fields: List[str] | None = None, constr_fields: List[str] | None = None)
+

Bases: FeaturesExtractor

+
+
+get_constr_features(h5: H5File) ndarray
+
+ +
+
+get_instance_features(h5: H5File) ndarray
+
+ +
+
+get_var_features(h5: H5File) ndarray
+
+ +
+ +
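A minimal sketch of how this extractor might be configured is shown below; the field names are taken from the data fields produced by BasicCollector, and the HDF5 file name assumes the collector example from the guide has already been run:

from miplearn.extractors.fields import H5FieldsExtractor
from miplearn.h5 import H5File

# Variable features from LP solution values and reduced costs;
# instance features from the LP optimal value (assumed field names).
ext = H5FieldsExtractor(
    instance_fields=["lp_obj_value"],
    var_fields=["lp_var_values", "lp_var_reduced_costs"],
)
with H5File("data/tsp/00000.h5", "r") as h5:
    x_instance = ext.get_instance_features(h5)
    x_vars = ext.get_var_features(h5)
    print(x_instance.shape, x_vars.shape)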
+
+

11.5. miplearn.extractors.AlvLouWeh2017

+
+
+class miplearn.extractors.AlvLouWeh2017.AlvLouWeh2017Extractor(with_m1: bool = True, with_m2: bool = True, with_m3: bool = True)
+

Bases: FeaturesExtractor

+
+
+get_constr_features(h5: H5File) ndarray
+
+ +
+
+get_instance_features(h5: H5File) ndarray
+
+ +
+
+get_var_features(h5: H5File) ndarray
+
+
Computes static variable features described in:

Alvarez, A. M., Louveaux, Q., & Wehenkel, L. (2017). A machine learning-based +approximation of strong branching. INFORMS Journal on Computing, 29(1), +185-195.

+
+
+
+ +
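A short sketch of applying this extractor to a previously collected training file (the file name is an assumption):

from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor
from miplearn.h5 import H5File

ext = AlvLouWeh2017Extractor()  # feature groups m1, m2, m3 enabled by default
with H5File("data/tsp/00000.h5", "r") as h5:
    x_vars = ext.get_var_features(h5)  # one row of features per decision variable
    print(x_vars.shape)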
+ +
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/api/components/index.html b/0.4/api/components/index.html new file mode 100644 index 00000000..ae0645f1 --- /dev/null +++ b/0.4/api/components/index.html @@ -0,0 +1,728 @@ + + + + + + + + 12. Components — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + + + +
+ + +
+ +
+ Contents +
+ +
+
+
+
+
+ +
+ +
+

12. Components

+
+

12.1. miplearn.components.primal.actions

+
+
+class miplearn.components.primal.actions.EnforceProximity(tol: float)
+

Bases: PrimalComponentAction

+
+
+perform(model: AbstractModel, var_names: ndarray, var_values: ndarray, stats: Dict | None) None
+
+ +
+ +
+
+class miplearn.components.primal.actions.FixVariables
+

Bases: PrimalComponentAction

+
+
+perform(model: AbstractModel, var_names: ndarray, var_values: ndarray, stats: Dict | None) None
+
+ +
+ +
+
+class miplearn.components.primal.actions.PrimalComponentAction
+

Bases: ABC

+
+
+abstract perform(model: AbstractModel, var_names: ndarray, var_values: ndarray, stats: Dict | None) None
+
+ +
+ +
+
+class miplearn.components.primal.actions.SetWarmStart
+

Bases: PrimalComponentAction

+
+
+perform(model: AbstractModel, var_names: ndarray, var_values: ndarray, stats: Dict | None) None
+
+ +
+ +
+
+

12.2. miplearn.components.primal.expert

+
+
+class miplearn.components.primal.expert.ExpertPrimalComponent(action: PrimalComponentAction)
+

Bases: object

+
+
+before_mip(test_h5: str, model: AbstractModel, stats: Dict[str, Any]) None
+
+ +
+
+fit(train_h5: List[str]) None
+
+ +
+ +
+
+

12.3. miplearn.components.primal.indep

+
+
+class miplearn.components.primal.indep.IndependentVarsPrimalComponent(base_clf: ~typing.Any, extractor: ~miplearn.extractors.abstract.FeaturesExtractor, action: ~miplearn.components.primal.actions.PrimalComponentAction, clone_fn: ~typing.Callable[[~typing.Any], ~typing.Any] = <function clone>)
+

Bases: object

+
+
+before_mip(test_h5: str, model: AbstractModel, stats: Dict[str, Any]) None
+
+ +
+
+fit(train_h5: List[str]) None
+
+ +
+ +
+
+

12.4. miplearn.components.primal.joint

+
+
+class miplearn.components.primal.joint.JointVarsPrimalComponent(clf: Any, extractor: FeaturesExtractor, action: PrimalComponentAction)
+

Bases: object

+
+
+before_mip(test_h5: str, model: AbstractModel, stats: Dict[str, Any]) None
+
+ +
+
+fit(train_h5: List[str]) None
+
+ +
+ +
+
+

12.5. miplearn.components.primal.mem

+
+
+class miplearn.components.primal.mem.MemorizingPrimalComponent(clf: Any, extractor: FeaturesExtractor, constructor: SolutionConstructor, action: PrimalComponentAction)
+

Bases: object

+

Component that memorizes all solutions seen during training, then fits a +single classifier to predict which of the memorized solutions should be +provided to the solver. Optionally combines multiple memorized solutions +into a single, partial one.

+
+
+before_mip(test_h5: str, model: AbstractModel, stats: Dict[str, Any]) None
+
+ +
+
+fit(train_h5: List[str]) None
+
+ +
+ +
+
+class miplearn.components.primal.mem.MergeTopSolutions(k: int, thresholds: List[float])
+

Bases: SolutionConstructor

+

Warm start construction strategy that first selects the top k solutions, +then merges them into a single solution.

+

To merge the solutions, the strategy first computes the mean optimal value of each +decision variable, then: (i) sets the variable to zero if the mean is below +thresholds[0]; (ii) sets the variable to one if the mean is above thresholds[1]; +(iii) leaves the variable free otherwise.

+
+
+construct(y_proba: ndarray, solutions: ndarray) ndarray
+
+ +
+ +
+
+class miplearn.components.primal.mem.SelectTopSolutions(k: int)
+

Bases: SolutionConstructor

+

Warm start construction strategy that selects and returns the top k solutions.

+
+
+construct(y_proba: ndarray, solutions: ndarray) ndarray
+
+ +
+ +
+
+class miplearn.components.primal.mem.SolutionConstructor
+

Bases: ABC

+
+
+abstract construct(y_proba: ndarray, solutions: ndarray) ndarray
+
+ +
+ +
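The sketch below assembles the classes above into a single component; the k-nearest-neighbors classifier, the extractor field and the values of k and thresholds are illustrative assumptions:

from sklearn.neighbors import KNeighborsClassifier
from miplearn.components.primal.actions import SetWarmStart
from miplearn.components.primal.mem import (
    MemorizingPrimalComponent,
    MergeTopSolutions,
)
from miplearn.extractors.fields import H5FieldsExtractor

# Memorize training solutions, predict the 10 most promising ones for a new
# instance, merge them into a partial solution and pass it as a warm start.
comp = MemorizingPrimalComponent(
    clf=KNeighborsClassifier(n_neighbors=25),
    extractor=H5FieldsExtractor(instance_fields=["static_var_obj_coeffs"]),
    constructor=MergeTopSolutions(k=10, thresholds=[0.25, 0.75]),
    action=SetWarmStart(),
)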
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/api/helpers/index.html b/0.4/api/helpers/index.html new file mode 100644 index 00000000..35928745 --- /dev/null +++ b/0.4/api/helpers/index.html @@ -0,0 +1,473 @@ + + + + + + + + 14. Helpers — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ + +
+
+ +
+ +
+

14. Helpers

+
+

14.1. miplearn.io

+
+
+miplearn.io.gzip(filename: str) None
+
+ +
+
+miplearn.io.read_pkl_gz(filename: str) Any
+
+ +
+
+miplearn.io.write_pkl_gz(objs: List[Any], dirname: str, prefix: str = '', n_jobs: int = 1, progress: bool = False) List[str]
+
+ +
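A small sketch of the round trip through these helpers, assuming any picklable Python objects may be stored:

from miplearn.io import write_pkl_gz, read_pkl_gz

# Writes data/demo/00000.pkl.gz and data/demo/00001.pkl.gz, returning the paths
filenames = write_pkl_gz([{"n": 10}, {"n": 20}], "data/demo")
print(filenames)

# Read one of the objects back
obj = read_pkl_gz(filenames[0])
print(obj)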
+
+

14.2. miplearn.h5

+
+
+class miplearn.h5.H5File(filename: str, mode: str = 'r+')
+

Bases: object

+
+
+close() None
+
+ +
+
+get_array(key: str) ndarray | None
+
+ +
+
+get_bytes(key: str) bytes | bytearray | None
+
+ +
+
+get_scalar(key: str) Any | None
+
+ +
+
+get_sparse(key: str) coo_matrix | None
+
+ +
+
+put_array(key: str, value: ndarray | None) None
+
+ +
+
+put_bytes(key: str, value: bytes | bytearray) None
+
+ +
+
+put_scalar(key: str, value: Any) None
+
+ +
+
+put_sparse(key: str, value: coo_matrix) None
+
+ +
+ +
+
+ + +
+ + +
+ + 13. Solvers + +
+ +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/api/problems/index.html b/0.4/api/problems/index.html new file mode 100644 index 00000000..439b9cb7 --- /dev/null +++ b/0.4/api/problems/index.html @@ -0,0 +1,783 @@ + + + + + + + + 10. Benchmark Problems — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ + +
+
+ +
+ +
+

10. Benchmark Problems

+
+

10.1. miplearn.problems.binpack

+
+
+class miplearn.problems.binpack.BinPackData(sizes: ndarray, capacity: int)
+

Data for the bin packing problem.

+
+
Parameters:
+
    +
  • sizes (numpy.ndarray) – Sizes of the items

  • +
  • capacity (int) – Capacity of the bin

  • +
+
+
+
+ +
+
+class miplearn.problems.binpack.BinPackGenerator(n: rv_frozen, sizes: rv_frozen, capacity: rv_frozen, sizes_jitter: rv_frozen, capacity_jitter: rv_frozen, fix_items: bool)
+

Random instance generator for the bin packing problem.

+

If fix_items=False, the class samples the user-provided probability distributions +n, sizes and capacity to decide, respectively, the number of items, the sizes of +the items and capacity of the bin. All values are sampled independently.

+

If fix_items=True, the class creates a reference instance, using the method +previously described, then generates additional instances by perturbing its item +sizes and bin capacity. More specifically, the sizes of the items are set to s_i +* gamma_i where s_i is the size of the i-th item in the reference instance and +gamma_i is sampled from sizes_jitter. Similarly, the bin capacity is set to B * +beta, where B is the reference bin capacity and beta is sampled from +capacity_jitter. The number of items remains the same across all generated +instances.

+
+
Parameters:
+
    +
  • n – Probability distribution for the number of items.

  • +
  • sizes – Probability distribution for the item sizes.

  • +
  • capacity – Probability distribution for the bin capacity.

  • +
  • sizes_jitter – Probability distribution for the item size randomization.

  • +
  • capacity_jitter – Probability distribution for the bin capacity.

  • +
  • fix_items – If True, generates a reference instance, then applies some perturbation to it. +If False, generates completely different instances.

  • +
+
+
+
+
+generate(n_samples: int) List[BinPackData]
+

Generates random instances.

+
+
Parameters:
+

n_samples – Number of samples to generate.

+
+
+
+ +
+ +
+
+miplearn.problems.binpack.build_binpack_model_gurobipy(data: str | BinPackData) GurobiModel
+

Converts bin packing problem data into a concrete Gurobipy model.

+
+ +
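A sketch of how the generator and the model builder might be used together; the distribution parameters are arbitrary illustrative choices:

from scipy.stats import randint, uniform
from miplearn.problems.binpack import (
    BinPackGenerator,
    build_binpack_model_gurobipy,
)

# Five perturbed copies of a reference instance with 10 items
gen = BinPackGenerator(
    n=randint(low=10, high=11),
    sizes=uniform(loc=0, scale=25),
    capacity=uniform(loc=100, scale=0),
    sizes_jitter=uniform(loc=0.9, scale=0.2),
    capacity_jitter=uniform(loc=0.9, scale=0.2),
    fix_items=True,
)
data = gen.generate(5)

# Build and solve a Gurobi model for the first instance
model = build_binpack_model_gurobipy(data[0])
model.optimize()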
+
+

10.2. miplearn.problems.multiknapsack

+
+
+class miplearn.problems.multiknapsack.MultiKnapsackData(prices: ndarray, capacities: ndarray, weights: ndarray)
+

Data for the multi-dimensional knapsack problem

+
+
Parameters:
+
    +
  • prices (numpy.ndarray) – Item prices.

  • +
  • capacities (numpy.ndarray) – Knapsack capacities.

  • +
  • weights (numpy.ndarray) – Matrix of item weights.

  • +
+
+
+
+ +
+
+class miplearn.problems.multiknapsack.MultiKnapsackGenerator(n: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_discrete_frozen object>, m: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_discrete_frozen object>, w: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_discrete_frozen object>, K: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_discrete_frozen object>, u: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, alpha: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, fix_w: bool = False, w_jitter: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, p_jitter: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, round: bool = True)
+

Random instance generator for the multi-dimensional knapsack problem.

+

Instances have a random number of items (or variables) and a random number of +knapsacks (or constraints), as specified by the provided probability +distributions n and m, respectively. The weight of each item i on knapsack +j is sampled independently from the provided distribution w. The capacity of +knapsack j is set to alpha_j * sum(w[i,j] for i in range(n)), +where alpha_j, the tightness ratio, is sampled from the provided probability +distribution alpha.

+

To make the instances more challenging, the costs of the items are linearly correlated with their average weights. More specifically, the price of each item i is set to sum(w[i,j]/m for j in range(m)) + K * u_i, where K, the correlation coefficient, and u_i, the correlation multiplier, are sampled from the provided probability distributions. Note that K is only sampled once for the entire instance.

+

If fix_w=True, then weights[i,j] are kept the same in all generated +instances. This also implies that n and m are kept fixed. Although the prices and +capacities are derived from weights[i,j], as long as u and K are not +constants, the generated instances will still not be completely identical.

+

If a probability distribution w_jitter is provided, then item weights will be +set to w[i,j] * gamma[i,j] where gamma[i,j] is sampled from w_jitter. +When combined with fix_w=True, this argument may be used to generate instances +where the weight of each item is roughly the same, but not exactly identical, +across all instances. The prices of the items and the capacities of the knapsacks +will be calculated as above, but using these perturbed weights instead.

+

By default, all generated prices, weights and capacities are rounded to the nearest integer. If round=False is provided, this rounding is disabled.

+
+
Parameters:
+
    +
  • n (rv_discrete) – Probability distribution for the number of items (or variables).

  • +
  • m (rv_discrete) – Probability distribution for the number of knapsacks (or constraints).

  • +
  • w (rv_continuous) – Probability distribution for the item weights.

  • +
  • K (rv_continuous) – Probability distribution for the profit correlation coefficient.

  • +
  • u (rv_continuous) – Probability distribution for the profit multiplier.

  • +
  • alpha (rv_continuous) – Probability distribution for the tightness ratio.

  • +
  • fix_w (boolean) – If true, weights are kept the same (minus the noise from w_jitter) in all +instances.

  • +
  • w_jitter (rv_continuous) – Probability distribution for random noise added to the weights.

  • +
  • round (boolean) – If true, all prices, weights and capacities are rounded to the nearest +integer.

  • +
+
+
+
+ +
+
+miplearn.problems.multiknapsack.build_multiknapsack_model_gurobipy(data: str | MultiKnapsackData) GurobiModel
+

Converts multi-knapsack problem data into a concrete Gurobipy model.

+
+ +
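A sketch of generating and solving instances; the distribution parameters are illustrative, and it is assumed the class offers the same generate(n_samples) method as the other generators:

from scipy.stats import randint, uniform
from miplearn.problems.multiknapsack import (
    MultiKnapsackGenerator,
    build_multiknapsack_model_gurobipy,
)

# 100 items, 5 knapsacks, 25% tightness ratio (illustrative values)
gen = MultiKnapsackGenerator(
    n=randint(low=100, high=101),
    m=randint(low=5, high=6),
    w=uniform(loc=0, scale=1000),
    K=uniform(loc=500, scale=0),
    u=uniform(loc=0, scale=1),
    alpha=uniform(loc=0.25, scale=0),
    fix_w=True,
    w_jitter=uniform(loc=0.95, scale=0.1),
)
data = gen.generate(3)
model = build_multiknapsack_model_gurobipy(data[0])
model.optimize()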
+
+

10.3. miplearn.problems.pmedian

+
+
+class miplearn.problems.pmedian.PMedianData(distances: ndarray, demands: ndarray, p: int, capacities: ndarray)
+

Data for the capacitated p-median problem

+
+
Parameters:
+
    +
  • distances (numpy.ndarray) – Matrix of distances between customer i and facility j.

  • +
  • demands (numpy.ndarray) – Customer demands.

  • +
  • p (int) – Number of medians that need to be chosen.

  • +
  • capacities (numpy.ndarray) – Facility capacities.

  • +
+
+
+
+ +
+
+class miplearn.problems.pmedian.PMedianGenerator(x: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, y: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, n: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_discrete_frozen object>, p: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_discrete_frozen object>, demands: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, capacities: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, distances_jitter: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, demands_jitter: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, capacities_jitter: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, fixed: bool = True)
+

Random generator for the capacitated p-median problem.

+

This class first decides the number of customers and the parameter p by +sampling the provided n and p distributions, respectively. Then, for each +customer i, the class builds its geographical location (xi, yi) by sampling +the provided x and y distributions. For each i, the demand for customer i +and the capacity of facility i are decided by sampling the distributions +demands and capacities, respectively. Finally, the costs w[i,j] are set to +the Euclidean distance between the locations of customers i and j.

+

If fixed=True, then the number of customers, their locations, the parameter +p, the demands and the capacities are only sampled from their respective +distributions exactly once, to build a reference instance which is then +perturbed. Specifically, for each perturbation, the distances, demands and +capacities are multiplied by factors sampled from the distributions +distances_jitter, demands_jitter and capacities_jitter, respectively. The +result is a list of instances that have the same set of customers, but slightly +different demands, capacities and distances.

+
+
Parameters:
+
    +
  • x – Probability distribution for the x-coordinate of the points.

  • +
  • y – Probability distribution for the y-coordinate of the points.

  • +
  • n – Probability distribution for the number of customers.

  • +
  • p – Probability distribution for the number of medians.

  • +
  • demands – Probability distribution for the customer demands.

  • +
  • capacities – Probability distribution for the facility capacities.

  • +
  • distances_jitter – Probability distribution for the random scaling factor applied to distances.

  • +
  • demands_jitter – Probability distribution for the random scaling factor applied to demands.

  • +
  • capacities_jitter – Probability distribution for the random scaling factor applied to capacities.

  • +
  • fixed – If True, then customers are kept the same across instances.

  • +
+
+
+
+ +
+
+miplearn.problems.pmedian.build_pmedian_model_gurobipy(data: str | PMedianData) GurobiModel
+

Converts capacitated p-median data into a concrete Gurobipy model.

+
+ +
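A sketch of generating and solving instances; the distribution parameters are illustrative, and it is assumed the class offers the same generate(n_samples) method as the other generators:

from scipy.stats import randint, uniform
from miplearn.problems.pmedian import (
    PMedianGenerator,
    build_pmedian_model_gurobipy,
)

# 50 customers, 5 medians, locations in a 100x100 square (illustrative values)
gen = PMedianGenerator(
    x=uniform(loc=0.0, scale=100.0),
    y=uniform(loc=0.0, scale=100.0),
    n=randint(low=50, high=51),
    p=randint(low=5, high=6),
    demands=uniform(loc=1, scale=10),
    capacities=uniform(loc=50, scale=50),
    distances_jitter=uniform(loc=0.95, scale=0.1),
    demands_jitter=uniform(loc=0.95, scale=0.1),
    capacities_jitter=uniform(loc=0.95, scale=0.1),
    fixed=True,
)
data = gen.generate(3)
model = build_pmedian_model_gurobipy(data[0])
model.optimize()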
+
+

10.4. miplearn.problems.setcover

+
+
+class miplearn.problems.setcover.SetCoverData(costs: numpy.ndarray, incidence_matrix: numpy.ndarray)
+
+ +
+
+

10.5. miplearn.problems.setpack

+
+
+class miplearn.problems.setpack.SetPackData(costs: numpy.ndarray, incidence_matrix: numpy.ndarray)
+
+ +
+
+

10.6. miplearn.problems.stab

+
+
+class miplearn.problems.stab.MaxWeightStableSetData(graph: networkx.classes.graph.Graph, weights: numpy.ndarray)
+
+ +
+
+class miplearn.problems.stab.MaxWeightStableSetGenerator(w: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, n: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_discrete_frozen object>, p: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, fix_graph: bool = True)
+

Random instance generator for the Maximum-Weight Stable Set Problem.

+

The generator has two modes of operation. When fix_graph=True is provided, +one random Erdős-Rényi graph $G_{n,p}$ is generated in the constructor, where $n$ +and $p$ are sampled from user-provided probability distributions n and p. To +generate each instance, the generator independently samples each $w_v$ from the +user-provided probability distribution w.

+

When fix_graph=False, a new random graph is generated for each instance; the +remaining parameters are sampled in the same way.

+
+ +
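A sketch of the generator in its fixed-graph mode; the parameters are illustrative, and it is assumed the class offers the same generate(n_samples) method as the other generators:

from scipy.stats import randint, uniform
from miplearn.problems.stab import MaxWeightStableSetGenerator

# One fixed G(50, 0.1) random graph; vertex weights resampled for each instance
gen = MaxWeightStableSetGenerator(
    w=uniform(loc=10.0, scale=1.0),
    n=randint(low=50, high=51),
    p=uniform(loc=0.1, scale=0.0),
    fix_graph=True,
)
data = gen.generate(3)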
+
+

10.7. miplearn.problems.tsp

+
+
+class miplearn.problems.tsp.TravelingSalesmanData(n_cities: int, distances: numpy.ndarray)
+
+ +
+
+class miplearn.problems.tsp.TravelingSalesmanGenerator(x: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, y: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, n: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_discrete_frozen object>, gamma: ~scipy.stats._distn_infrastructure.rv_frozen = <scipy.stats._distn_infrastructure.rv_continuous_frozen object>, fix_cities: bool = True, round: bool = True)
+

Random generator for the Traveling Salesman Problem.

+
+ +
+
+

10.8. miplearn.problems.uc

+
+
+class miplearn.problems.uc.UnitCommitmentData(demand: numpy.ndarray, min_power: numpy.ndarray, max_power: numpy.ndarray, min_uptime: numpy.ndarray, min_downtime: numpy.ndarray, cost_startup: numpy.ndarray, cost_prod: numpy.ndarray, cost_fixed: numpy.ndarray)
+
+ +
+
+miplearn.problems.uc.build_uc_model_gurobipy(data: str | UnitCommitmentData) GurobiModel
+

Models the unit commitment problem according to equations (1)-(5) of:

+
+

Bendotti, P., Fouilhoux, P. & Rottner, C. The min-up/min-down unit +commitment polytope. J Comb Optim 36, 1024-1058 (2018). +https://doi.org/10.1007/s10878-018-0273-y

+
+
+ +
+
+

10.9. miplearn.problems.vertexcover

+
+
+class miplearn.problems.vertexcover.MinWeightVertexCoverData(graph: networkx.classes.graph.Graph, weights: numpy.ndarray)
+
+ +
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/api/solvers/index.html b/0.4/api/solvers/index.html new file mode 100644 index 00000000..8d8efa13 --- /dev/null +++ b/0.4/api/solvers/index.html @@ -0,0 +1,736 @@ + + + + + + + + 13. Solvers — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + + + +
+ + + +
+
+
+
+ +
+ +
+

13. Solvers

+
+

13.1. miplearn.solvers.abstract

+
+
+class miplearn.solvers.abstract.AbstractModel
+

Bases: ABC

+
+
+WHERE_CUTS = 'cuts'
+
+ +
+
+WHERE_DEFAULT = 'default'
+
+ +
+
+WHERE_LAZY = 'lazy'
+
+ +
+
+abstract add_constrs(var_names: ndarray, constrs_lhs: ndarray, constrs_sense: ndarray, constrs_rhs: ndarray, stats: Dict | None = None) None
+
+ +
+
+abstract extract_after_load(h5: H5File) None
+
+ +
+
+abstract extract_after_lp(h5: H5File) None
+
+ +
+
+abstract extract_after_mip(h5: H5File) None
+
+ +
+
+abstract fix_variables(var_names: ndarray, var_values: ndarray, stats: Dict | None = None) None
+
+ +
+
+lazy_enforce(violations: List[Any]) None
+
+ +
+
+abstract optimize() None
+
+ +
+
+abstract relax() AbstractModel
+
+ +
+
+set_cuts(cuts: List) None
+
+ +
+
+abstract set_warm_starts(var_names: ndarray, var_values: ndarray, stats: Dict | None = None) None
+
+ +
+
+abstract write(filename: str) None
+
+ +
+ +
+
+

13.2. miplearn.solvers.gurobi

+
+
+class miplearn.solvers.gurobi.GurobiModel(inner: Model, lazy_separate: Callable | None = None, lazy_enforce: Callable | None = None, cuts_separate: Callable | None = None, cuts_enforce: Callable | None = None)
+

Bases: AbstractModel

+
+
+add_constr(constr: Any) None
+
+ +
+
+add_constrs(var_names: ndarray, constrs_lhs: ndarray, constrs_sense: ndarray, constrs_rhs: ndarray, stats: Dict | None = None) None
+
+ +
+
+extract_after_load(h5: H5File) None
+

Given a model that has just been loaded, extracts static problem +features, such as variable names and types, objective coefficients, etc.

+
+ +
+
+extract_after_lp(h5: H5File) None
+

Given a linear programming model that has just been solved, extracts +dynamic problem features, such as optimal LP solution, basis status, +etc.

+
+ +
+
+extract_after_mip(h5: H5File) None
+

Given a mixed-integer linear programming model that has just been +solved, extracts dynamic problem features, such as optimal MIP solution.

+
+ +
+
+fix_variables(var_names: ndarray, var_values: ndarray, stats: Dict | None = None) None
+
+ +
+
+optimize() None
+
+ +
+
+relax() GurobiModel
+
+ +
+
+set_time_limit(time_limit_sec: float) None
+
+ +
+
+set_warm_starts(var_names: ndarray, var_values: ndarray, stats: Dict | None = None) None
+
+ +
+
+write(filename: str) None
+
+ +
+ +
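GurobiModel wraps a plain gurobipy model and exposes the data-extraction and warm-start hooks used by the rest of the framework. A minimal sketch follows; the tiny knapsack model is an illustrative assumption:

import gurobipy as gp
from miplearn.solvers.gurobi import GurobiModel

# Build a small gurobipy model, then wrap it
inner = gp.Model()
x = inner.addVars(3, vtype=gp.GRB.BINARY, name="x")
inner.setObjective(x[0] + 2 * x[1] + 3 * x[2], gp.GRB.MAXIMIZE)
inner.addConstr(x[0] + x[1] + x[2] <= 2)

model = GurobiModel(inner)
model.optimize()
model.write("tiny.lp")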
+
+

13.3. miplearn.solvers.learning

+
+
+class miplearn.solvers.learning.LearningSolver(components: List[Any], skip_lp: bool = False)
+

Bases: object

+
+
+fit(data_filenames: List[str]) None
+
+ +
+
+optimize(model: str | AbstractModel, build_model: Callable | None = None) Dict[str, Any]
+
+ +
+ +
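A sketch of the intended train/solve loop; the file paths and the choice of component are illustrative, and it is assumed that fit receives the HDF5 files previously produced by a collector:

from glob import glob

from miplearn.components.primal.actions import SetWarmStart
from miplearn.components.primal.expert import ExpertPrimalComponent
from miplearn.problems.tsp import build_tsp_model_gurobipy
from miplearn.solvers.learning import LearningSolver

# Solver with a single component that warm-starts the MIP with
# previously computed solutions.
solver = LearningSolver(components=[ExpertPrimalComponent(action=SetWarmStart())])

# Train on previously collected data (assumed to be the .h5 files)
solver.fit(glob("data/tsp/*.h5"))

# Solve one instance; a dictionary of statistics is returned
stats = solver.optimize("data/tsp/00009.pkl.gz", build_tsp_model_gurobipy)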
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/genindex/index.html b/0.4/genindex/index.html new file mode 100644 index 00000000..5b2d940e --- /dev/null +++ b/0.4/genindex/index.html @@ -0,0 +1,872 @@ + + + + + + + Index — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
[General index page (entries A–W): alphabetical listing of API symbols; individual index entries and navigation omitted.]
+ + + + + + \ No newline at end of file diff --git a/0.4/guide/collectors.ipynb b/0.4/guide/collectors.ipynb new file mode 100644 index 00000000..443802ed --- /dev/null +++ b/0.4/guide/collectors.ipynb @@ -0,0 +1,288 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "505cea0b-5f5d-478a-9107-42bb5515937d", + "metadata": {}, + "source": [ + "# Training Data Collectors\n", + "The first step in solving mixed-integer optimization problems with the assistance of supervised machine learning methods is solving a large set of training instances and collecting the raw training data. In this section, we describe the various training data collectors included in MIPLearn. Additionally, the framework follows the convention of storing all training data in files with a specific data format (namely, HDF5). In this section, we briefly describe this format and the rationale for choosing it.\n", + "\n", + "## Overview\n", + "\n", + "In MIPLearn, a **collector** is a class that solves or analyzes the problem and collects raw data which may be later useful for machine learning methods. Collectors, by convention, take as input: (i) a list of problem data filenames, in gzipped pickle format, ending with `.pkl.gz`; (ii) a function that builds the optimization model, such as `build_tsp_model`. After processing is done, collectors store the training data in a HDF5 file located alongside with the problem data. For example, if the problem data is stored in file `problem.pkl.gz`, then the collector writes to `problem.h5`. Collectors are, in general, very time consuming, as they may need to solve the problem to optimality, potentially multiple times.\n", + "\n", + "## HDF5 Format\n", + "\n", + "MIPLearn stores all training data in [HDF5](HDF5) (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the [National Center for Supercomputing Applications][NCSA] (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. Compared to other formats, such as CSV, JSON or SQLite, the HDF5 format provides several advantages for MIPLearn, including:\n", + "\n", + "- *Storage of multiple scalars, vectors and matrices in a single file* --- This allows MIPLearn to store all training data related to a given problem instance in a single file, which makes training data easier to store, organize and transfer.\n", + "- *High-performance partial I/O* --- Partial I/O allows MIPLearn to read a single element from the training data (e.g. value of the optimal solution) without loading the entire file to memory or reading it from beginning to end, which dramatically improves performance and reduces memory requirements. This is especially important when processing a large number of training data files.\n", + "- *On-the-fly compression* --- HDF5 files can be transparently compressed, using the gzip method, which reduces storage requirements and accelerates network transfers.\n", + "- *Stable, portable and well-supported data format* --- Training data files are typically expensive to generate. Having a stable and well supported data format ensures that these files remain usable in the future, potentially even by other non-Python MIP/ML frameworks.\n", + "\n", + "MIPLearn currently uses HDF5 as simple key-value storage for numerical data; more advanced features of the format, such as metadata, are not currently used. 
Although files generated by MIPLearn can be read with any HDF5 library, such as [h5py][h5py], some convenience functions are provided to make the access more simple and less error-prone. Specifically, the class [H5File][H5File], which is built on top of h5py, provides the methods [put_scalar][put_scalar], [put_array][put_array], [put_sparse][put_sparse], [put_bytes][put_bytes] to store, respectively, scalar values, dense multi-dimensional arrays, sparse multi-dimensional arrays and arbitrary binary data. The corresponding *get* methods are also provided. Compared to pure h5py methods, these methods automatically perform type-checking and gzip compression. The example below shows their usage.\n", + "\n", + "[HDF5]: https://en.wikipedia.org/wiki/Hierarchical_Data_Format\n", + "[NCSA]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications\n", + "[h5py]: https://www.h5py.org/\n", + "[H5File]: ../../api/helpers/#miplearn.h5.H5File\n", + "[put_scalar]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n", + "[put_array]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n", + "[put_sparse]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n", + "[put_bytes]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n", + "\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "f906fe9c", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-30T22:19:30.826123021Z", + "start_time": "2024-01-30T22:19:30.766066926Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "x1 = 1\n", + "x2 = hello world\n", + "x3 = [1 2 3]\n", + "x4 = [[0.37454012 0.9507143 0.7319939 ]\n", + " [0.5986585 0.15601864 0.15599452]\n", + " [0.05808361 0.8661761 0.601115 ]]\n", + "x5 = (3, 2)\t0.6803075671195984\n", + " (2, 3)\t0.4504992663860321\n", + " (0, 4)\t0.013264961540699005\n", + " (2, 0)\t0.9422017335891724\n", + " (2, 4)\t0.5632882118225098\n", + " (1, 2)\t0.38541650772094727\n", + " (1, 1)\t0.015966251492500305\n", + " (0, 3)\t0.2308938205242157\n", + " (4, 4)\t0.24102546274662018\n", + " (3, 1)\t0.6832635402679443\n", + " (1, 3)\t0.6099966764450073\n", + " (3, 0)\t0.83319491147995\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "import scipy.sparse\n", + "\n", + "from miplearn.h5 import H5File\n", + "\n", + "# Set random seed to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Create a new empty HDF5 file\n", + "with H5File(\"test.h5\", \"w\") as h5:\n", + " # Store a scalar\n", + " h5.put_scalar(\"x1\", 1)\n", + " h5.put_scalar(\"x2\", \"hello world\")\n", + "\n", + " # Store a dense array and a dense matrix\n", + " h5.put_array(\"x3\", np.array([1, 2, 3]))\n", + " h5.put_array(\"x4\", np.random.rand(3, 3))\n", + "\n", + " # Store a sparse matrix\n", + " h5.put_sparse(\"x5\", scipy.sparse.random(5, 5, 0.5))\n", + "\n", + "# Re-open the file we just created and print\n", + "# previously-stored data\n", + "with H5File(\"test.h5\", \"r\") as h5:\n", + " print(\"x1 =\", h5.get_scalar(\"x1\"))\n", + " print(\"x2 =\", h5.get_scalar(\"x2\"))\n", + " print(\"x3 =\", h5.get_array(\"x3\"))\n", + " print(\"x4 =\", h5.get_array(\"x4\"))\n", + " print(\"x5 =\", h5.get_sparse(\"x5\"))" + ] + }, + { + "cell_type": "markdown", + "id": "50441907", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "d0000c8d", + "metadata": {}, + "source": [ + "## Basic collector\n", + "\n", + "[BasicCollector][BasicCollector] is 
the most fundamental collector, and performs the following steps:\n", + "\n", + "1. Extracts all model data, such as objective function and constraint right-hand sides into numpy arrays, which can later be easily and efficiently accessed without rebuilding the model or invoking the solver;\n", + "2. Solves the linear relaxation of the problem and stores its optimal solution, basis status and sensitivity information, among other information;\n", + "3. Solves the original mixed-integer optimization problem to optimality and stores its optimal solution, along with solve statistics, such as number of explored nodes and wallclock time.\n", + "\n", + "Data extracted in Phases 1, 2 and 3 above are prefixed, respectively as `static_`, `lp_` and `mip_`. The entire set of fields is shown in the table below.\n", + "\n", + "[BasicCollector]: ../../api/collectors/#miplearn.collectors.basic.BasicCollector\n" + ] + }, + { + "cell_type": "markdown", + "id": "6529f667", + "metadata": {}, + "source": [ + "### Data fields\n", + "\n", + "| Field | Type | Description |\n", + "|-----------------------------------|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------|\n", + "| `static_constr_lhs` | `(nconstrs, nvars)` | Constraint left-hand sides, in sparse matrix format |\n", + "| `static_constr_names` | `(nconstrs,)` | Constraint names |\n", + "| `static_constr_rhs` | `(nconstrs,)` | Constraint right-hand sides |\n", + "| `static_constr_sense` | `(nconstrs,)` | Constraint senses (`\"<\"`, `\">\"` or `\"=\"`) |\n", + "| `static_obj_offset` | `float` | Constant value added to the objective function |\n", + "| `static_sense` | `str` | `\"min\"` if minimization problem or `\"max\"` otherwise |\n", + "| `static_var_lower_bounds` | `(nvars,)` | Variable lower bounds |\n", + "| `static_var_names` | `(nvars,)` | Variable names |\n", + "| `static_var_obj_coeffs` | `(nvars,)` | Objective coefficients |\n", + "| `static_var_types` | `(nvars,)` | Types of the decision variables (`\"C\"`, `\"B\"` and `\"I\"` for continuous, binary and integer, respectively) |\n", + "| `static_var_upper_bounds` | `(nvars,)` | Variable upper bounds |\n", + "| `lp_constr_basis_status` | `(nconstr,)` | Constraint basis status (`0` for basic, `-1` for non-basic) |\n", + "| `lp_constr_dual_values` | `(nconstr,)` | Constraint dual value (or shadow price) |\n", + "| `lp_constr_sa_rhs_{up,down}` | `(nconstr,)` | Sensitivity information for the constraint RHS |\n", + "| `lp_constr_slacks` | `(nconstr,)` | Constraint slack in the solution to the LP relaxation |\n", + "| `lp_obj_value` | `float` | Optimal value of the LP relaxation |\n", + "| `lp_var_basis_status` | `(nvars,)` | Variable basis status (`0`, `-1`, `-2` or `-3` for basic, non-basic at lower bound, non-basic at upper bound, and superbasic, respectively) |\n", + "| `lp_var_reduced_costs` | `(nvars,)` | Variable reduced costs |\n", + "| `lp_var_sa_{obj,ub,lb}_{up,down}` | `(nvars,)` | Sensitivity information for the variable objective coefficient, lower and upper bound. 
|\n", + "| `lp_var_values` | `(nvars,)` | Optimal solution to the LP relaxation |\n", + "| `lp_wallclock_time` | `float` | Time taken to solve the LP relaxation (in seconds) |\n", + "| `mip_constr_slacks` | `(nconstrs,)` | Constraint slacks in the best MIP solution |\n", + "| `mip_gap` | `float` | Relative MIP optimality gap |\n", + "| `mip_node_count` | `float` | Number of explored branch-and-bound nodes |\n", + "| `mip_obj_bound` | `float` | Dual bound |\n", + "| `mip_obj_value` | `float` | Value of the best MIP solution |\n", + "| `mip_var_values` | `(nvars,)` | Best MIP solution |\n", + "| `mip_wallclock_time` | `float` | Time taken to solve the MIP (in seconds) |" + ] + }, + { + "cell_type": "markdown", + "id": "f2894594", + "metadata": {}, + "source": [ + "### Example\n", + "\n", + "The example below shows how to generate a few random instances of the traveling salesman problem, store its problem data, run the collector and print some of the training data to screen." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "ac6f8c6f", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-30T22:19:30.826707866Z", + "start_time": "2024-01-30T22:19:30.825940503Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "lp_obj_value = 2909.0\n", + "mip_obj_value = 2921.0\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from glob import glob\n", + "\n", + "from miplearn.problems.tsp import (\n", + " TravelingSalesmanGenerator,\n", + " build_tsp_model_gurobipy,\n", + ")\n", + "from miplearn.io import write_pkl_gz\n", + "from miplearn.h5 import H5File\n", + "from miplearn.collectors.basic import BasicCollector\n", + "\n", + "# Set random seed to make example reproducible.\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate a few instances of the traveling salesman problem.\n", + "data = TravelingSalesmanGenerator(\n", + " n=randint(low=10, high=11),\n", + " x=uniform(loc=0.0, scale=1000.0),\n", + " y=uniform(loc=0.0, scale=1000.0),\n", + " gamma=uniform(loc=0.90, scale=0.20),\n", + " fix_cities=True,\n", + " round=True,\n", + ").generate(10)\n", + "\n", + "# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n", + "write_pkl_gz(data, \"data/tsp\")\n", + "\n", + "# Solve all instances and collect basic solution information.\n", + "# Process at most four instances in parallel.\n", + "bc = BasicCollector()\n", + "bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model_gurobipy, n_jobs=4)\n", + "\n", + "# Read and print some training data for the first instance.\n", + "with H5File(\"data/tsp/00000.h5\", \"r\") as h5:\n", + " print(\"lp_obj_value = \", h5.get_scalar(\"lp_obj_value\"))\n", + " print(\"mip_obj_value = \", h5.get_scalar(\"mip_obj_value\"))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "78f0b07a", + "metadata": { + "ExecuteTime": { + "start_time": "2024-01-30T22:19:30.826179789Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + 
"pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/guide/collectors/index.html b/0.4/guide/collectors/index.html new file mode 100644 index 00000000..443f6f56 --- /dev/null +++ b/0.4/guide/collectors/index.html @@ -0,0 +1,595 @@ + + + + + + + + 6. Training Data Collectors — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + + + +
+ + + +
+
+
+
+ +
+ +
+

6. Training Data Collectors

+

The first step in solving mixed-integer optimization problems with the assistance of supervised machine learning methods is solving a large set of training instances and collecting the raw training data. In this section, we describe the various training data collectors included in MIPLearn. Additionally, the framework follows the convention of storing all training data in files with a specific data format (namely, HDF5). In this section, we briefly describe this format and the rationale for +choosing it.

+
+

6.1. Overview

+

In MIPLearn, a collector is a class that solves or analyzes the problem and collects raw data which may be later useful for machine learning methods. Collectors, by convention, take as input: (i) a list of problem data filenames, in gzipped pickle format, ending with .pkl.gz; (ii) a function that builds the optimization model, such as build_tsp_model. After processing is done, collectors store the training data in a HDF5 file located alongside with the problem data. For example, if +the problem data is stored in file problem.pkl.gz, then the collector writes to problem.h5. Collectors are, in general, very time consuming, as they may need to solve the problem to optimality, potentially multiple times.

+
+
+

6.2. HDF5 Format

+

MIPLearn stores all training data in HDF5 (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the National Center for Supercomputing Applications (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. Compared to other formats, such as CSV, JSON or SQLite, the +HDF5 format provides several advantages for MIPLearn, including:

+
    +
  • Storage of multiple scalars, vectors and matrices in a single file — This allows MIPLearn to store all training data related to a given problem instance in a single file, which makes training data easier to store, organize and transfer.

  • +
  • High-performance partial I/O — Partial I/O allows MIPLearn to read a single element from the training data (e.g. value of the optimal solution) without loading the entire file to memory or reading it from beginning to end, which dramatically improves performance and reduces memory requirements. This is especially important when processing a large number of training data files.

  • +
  • On-the-fly compression — HDF5 files can be transparently compressed, using the gzip method, which reduces storage requirements and accelerates network transfers.

  • +
  • Stable, portable and well-supported data format — Training data files are typically expensive to generate. Having a stable and well supported data format ensures that these files remain usable in the future, potentially even by other non-Python MIP/ML frameworks.

  • +
+

MIPLearn currently uses HDF5 as a simple key-value store for numerical data; more advanced features of the format, such as metadata, are not currently used. Although files generated by MIPLearn can be read with any HDF5 library, such as h5py, some convenience functions are provided to make access simpler and less error-prone. Specifically, the class H5File, which is built on top of h5py, provides the methods put_scalar, put_array, put_sparse and put_bytes to store, respectively, scalar values, dense multi-dimensional arrays, sparse multi-dimensional arrays and arbitrary binary data. The corresponding get methods are also provided. Compared to pure h5py methods, these methods automatically perform type-checking and gzip compression. The example below shows their usage.

+
+

Example

+
+
[1]:
+
+
+
import numpy as np
+import scipy.sparse
+
+from miplearn.h5 import H5File
+
+# Set random seed to make example reproducible
+np.random.seed(42)
+
+# Create a new empty HDF5 file
+with H5File("test.h5", "w") as h5:
+    # Store a scalar
+    h5.put_scalar("x1", 1)
+    h5.put_scalar("x2", "hello world")
+
+    # Store a dense array and a dense matrix
+    h5.put_array("x3", np.array([1, 2, 3]))
+    h5.put_array("x4", np.random.rand(3, 3))
+
+    # Store a sparse matrix
+    h5.put_sparse("x5", scipy.sparse.random(5, 5, 0.5))
+
+# Re-open the file we just created and print
+# previously-stored data
+with H5File("test.h5", "r") as h5:
+    print("x1 =", h5.get_scalar("x1"))
+    print("x2 =", h5.get_scalar("x2"))
+    print("x3 =", h5.get_array("x3"))
+    print("x4 =", h5.get_array("x4"))
+    print("x5 =", h5.get_sparse("x5"))
+
+
+
+
+
+
+
+
+x1 = 1
+x2 = hello world
+x3 = [1 2 3]
+x4 = [[0.37454012 0.9507143  0.7319939 ]
+ [0.5986585  0.15601864 0.15599452]
+ [0.05808361 0.8661761  0.601115  ]]
+x5 =   (3, 2)   0.6803075671195984
+  (2, 3)        0.4504992663860321
+  (0, 4)        0.013264961540699005
+  (2, 0)        0.9422017335891724
+  (2, 4)        0.5632882118225098
+  (1, 2)        0.38541650772094727
+  (1, 1)        0.015966251492500305
+  (0, 3)        0.2308938205242157
+  (4, 4)        0.24102546274662018
+  (3, 1)        0.6832635402679443
+  (1, 3)        0.6099966764450073
+  (3, 0)        0.83319491147995
+
+
+
+
+
+

6.3. Basic collector

+

BasicCollector is the most fundamental collector, and performs the following steps:

+
    +
  1. Extracts all model data, such as objective function and constraint right-hand sides into numpy arrays, which can later be easily and efficiently accessed without rebuilding the model or invoking the solver;

  2. +
  3. Solves the linear relaxation of the problem and stores its optimal solution, basis status and sensitivity information, among other information;

  4. +
  5. Solves the original mixed-integer optimization problem to optimality and stores its optimal solution, along with solve statistics, such as number of explored nodes and wallclock time.

  6. +
+

Data extracted in Phases 1, 2 and 3 above are prefixed with static_, lp_ and mip_, respectively. The entire set of fields is shown in the table below.

+
+

Data fields

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Field

Type

Description

static_constr_lhs

(nconstrs, nvars)

Constraint left-hand sides, in sparse matrix format

static_constr_names

(nconstrs,)

Constraint names

static_constr_rhs

(nconstrs,)

Constraint right-hand sides

static_constr_sense

(nconstrs,)

Constraint senses ("<", ">" or "=")

static_obj_offset

float

Constant value added to the objective function

static_sense

str

"min" if minimization problem or "max" otherwise

static_var_lower_bounds

(nvars,)

Variable lower bounds

static_var_names

(nvars,)

Variable names

static_var_obj_coeffs

(nvars,)

Objective coefficients

static_var_types

(nvars,)

Types of the decision variables ("C", "B" and "I" for continuous, binary and integer, respectively)

static_var_upper_bounds

(nvars,)

Variable upper bounds

lp_constr_basis_status

(nconstr,)

Constraint basis status (0 for basic, -1 for non-basic)

lp_constr_dual_values

(nconstr,)

Constraint dual value (or shadow price)

lp_constr_sa_rhs_{up,down}

(nconstr,)

Sensitivity information for the constraint RHS

lp_constr_slacks

(nconstr,)

Constraint slack in the solution to the LP relaxation

lp_obj_value

float

Optimal value of the LP relaxation

lp_var_basis_status

(nvars,)

Variable basis status (0, -1, -2 or -3 for basic, non-basic at lower bound, non-basic at upper bound, and superbasic, respectively)

lp_var_reduced_costs

(nvars,)

Variable reduced costs

lp_var_sa_{obj,ub,lb}_{up,down}

(nvars,)

Sensitivity information for the variable objective coefficient, lower and upper bound.

lp_var_values

(nvars,)

Optimal solution to the LP relaxation

lp_wallclock_time

float

Time taken to solve the LP relaxation (in seconds)

mip_constr_slacks

(nconstrs,)

Constraint slacks in the best MIP solution

mip_gap

float

Relative MIP optimality gap

mip_node_count

float

Number of explored branch-and-bound nodes

mip_obj_bound

float

Dual bound

mip_obj_value

float

Value of the best MIP solution

mip_var_values

(nvars,)

Best MIP solution

mip_wallclock_time

float

Time taken to solve the MIP (in seconds)

+
+
+

Example

+

The example below shows how to generate a few random instances of the traveling salesman problem, store their problem data, run the collector, and print some of the training data to the screen.

+
+
[2]:
+
+
+
import random
+import numpy as np
+from scipy.stats import uniform, randint
+from glob import glob
+
+from miplearn.problems.tsp import (
+    TravelingSalesmanGenerator,
+    build_tsp_model_gurobipy,
+)
+from miplearn.io import write_pkl_gz
+from miplearn.h5 import H5File
+from miplearn.collectors.basic import BasicCollector
+
+# Set random seed to make example reproducible.
+random.seed(42)
+np.random.seed(42)
+
+# Generate a few instances of the traveling salesman problem.
+data = TravelingSalesmanGenerator(
+    n=randint(low=10, high=11),
+    x=uniform(loc=0.0, scale=1000.0),
+    y=uniform(loc=0.0, scale=1000.0),
+    gamma=uniform(loc=0.90, scale=0.20),
+    fix_cities=True,
+    round=True,
+).generate(10)
+
+# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...
+write_pkl_gz(data, "data/tsp")
+
+# Solve all instances and collect basic solution information.
+# Process at most four instances in parallel.
+bc = BasicCollector()
+bc.collect(glob("data/tsp/*.pkl.gz"), build_tsp_model_gurobipy, n_jobs=4)
+
+# Read and print some training data for the first instance.
+with H5File("data/tsp/00000.h5", "r") as h5:
+    print("lp_obj_value = ", h5.get_scalar("lp_obj_value"))
+    print("mip_obj_value = ", h5.get_scalar("mip_obj_value"))
+
+
+
+
+
+
+
+
+lp_obj_value =  2909.0
+mip_obj_value =  2921.0
+
+
+
+
[ ]:
+
+
+

+
+
+
+
+
+
+ + +
+ + + + +
+
+
+
+

+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/guide/features.ipynb b/0.4/guide/features.ipynb new file mode 100644 index 00000000..495e8eaf --- /dev/null +++ b/0.4/guide/features.ipynb @@ -0,0 +1,334 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "cdc6ebe9-d1d4-4de1-9b5a-4fc8ef57b11b", + "metadata": {}, + "source": [ + "# Feature Extractors\n", + "\n", + "In the previous page, we introduced *training data collectors*, which solve the optimization problem and collect raw training data, such as the optimal solution. In this page, we introduce **feature extractors**, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model." + ] + }, + { + "cell_type": "markdown", + "id": "b4026de5", + "metadata": {}, + "source": [ + "\n", + "## Overview\n", + "\n", + "Feature extraction is an important step of the process of building a machine learning model because it helps to reduce the complexity of the data and convert it into a format that is more easily processed. Previous research has proposed converting absolute variable coefficients, for example, into relative values which are invariant to various transformations, such as problem scaling, making them more amenable to learning. Various other transformations have also been described.\n", + "\n", + "In the framework, we treat data collection and feature extraction as two separate steps to accelerate the model development cycle. Specifically, collectors are typically time-consuming, as they often need to solve the problem to optimality, and therefore focus on collecting and storing all data that may or may not be relevant, in its raw format. Feature extractors, on the other hand, focus entirely on filtering the data and improving its representation, and are therefore much faster to run. Experimenting with new data representations, therefore, can be done without resolving the instances.\n", + "\n", + "In MIPLearn, extractors implement the abstract class [FeatureExtractor][FeatureExtractor], which has methods that take as input an [H5File][H5File] and produce either: (i) instance features, which describe the entire instances; (ii) variable features, which describe a particular decision variables; or (iii) constraint features, which describe a particular constraint. The extractor is free to implement only a subset of these methods, if it is known that it will not be used with a machine learning component that requires the other types of features.\n", + "\n", + "[FeatureExtractor]: ../../api/collectors/#miplearn.features.fields.FeaturesExtractor\n", + "[H5File]: ../../api/helpers/#miplearn.h5.H5File" + ] + }, + { + "cell_type": "markdown", + "id": "b2d9736c", + "metadata": {}, + "source": [ + "\n", + "## H5FieldsExtractor\n", + "\n", + "[H5FieldsExtractor][H5FieldsExtractor], the most simple extractor in MIPLearn, simple extracts data that is already available in the HDF5 file, assembles it into a matrix and returns it as-is. The fields used to build instance, variable and constraint features are user-specified. The class also performs checks to ensure that the shapes of the returned matrices make sense." + ] + }, + { + "cell_type": "markdown", + "id": "e8184dff", + "metadata": {}, + "source": [ + "### Example\n", + "\n", + "The example below demonstrates the usage of H5FieldsExtractor in a randomly generated instance of the multi-dimensional knapsack problem." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "ed9a18c8", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "instance features (11,) \n", + " [-1531.24308771 -350. -692. -454.\n", + " -709. -605. -543. -321.\n", + " -674. -571. -341. ]\n", + "variable features (10, 4) \n", + " [[-1.53124309e+03 -3.50000000e+02 0.00000000e+00 9.43468018e+01]\n", + " [-1.53124309e+03 -6.92000000e+02 2.51703322e-01 0.00000000e+00]\n", + " [-1.53124309e+03 -4.54000000e+02 0.00000000e+00 8.25504150e+01]\n", + " [-1.53124309e+03 -7.09000000e+02 1.11373022e-01 0.00000000e+00]\n", + " [-1.53124309e+03 -6.05000000e+02 1.00000000e+00 -1.26055283e+02]\n", + " [-1.53124309e+03 -5.43000000e+02 0.00000000e+00 1.68693771e+02]\n", + " [-1.53124309e+03 -3.21000000e+02 1.07488781e-01 0.00000000e+00]\n", + " [-1.53124309e+03 -6.74000000e+02 8.82293701e-01 0.00000000e+00]\n", + " [-1.53124309e+03 -5.71000000e+02 0.00000000e+00 1.41129074e+02]\n", + " [-1.53124309e+03 -3.41000000e+02 1.28830120e-01 0.00000000e+00]]\n", + "constraint features (5, 3) \n", + " [[ 1.3100000e+03 -1.5978307e-01 0.0000000e+00]\n", + " [ 9.8800000e+02 -3.2881632e-01 0.0000000e+00]\n", + " [ 1.0040000e+03 -4.0601316e-01 0.0000000e+00]\n", + " [ 1.2690000e+03 -1.3659772e-01 0.0000000e+00]\n", + " [ 1.0070000e+03 -2.8800571e-01 0.0000000e+00]]\n" + ] + } + ], + "source": [ + "from glob import glob\n", + "from shutil import rmtree\n", + "\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "\n", + "from miplearn.collectors.basic import BasicCollector\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "from miplearn.h5 import H5File\n", + "from miplearn.io import write_pkl_gz\n", + "from miplearn.problems.multiknapsack import (\n", + " MultiKnapsackGenerator,\n", + " build_multiknapsack_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Generate some random multiknapsack instances\n", + "rmtree(\"data/multiknapsack/\", ignore_errors=True)\n", + "write_pkl_gz(\n", + " MultiKnapsackGenerator(\n", + " n=randint(low=10, high=11),\n", + " m=randint(low=5, high=6),\n", + " w=uniform(loc=0, scale=1000),\n", + " K=uniform(loc=100, scale=0),\n", + " u=uniform(loc=1, scale=0),\n", + " alpha=uniform(loc=0.25, scale=0),\n", + " w_jitter=uniform(loc=0.95, scale=0.1),\n", + " p_jitter=uniform(loc=0.75, scale=0.5),\n", + " fix_w=True,\n", + " ).generate(10),\n", + " \"data/multiknapsack\",\n", + ")\n", + "\n", + "# Run the basic collector\n", + "BasicCollector().collect(\n", + " glob(\"data/multiknapsack/*\"),\n", + " build_multiknapsack_model_gurobipy,\n", + " n_jobs=4,\n", + ")\n", + "\n", + "ext = H5FieldsExtractor(\n", + " # Use as instance features the value of the LP relaxation and the\n", + " # vector of objective coefficients.\n", + " instance_fields=[\n", + " \"lp_obj_value\",\n", + " \"static_var_obj_coeffs\",\n", + " ],\n", + " # For each variable, use as features the optimal value of the LP\n", + " # relaxation, the variable objective coefficient, the variable's\n", + " # value its reduced cost.\n", + " var_fields=[\n", + " \"lp_obj_value\",\n", + " \"static_var_obj_coeffs\",\n", + " \"lp_var_values\",\n", + " \"lp_var_reduced_costs\",\n", + " ],\n", + " # For each constraint, use as features the RHS, dual value and slack.\n", + " constr_fields=[\n", + " \"static_constr_rhs\",\n", + " 
\"lp_constr_dual_values\",\n", + " \"lp_constr_slacks\",\n", + " ],\n", + ")\n", + "\n", + "with H5File(\"data/multiknapsack/00000.h5\") as h5:\n", + " # Extract and print instance features\n", + " x1 = ext.get_instance_features(h5)\n", + " print(\"instance features\", x1.shape, \"\\n\", x1)\n", + "\n", + " # Extract and print variable features\n", + " x2 = ext.get_var_features(h5)\n", + " print(\"variable features\", x2.shape, \"\\n\", x2)\n", + "\n", + " # Extract and print constraint features\n", + " x3 = ext.get_constr_features(h5)\n", + " print(\"constraint features\", x3.shape, \"\\n\", x3)" + ] + }, + { + "cell_type": "markdown", + "id": "2da2e74e", + "metadata": {}, + "source": [ + "\n", + "[H5FieldsExtractor]: ../../api/collectors/#miplearn.features.fields.H5FieldsExtractor" + ] + }, + { + "cell_type": "markdown", + "id": "d879c0d3", + "metadata": {}, + "source": [ + "
\n", + "Warning\n", + "\n", + "You should ensure that the number of features remains the same for all relevant HDF5 files. In the previous example, to illustrate this issue, we used variable objective coefficients as instance features. While this is allowed, note that this requires all problem instances to have the same number of variables; otherwise the number of features would vary from instance to instance and MIPLearn would be unable to concatenate the matrices.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "cd0ba071", + "metadata": {}, + "source": [ + "## AlvLouWeh2017Extractor\n", + "\n", + "Alvarez, Louveaux and Wehenkel (2017) proposed a set features to describe a particular decision variable in a given node of the branch-and-bound tree, and applied it to the problem of mimicking strong branching decisions. The class [AlvLouWeh2017Extractor][] implements a subset of these features (40 out of 64), which are available outside of the branch-and-bound tree. Some features are derived from the static defintion of the problem (i.e. from objective function and constraint data), while some features are derived from the solution to the LP relaxation. The features have been designed to be: (i) independent of the size of the problem; (ii) invariant with respect to irrelevant problem transformations, such as row and column permutation; and (iii) independent of the scale of the problem. We refer to the paper for a more complete description.\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "a1bc38fe", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "x1 (10, 40) \n", + " [[-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 6.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 6.00e-01 1.00e+00 1.75e+01 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 1.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 7.00e-01 1.00e+00 5.10e+00 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 3.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 9.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 5.00e-01 1.00e+00 1.30e+01 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 2.00e-01 1.00e+00 9.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 8.00e-01 1.00e+00 3.40e+00 1.00e+00 2.00e-01\n", + " 1.00e+00 1.00e-01 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 7.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 6.00e-01 1.00e+00 3.80e+00 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 8.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 7.00e-01 1.00e+00 3.30e+00 1.00e+00 
2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 3.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 1.00e+00 1.00e+00 5.70e+00 1.00e+00 1.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 6.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 8.00e-01 1.00e+00 6.80e+00 1.00e+00 2.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 4.00e-01 1.00e+00 6.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 8.00e-01 1.00e+00 1.40e+00 1.00e+00 1.00e-01\n", + " 1.00e+00 1.00e-01 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n", + " [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 5.00e-01\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 1.00e+00 5.00e-01 1.00e+00 7.60e+00 1.00e+00 1.00e-01\n", + " 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n", + " 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]]\n" + ] + } + ], + "source": [ + "from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n", + "from miplearn.h5 import H5File\n", + "\n", + "# Build the extractor\n", + "ext = AlvLouWeh2017Extractor()\n", + "\n", + "# Open previously-created multiknapsack training data\n", + "with H5File(\"data/multiknapsack/00000.h5\") as h5:\n", + " # Extract and print variable features\n", + " x1 = ext.get_var_features(h5)\n", + " print(\"x1\", x1.shape, \"\\n\", x1.round(1))" + ] + }, + { + "cell_type": "markdown", + "id": "286c9927", + "metadata": {}, + "source": [ + "
\n", + "References\n", + "\n", + "* **Alvarez, Alejandro Marcos.** *Computational and theoretical synergies between linear optimization and supervised machine learning.* (2016). University of Liège.\n", + "* **Alvarez, Alejandro Marcos, Quentin Louveaux, and Louis Wehenkel.** *A machine learning-based approximation of strong branching.* INFORMS Journal on Computing 29.1 (2017): 185-195.\n", + "\n", + "
" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/guide/features/index.html b/0.4/guide/features/index.html new file mode 100644 index 00000000..34e5422e --- /dev/null +++ b/0.4/guide/features/index.html @@ -0,0 +1,538 @@ + + + + + + + + 7. Feature Extractors — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + + + +
+ + + +
+
+
+
+ +
+ +
+

7. Feature Extractors

+

In the previous page, we introduced training data collectors, which solve the optimization problem and collect raw training data, such as the optimal solution. In this page, we introduce feature extractors, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model.

+
+

7.1. Overview

+

Feature extraction is an important step of the process of building a machine learning model because it helps to reduce the complexity of the data and convert it into a format that is more easily processed. Previous research has proposed converting absolute variable coefficients, for example, into relative values which are invariant to various transformations, such as problem scaling, making them more amenable to learning. Various other transformations have also been described.
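As a small illustration of this idea (not tied to any particular MIPLearn extractor), dividing the objective coefficients by their largest absolute value produces relative coefficients that are unaffected by rescaling the objective:

import numpy as np

# Hypothetical objective coefficients for one instance.
obj = np.array([350.0, 692.0, 454.0])

# Relative coefficients are invariant to scaling the objective function.
rel = obj / np.abs(obj).max()
assert np.allclose(rel, (1000.0 * obj) / np.abs(1000.0 * obj).max())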

+

In the framework, we treat data collection and feature extraction as two separate steps to accelerate the model development cycle. Specifically, collectors are typically time-consuming, as they often need to solve the problem to optimality, and therefore focus on collecting and storing all data that may or may not be relevant, in its raw format. Feature extractors, on the other hand, focus entirely on filtering the data and improving its representation, and are therefore much faster to run. Experimenting with new data representations, therefore, can be done without re-solving the instances.

+

In MIPLearn, extractors implement the abstract class FeatureExtractor, which has methods that take as input an H5File and produce either: (i) instance features, which describe the entire instance; (ii) variable features, which describe a particular decision variable; or (iii) constraint features, which describe a particular constraint. The extractor is free to implement only a subset of these methods if it is known that it will not be used with a machine learning component that requires the other types of features.
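To make this interface concrete, the sketch below outlines a minimal custom extractor. It is only an illustration: a real implementation would subclass FeatureExtractor (here we simply provide the same three methods), the field names come from the HDF5 table on the previous page, and the get_array accessor is assumed.

import numpy as np
from miplearn.h5 import H5File

# Illustrative custom extractor. A real implementation would subclass the
# abstract FeatureExtractor class; the get_array accessor is assumed here.
class ObjectiveStatsExtractor:
    def get_instance_features(self, h5: H5File) -> np.ndarray:
        # Describe the whole instance by statistics of its objective coefficients.
        obj = h5.get_array("static_var_obj_coeffs")
        return np.array([obj.mean(), obj.std(), obj.min(), obj.max()])

    def get_var_features(self, h5: H5File) -> np.ndarray:
        # Describe each variable by its objective coefficient and its value
        # in the solution to the LP relaxation.
        obj = h5.get_array("static_var_obj_coeffs")
        lp_values = h5.get_array("lp_var_values")
        return np.vstack([obj, lp_values]).T

    def get_constr_features(self, h5: H5File) -> np.ndarray:
        # Describe each constraint by its right-hand side and its LP slack.
        rhs = h5.get_array("static_constr_rhs")
        slacks = h5.get_array("lp_constr_slacks")
        return np.vstack([rhs, slacks]).T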

+
+
+

7.2. H5FieldsExtractor

+

H5FieldsExtractor, the simplest extractor in MIPLearn, simply extracts data that is already available in the HDF5 file, assembles it into a matrix, and returns it as-is. The fields used to build instance, variable and constraint features are user-specified. The class also performs checks to ensure that the shapes of the returned matrices make sense.

+
+

Example

+

The example below demonstrates the usage of H5FieldsExtractor in a randomly generated instance of the multi-dimensional knapsack problem.

+
+
[1]:
+
+
+
from glob import glob
+from shutil import rmtree
+
+import numpy as np
+from scipy.stats import uniform, randint
+
+from miplearn.collectors.basic import BasicCollector
+from miplearn.extractors.fields import H5FieldsExtractor
+from miplearn.h5 import H5File
+from miplearn.io import write_pkl_gz
+from miplearn.problems.multiknapsack import (
+    MultiKnapsackGenerator,
+    build_multiknapsack_model_gurobipy,
+)
+
+# Set random seed to make example reproducible
+np.random.seed(42)
+
+# Generate some random multiknapsack instances
+rmtree("data/multiknapsack/", ignore_errors=True)
+write_pkl_gz(
+    MultiKnapsackGenerator(
+        n=randint(low=10, high=11),
+        m=randint(low=5, high=6),
+        w=uniform(loc=0, scale=1000),
+        K=uniform(loc=100, scale=0),
+        u=uniform(loc=1, scale=0),
+        alpha=uniform(loc=0.25, scale=0),
+        w_jitter=uniform(loc=0.95, scale=0.1),
+        p_jitter=uniform(loc=0.75, scale=0.5),
+        fix_w=True,
+    ).generate(10),
+    "data/multiknapsack",
+)
+
+# Run the basic collector
+BasicCollector().collect(
+    glob("data/multiknapsack/*"),
+    build_multiknapsack_model_gurobipy,
+    n_jobs=4,
+)
+
+ext = H5FieldsExtractor(
+    # Use as instance features the value of the LP relaxation and the
+    # vector of objective coefficients.
+    instance_fields=[
+        "lp_obj_value",
+        "static_var_obj_coeffs",
+    ],
+    # For each variable, use as features the optimal value of the LP
+    # relaxation, the variable objective coefficient, the variable's
+    # value in the LP relaxation, and its reduced cost.
+    var_fields=[
+        "lp_obj_value",
+        "static_var_obj_coeffs",
+        "lp_var_values",
+        "lp_var_reduced_costs",
+    ],
+    # For each constraint, use as features the RHS, dual value and slack.
+    constr_fields=[
+        "static_constr_rhs",
+        "lp_constr_dual_values",
+        "lp_constr_slacks",
+    ],
+)
+
+with H5File("data/multiknapsack/00000.h5") as h5:
+    # Extract and print instance features
+    x1 = ext.get_instance_features(h5)
+    print("instance features", x1.shape, "\n", x1)
+
+    # Extract and print variable features
+    x2 = ext.get_var_features(h5)
+    print("variable features", x2.shape, "\n", x2)
+
+    # Extract and print constraint features
+    x3 = ext.get_constr_features(h5)
+    print("constraint features", x3.shape, "\n", x3)
+
+
+
+
+
+
+
+
+instance features (11,)
+ [-1531.24308771  -350.          -692.          -454.
+  -709.          -605.          -543.          -321.
+  -674.          -571.          -341.        ]
+variable features (10, 4)
+ [[-1.53124309e+03 -3.50000000e+02  0.00000000e+00  9.43468018e+01]
+ [-1.53124309e+03 -6.92000000e+02  2.51703322e-01  0.00000000e+00]
+ [-1.53124309e+03 -4.54000000e+02  0.00000000e+00  8.25504150e+01]
+ [-1.53124309e+03 -7.09000000e+02  1.11373022e-01  0.00000000e+00]
+ [-1.53124309e+03 -6.05000000e+02  1.00000000e+00 -1.26055283e+02]
+ [-1.53124309e+03 -5.43000000e+02  0.00000000e+00  1.68693771e+02]
+ [-1.53124309e+03 -3.21000000e+02  1.07488781e-01  0.00000000e+00]
+ [-1.53124309e+03 -6.74000000e+02  8.82293701e-01  0.00000000e+00]
+ [-1.53124309e+03 -5.71000000e+02  0.00000000e+00  1.41129074e+02]
+ [-1.53124309e+03 -3.41000000e+02  1.28830120e-01  0.00000000e+00]]
+constraint features (5, 3)
+ [[ 1.3100000e+03 -1.5978307e-01  0.0000000e+00]
+ [ 9.8800000e+02 -3.2881632e-01  0.0000000e+00]
+ [ 1.0040000e+03 -4.0601316e-01  0.0000000e+00]
+ [ 1.2690000e+03 -1.3659772e-01  0.0000000e+00]
+ [ 1.0070000e+03 -2.8800571e-01  0.0000000e+00]]
+
+
+
+

Warning

+

You should ensure that the number of features remains the same for all relevant HDF5 files. In the previous example, to illustrate this issue, we used variable objective coefficients as instance features. While this is allowed, note that this requires all problem instances to have the same number of variables; otherwise the number of features would vary from instance to instance and MIPLearn would be unable to concatenate the matrices.

+
+
+
+
+

7.3. AlvLouWeh2017Extractor

+

Alvarez, Louveaux and Wehenkel (2017) proposed a set of features to describe a particular decision variable in a given node of the branch-and-bound tree, and applied it to the problem of mimicking strong branching decisions. The class AlvLouWeh2017Extractor implements a subset of these features (40 out of 64), which are available outside of the branch-and-bound tree. Some features are derived from the static definition of the problem (i.e. from objective function and constraint data), while some features are derived from the solution to the LP relaxation. The features have been designed to be: (i) independent of the size of the problem; (ii) invariant with respect to irrelevant problem transformations, such as row and column permutation; and (iii) independent of the scale of the problem. We refer to the paper for a more complete description.

+
+

Example

+
+
[2]:
+
+
+
from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor
+from miplearn.h5 import H5File
+
+# Build the extractor
+ext = AlvLouWeh2017Extractor()
+
+# Open previously-created multiknapsack training data
+with H5File("data/multiknapsack/00000.h5") as h5:
+    # Extract and print variable features
+    x1 = ext.get_var_features(h5)
+    print("x1", x1.shape, "\n", x1.round(1))
+
+
+
+
+
+
+
+
+x1 (10, 40)
+ [[-1.00e+00  1.00e+20  1.00e-01  1.00e+00  0.00e+00  1.00e+00  6.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  6.00e-01  1.00e+00  1.75e+01  1.00e+00  2.00e-01
+   1.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00 -1.00e+00  0.00e+00  1.00e+20]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  1.00e-01  1.00e+00  1.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  7.00e-01  1.00e+00  5.10e+00  1.00e+00  2.00e-01
+   1.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   3.00e-01 -1.00e+00 -1.00e+00  0.00e+00  0.00e+00]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  0.00e+00  1.00e+00  9.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  5.00e-01  1.00e+00  1.30e+01  1.00e+00  2.00e-01
+   1.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00 -1.00e+00  0.00e+00  1.00e+20]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  2.00e-01  1.00e+00  9.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  8.00e-01  1.00e+00  3.40e+00  1.00e+00  2.00e-01
+   1.00e+00  1.00e-01  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   1.00e-01 -1.00e+00 -1.00e+00  0.00e+00  0.00e+00]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  1.00e-01  1.00e+00  7.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  6.00e-01  1.00e+00  3.80e+00  1.00e+00  2.00e-01
+   1.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00 -1.00e+00 -1.00e+00  0.00e+00  0.00e+00]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  1.00e-01  1.00e+00  8.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  7.00e-01  1.00e+00  3.30e+00  1.00e+00  2.00e-01
+   1.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00 -1.00e+00  0.00e+00  1.00e+20]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  0.00e+00  1.00e+00  3.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  1.00e+00  1.00e+00  5.70e+00  1.00e+00  1.00e-01
+   1.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   1.00e-01 -1.00e+00 -1.00e+00  0.00e+00  0.00e+00]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  1.00e-01  1.00e+00  6.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  8.00e-01  1.00e+00  6.80e+00  1.00e+00  2.00e-01
+   1.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   1.00e-01 -1.00e+00 -1.00e+00  0.00e+00  0.00e+00]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  4.00e-01  1.00e+00  6.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  8.00e-01  1.00e+00  1.40e+00  1.00e+00  1.00e-01
+   1.00e+00  1.00e-01  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00 -1.00e+00  0.00e+00  1.00e+20]
+ [-1.00e+00  1.00e+20  1.00e-01  1.00e+00  0.00e+00  1.00e+00  5.00e-01
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  1.00e+00  5.00e-01  1.00e+00  7.60e+00  1.00e+00  1.00e-01
+   1.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
+   1.00e-01 -1.00e+00 -1.00e+00  0.00e+00  0.00e+00]]
+
+
+
+

References

+
    +
  • Alvarez, Alejandro Marcos. Computational and theoretical synergies between linear optimization and supervised machine learning. (2016). University of Liège.

  • Alvarez, Alejandro Marcos, Quentin Louveaux, and Louis Wehenkel. A machine learning-based approximation of strong branching. INFORMS Journal on Computing 29.1 (2017): 185-195.

+
+
+
+
+ + +
+ + + + +
+
+
+
+

+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/guide/primal.ipynb b/0.4/guide/primal.ipynb new file mode 100644 index 00000000..26464ce6 --- /dev/null +++ b/0.4/guide/primal.ipynb @@ -0,0 +1,291 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "880cf4c7-d3c4-4b92-85c7-04a32264cdae", + "metadata": {}, + "source": [ + "# Primal Components\n", + "\n", + "In MIPLearn, a **primal component** is class that uses machine learning to predict a (potentially partial) assignment of values to the decision variables of the problem. Predicting high-quality primal solutions may be beneficial, as they allow the MIP solver to prune potentially large portions of the search space. Alternatively, if proof of optimality is not required, the MIP solver can be used to complete the partial solution generated by the machine learning model and and double-check its feasibility. MIPLearn allows both of these usage patterns.\n", + "\n", + "In this page, we describe the four primal components currently included in MIPLearn, which employ machine learning in different ways. Each component is highly configurable, and accepts an user-provided machine learning model, which it uses for all predictions. Each component can also be configured to provide the solution to the solver in multiple ways, depending on whether proof of optimality is required.\n", + "\n", + "## Primal component actions\n", + "\n", + "Before presenting the primal components themselves, we briefly discuss the three ways a solution may be provided to the solver. Each approach has benefits and limitations, which we also discuss in this section. All primal components can be configured to use any of the following approaches.\n", + "\n", + "The first approach is to provide the solution to the solver as a **warm start**. This is implemented by the class [SetWarmStart](SetWarmStart). The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.\n", + "\n", + "[SetWarmStart]: ../../api/components/#miplearn.components.primal.actions.SetWarmStart\n", + "\n", + "The second approach is to **fix the decision variables** to their predicted values, then solve a restricted optimization problem on the remaining variables. This approach is implemented by the class `FixVariables`. The main advantage is its potential speedup: if machine learning can accurately predict values for a significant portion of the decision variables, then the MIP solver can typically complete the solution in a small fraction of the time it would take to find the same solution from scratch. The main disadvantage of this approach is that it loses optimality guarantees; that is, the complete solution found by the MIP solver may no longer be globally optimal. 
Also, if the machine learning predictions are not sufficiently accurate, there might not even be a feasible assignment for the variables that were left free.\n", + "\n", + "Finally, the third approach, which tries to strike a balance between the two previous ones, is to **enforce proximity** to a given solution. This strategy is implemented by the class `EnforceProximity`. More precisely, given values $\\bar{x}_1,\\ldots,\\bar{x}_n$ for a subset of binary decision variables $x_1,\\ldots,x_n$, this approach adds the constraint\n", + "\n", + "$$\n", + "\\sum_{i : \\bar{x}_i=0} x_i + \\sum_{i : \\bar{x}_i=1} \\left(1 - x_i\\right) \\leq k,\n", + "$$\n", + "to the problem, where $k$ is a user-defined parameter, which indicates how many of the predicted variables are allowed to deviate from the machine learning suggestion. The main advantage of this approach, compared to fixing variables, is its tolerance to lower-quality machine learning predictions. Its main disadvantage is that it typically leads to smaller speedups, especially for larger values of $k$. This approach also loses optimality guarantees.\n", + "\n", + "## Memorizing primal component\n", + "\n", + "A simple machine learning strategy for the prediction of primal solutions is to memorize all distinct solutions seen during training, then try to predict, during inference time, which of those memorized solutions are most likely to be feasible and to provide a good objective value for the current instance. The most promising solutions may alternatively be combined into a single partial solution, which is then provided to the MIP solver. Both variations of this strategy are implemented by the `MemorizingPrimalComponent` class. Note that it is only applicable if the problem size, and in fact if the meaning of the decision variables, remains the same across problem instances.\n", + "\n", + "More precisely, let $I_1,\\ldots,I_n$ be the training instances, and let $\\bar{x}^1,\\ldots,\\bar{x}^n$ be their respective optimal solutions. Given a new instance $I_{n+1}$, `MemorizingPrimalComponent` expects a user-provided binary classifier that assigns (through the `predict_proba` method, following scikit-learn's conventions) a score $\\delta_i$ to each solution $\\bar{x}^i$, such that solutions with higher score are more likely to be good solutions for $I_{n+1}$. The features provided to the classifier are the instance features computed by an user-provided extractor. Given these scores, the component then performs one of the following to actions, as decided by the user:\n", + "\n", + "1. Selects the top $k$ solutions with the highest scores and provides them to the solver; this is implemented by `SelectTopSolutions`, and it is typically used with the `SetWarmStart` action.\n", + "\n", + "2. Merges the top $k$ solutions into a single partial solution, then provides it to the solver. This is implemented by `MergeTopSolutions`. More precisely, suppose that the machine learning regressor ordered the solutions in the sequence $\\bar{x}^{i_1},\\ldots,\\bar{x}^{i_n}$, with the most promising solutions appearing first, and with ties being broken arbitrarily. The component starts by keeping only the $k$ most promising solutions $\\bar{x}^{i_1},\\ldots,\\bar{x}^{i_k}$. 
Then it computes, for each binary decision variable $x_l$, its average assigned value $\\tilde{x}_l$:\n", + "$$\n", + " \\tilde{x}_l = \\frac{1}{k} \\sum_{j=1}^k \\bar{x}^{i_j}_l.\n", + "$$\n", + " Finally, the component constructs a merged solution $y$, defined as:\n", + "$$\n", + " y_j = \\begin{cases}\n", + " 0 & \\text{ if } \\tilde{x}_l \\le \\theta_0 \\\\\n", + " 1 & \\text{ if } \\tilde{x}_l \\ge \\theta_1 \\\\\n", + " \\square & \\text{otherwise,}\n", + " \\end{cases}\n", + "$$\n", + " where $\\theta_0$ and $\\theta_1$ are user-specified parameters, and where $\\square$ indicates that the variable is left undefined. The solution $y$ is then provided by the solver using any of the three approaches defined in the previous section.\n", + "\n", + "The above specification of `MemorizingPrimalComponent` is meant to be as general as possible. Simpler strategies can be implemented by configuring this component in specific ways. For example, a simpler approach employed in the literature is to collect all optimal solutions, then provide the entire list of solutions to the solver as warm starts, without any filtering or post-processing. This strategy can be implemented with `MemorizingPrimalComponent` by using a model that returns a constant value for all solutions (e.g. [scikit-learn's DummyClassifier][DummyClassifier]), then selecting the top $n$ (instead of $k$) solutions. See example below. Another simple approach is taking the solution to the most similar instance, and using it, by itself, as a warm start. This can be implemented by using a model that computes distances between the current instance and the training ones (e.g. [scikit-learn's KNeighborsClassifier][KNeighborsClassifier]), then select the solution to the nearest one. See also example below. 
More complex strategies, of course, can also be configured.\n", + "\n", + "[DummyClassifier]: https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html\n", + "[KNeighborsClassifier]: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html\n", + "\n", + "### Examples" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "253adbf4", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [ + "from sklearn.dummy import DummyClassifier\n", + "from sklearn.neighbors import KNeighborsClassifier\n", + "\n", + "from miplearn.components.primal.actions import (\n", + " SetWarmStart,\n", + " FixVariables,\n", + " EnforceProximity,\n", + ")\n", + "from miplearn.components.primal.mem import (\n", + " MemorizingPrimalComponent,\n", + " SelectTopSolutions,\n", + " MergeTopSolutions,\n", + ")\n", + "from miplearn.extractors.dummy import DummyExtractor\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "\n", + "# Configures a memorizing primal component that collects\n", + "# all distinct solutions seen during training and provides\n", + "# them to the solver without any filtering or post-processing.\n", + "comp1 = MemorizingPrimalComponent(\n", + " clf=DummyClassifier(),\n", + " extractor=DummyExtractor(),\n", + " constructor=SelectTopSolutions(1_000_000),\n", + " action=SetWarmStart(),\n", + ")\n", + "\n", + "# Configures a memorizing primal component that finds the\n", + "# training instance with the closest objective function, then\n", + "# fixes the decision variables to the values they assumed\n", + "# at the optimal solution for that instance.\n", + "comp2 = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=1),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_var_obj_coeffs\"],\n", + " ),\n", + " constructor=SelectTopSolutions(1),\n", + " action=FixVariables(),\n", + ")\n", + "\n", + "# Configures a memorizing primal component that finds the distinct\n", + "# solutions to the 10 most similar training problem instances,\n", + "# selects the 3 solutions that were most often optimal to these\n", + "# training instances, combines them into a single partial solution,\n", + "# then enforces proximity, allowing at most 3 variables to deviate\n", + "# from the machine learning suggestion.\n", + "comp3 = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=10),\n", + " extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n", + " constructor=MergeTopSolutions(k=3, thresholds=[0.25, 0.75]),\n", + " action=EnforceProximity(3),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "f194a793", + "metadata": {}, + "source": [ + "## Independent vars primal component\n", + "\n", + "Instead of memorizing previously-seen primal solutions, it is also natural to use machine learning models to directly predict the values of the decision variables, constructing a solution from scratch. This approach has the benefit of potentially constructing novel high-quality solutions, never observed in the training data. Two variations of this strategy are supported by MIPLearn: (i) predicting the values of the decision variables independently, using multiple ML models; or (ii) predicting the values jointly, with a single model. 
We describe the first variation in this section, and the second variation in the next section.\n", + "\n", + "Let $I_1,\\ldots,I_n$ be the training instances, and let $\\bar{x}^1,\\ldots,\\bar{x}^n$ be their respective optimal solutions. For each binary decision variable $x_j$, the component `IndependentVarsPrimalComponent` creates a copy of a user-provided binary classifier and trains it to predict the optimal value of $x_j$, given $\\bar{x}^1_j,\\ldots,\\bar{x}^n_j$ as training labels. The features provided to the model are the variable features computed by an user-provided extractor. During inference time, the component uses these $n$ binary classifiers to construct a solution and provides it to the solver using one of the available actions.\n", + "\n", + "Three issues often arise in practice when using this approach:\n", + "\n", + " 1. For certain binary variables $x_j$, it is frequently the case that its optimal value is either always zero or always one in the training dataset, which poses problems to some standard scikit-learn classifiers, since they do not expect a single class. The wrapper `SingleClassFix` can be used to fix this issue (see example below).\n", + "2. It is also frequently the case that machine learning classifier can only reliably predict the values of some variables with high accuracy, not all of them. In this situation, instead of computing a complete primal solution, it may be more beneficial to construct a partial solution containing values only for the variables for which the ML made a high-confidence prediction. The meta-classifier `MinProbabilityClassifier` can be used for this purpose. It asks the base classifier for the probability of the value being zero or one (using the `predict_proba` method) and erases from the primal solution all values whose probabilities are below a given threshold.\n", + "3. To make multiple copies of the provided ML classifier, MIPLearn uses the standard `sklearn.base.clone` method, which may not be suitable for classifiers from other frameworks. To handle this, it is possible to override the clone function using the `clone_fn` constructor argument.\n", + "\n", + "### Examples" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "3fc0b5d1", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [ + "from sklearn.linear_model import LogisticRegression\n", + "from miplearn.classifiers.minprob import MinProbabilityClassifier\n", + "from miplearn.classifiers.singleclass import SingleClassFix\n", + "from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n", + "from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "\n", + "# Configures a primal component that independently predicts the value of each\n", + "# binary variable using logistic regression and provides it to the solver as\n", + "# warm start. 
Erases predictions with probability less than 99%; applies\n", + "# single-class fix; and uses AlvLouWeh2017 features.\n", + "comp = IndependentVarsPrimalComponent(\n", + " base_clf=SingleClassFix(\n", + " MinProbabilityClassifier(\n", + " base_clf=LogisticRegression(),\n", + " thresholds=[0.99, 0.99],\n", + " ),\n", + " ),\n", + " extractor=AlvLouWeh2017Extractor(),\n", + " action=SetWarmStart(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "45107a0c", + "metadata": {}, + "source": [ + "## Joint vars primal component\n", + "In the previous subsection, we used multiple machine learning models to independently predict the values of the binary decision variables. When these values are correlated, an alternative approach is to jointly predict the values of all binary variables using a single machine learning model. This strategy is implemented by `JointVarsPrimalComponent`. Compared to the previous ones, this component is much more straightforwad. It simply extracts instance features, using the user-provided feature extractor, then directly trains the user-provided binary classifier (using the `fit` method), without making any copies. The trained classifier is then used to predict entire solutions (using the `predict` method), which are given to the solver using one of the previously discussed methods. In the example below, we illustrate the usage of this component with a simple feed-forward neural network.\n", + "\n", + "`JointVarsPrimalComponent` can also be used to implement strategies that use multiple machine learning models, but not indepedently. For example, a common strategy in multioutput prediction is building a *classifier chain*. In this approach, the first decision variable is predicted using the instance features alone; but the $n$-th decision variable is predicted using the instance features plus the predicted values of the $n-1$ previous variables. 
This can be easily implemented using scikit-learn's `ClassifierChain` estimator, as shown in the example below.\n", + "\n", + "### Examples" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "cf9b52dd", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [ + "from sklearn.multioutput import ClassifierChain\n", + "from sklearn.neural_network import MLPClassifier\n", + "from miplearn.components.primal.joint import JointVarsPrimalComponent\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "\n", + "# Configures a primal component that uses a feedforward neural network\n", + "# to jointly predict the values of the binary variables, based on the\n", + "# objective cost function, and provides the solution to the solver as\n", + "# a warm start.\n", + "comp = JointVarsPrimalComponent(\n", + " clf=MLPClassifier(),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_var_obj_coeffs\"],\n", + " ),\n", + " action=SetWarmStart(),\n", + ")\n", + "\n", + "# Configures a primal component that uses a chain of logistic regression\n", + "# models to jointly predict the values of the binary variables, based on\n", + "# the objective function.\n", + "comp = JointVarsPrimalComponent(\n", + " clf=ClassifierChain(SingleClassFix(LogisticRegression())),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_var_obj_coeffs\"],\n", + " ),\n", + " action=SetWarmStart(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "dddf7be4", + "metadata": {}, + "source": [ + "## Expert primal component\n", + "\n", + "Before spending time and effort choosing a machine learning strategy and tweaking its parameters, it is usually a good idea to evaluate what would be the performance impact of the model if its predictions were 100% accurate. This is especially important for the prediction of warm starts, since they are not always very beneficial. To simplify this task, MIPLearn provides `ExpertPrimalComponent`, a component which simply loads the optimal solution from the HDF5 file, assuming that it has already been computed, then directly provides it to the solver using one of the available methods. 
This component is useful in benchmarks, to evaluate how close to the best theoretical performance the machine learning components are.\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "9e2e81b9", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [], + "source": [ + "from miplearn.components.primal.expert import ExpertPrimalComponent\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "\n", + "# Configures an expert primal component, which reads a pre-computed\n", + "# optimal solution from the HDF5 file and provides it to the solver\n", + "# as warm start.\n", + "comp = ExpertPrimalComponent(action=SetWarmStart())" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/guide/primal/index.html b/0.4/guide/primal/index.html new file mode 100644 index 00000000..bfe31b78 --- /dev/null +++ b/0.4/guide/primal/index.html @@ -0,0 +1,541 @@ + + + + + + + + 8. Primal Components — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + + + +
+ + + +
+
+
+
+ +
+ +
+

8. Primal Components

+

In MIPLearn, a primal component is a class that uses machine learning to predict a (potentially partial) assignment of values to the decision variables of the problem. Predicting high-quality primal solutions may be beneficial, as they allow the MIP solver to prune potentially large portions of the search space. Alternatively, if proof of optimality is not required, the MIP solver can be used to complete the partial solution generated by the machine learning model and double-check its feasibility. MIPLearn allows both of these usage patterns.

+

In this page, we describe the four primal components currently included in MIPLearn, which employ machine learning in different ways. Each component is highly configurable, and accepts a user-provided machine learning model, which it uses for all predictions. Each component can also be configured to provide the solution to the solver in multiple ways, depending on whether proof of optimality is required.

+
+

8.1. Primal component actions

+

Before presenting the primal components themselves, we briefly discuss the three ways a solution may be provided to the solver. Each approach has benefits and limitations, which we also discuss in this section. All primal components can be configured to use any of the following approaches.

+

The first approach is to provide the solution to the solver as a warm start. This is implemented by the class SetWarmStart. The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.

+

The second approach is to fix the decision variables to their predicted values, then solve a restricted optimization problem on the remaining variables. This approach is implemented by the class FixVariables. The main advantage is its potential speedup: if machine learning can accurately predict values for a significant portion of the decision variables, then the MIP solver can typically complete the solution in a small fraction of the time it would take to find the same solution from scratch. The main disadvantage of this approach is that it loses optimality guarantees; that is, the complete solution found by the MIP solver may no longer be globally optimal. Also, if the machine learning predictions are not sufficiently accurate, there might not even be a feasible assignment for the variables that were left free.

+

Finally, the third approach, which tries to strike a balance between the two previous ones, is to enforce proximity to a given solution. This strategy is implemented by the class EnforceProximity. More precisely, given values \(\bar{x}_1,\ldots,\bar{x}_n\) for a subset of binary decision variables \(x_1,\ldots,x_n\), this approach adds the constraint

+
+\[\sum_{i : \bar{x}_i=0} x_i + \sum_{i : \bar{x}_i=1} \left(1 - x_i\right) \leq k,\]
+

to the problem, where \(k\) is a user-defined parameter, which indicates how many of the predicted variables are allowed to deviate from the machine learning suggestion. The main advantage of this approach, compared to fixing variables, is its tolerance to lower-quality machine learning predictions. Its main disadvantage is that it typically leads to smaller speedups, especially for larger values of \(k\). This approach also loses optimality guarantees.
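For a concrete illustration, suppose the model predicts \(\bar{x} = (1, 0, 1)\) for three binary variables and \(k = 1\). The constraint above reads \((1 - x_1) + x_2 + (1 - x_3) \leq 1\), so the solver may overrule at most one of the three predictions.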

+
+
+

8.2. Memorizing primal component

+

A simple machine learning strategy for the prediction of primal solutions is to memorize all distinct solutions seen during training, then try to predict, during inference time, which of those memorized solutions are most likely to be feasible and to provide a good objective value for the current instance. The most promising solutions may alternatively be combined into a single partial solution, which is then provided to the MIP solver. Both variations of this strategy are implemented by the MemorizingPrimalComponent class. Note that it is only applicable if the problem size, and indeed the meaning of the decision variables, remains the same across problem instances.

+

More precisely, let \(I_1,\ldots,I_n\) be the training instances, and let \(\bar{x}^1,\ldots,\bar{x}^n\) be their respective optimal solutions. Given a new instance \(I_{n+1}\), MemorizingPrimalComponent expects a user-provided binary classifier that assigns (through the predict_proba method, following scikit-learn’s conventions) a score \(\delta_i\) to each solution \(\bar{x}^i\), such that solutions with higher score are more likely to be good solutions for \(I_{n+1}\). The features provided to the classifier are the instance features computed by a user-provided extractor. Given these scores, the component then performs one of the following two actions, as decided by the user:

+
    +
  1. Selects the top \(k\) solutions with the highest scores and provides them to the solver; this is implemented by SelectTopSolutions, and it is typically used with the SetWarmStart action.

  2. Merges the top \(k\) solutions into a single partial solution, then provides it to the solver. This is implemented by MergeTopSolutions. More precisely, suppose that the machine learning model ordered the solutions in the sequence \(\bar{x}^{i_1},\ldots,\bar{x}^{i_n}\), with the most promising solutions appearing first, and with ties being broken arbitrarily. The component starts by keeping only the \(k\) most promising solutions \(\bar{x}^{i_1},\ldots,\bar{x}^{i_k}\). Then it computes, for each binary decision variable \(x_l\), its average assigned value \(\tilde{x}_l\):

    +
    +\[\tilde{x}_l = \frac{1}{k} \sum_{j=1}^k \bar{x}^{i_j}_l.\]
    +

    Finally, the component constructs a merged solution \(y\), defined as:

    +
    +\[\begin{split}y_l = \begin{cases} 0 & \text{ if } \tilde{x}_l \le \theta_0 \\ 1 & \text{ if } \tilde{x}_l \ge \theta_1 \\ \square & \text{otherwise,} \end{cases}\end{split}\]
    +

    where \(\theta_0\) and \(\theta_1\) are user-specified parameters, and where \(\square\) indicates that the variable is left undefined. The solution \(y\) is then provided to the solver using any of the three approaches defined in the previous section. A small numeric illustration is given right after this list.

    +
+
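For example, with \(k = 3\) and thresholds \(\theta_0 = 0.25\) and \(\theta_1 = 0.75\) (the values used in the example below), a variable equal to one in all three selected solutions has \(\tilde{x}_l = 1 \ge \theta_1\) and is fixed to one, a variable equal to zero in all three has \(\tilde{x}_l = 0 \le \theta_0\) and is fixed to zero, and a variable equal to one in exactly one of them has \(\tilde{x}_l = 1/3\), which lies strictly between the two thresholds, so it is left undefined in the partial solution.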

The above specification of MemorizingPrimalComponent is meant to be as general as possible. Simpler strategies can be implemented by configuring this component in specific ways. For example, a simpler approach employed in the literature is to collect all optimal solutions, then provide the entire list of solutions to the solver as warm starts, without any filtering or post-processing. This strategy can be implemented with MemorizingPrimalComponent by using a model that returns a constant value for all solutions (e.g. scikit-learn’s DummyClassifier), then selecting the top \(n\) (instead of \(k\)) solutions. See example below. Another simple approach is taking the solution to the most similar instance, and using it, by itself, as a warm start. This can be implemented by using a model that computes distances between the current instance and the training ones (e.g. scikit-learn’s KNeighborsClassifier), then selecting the solution to the nearest one. See also example below. More complex strategies, of course, can also be configured.

+
+

Examples

+
+
[1]:
+
+
+
from sklearn.dummy import DummyClassifier
+from sklearn.neighbors import KNeighborsClassifier
+
+from miplearn.components.primal.actions import (
+    SetWarmStart,
+    FixVariables,
+    EnforceProximity,
+)
+from miplearn.components.primal.mem import (
+    MemorizingPrimalComponent,
+    SelectTopSolutions,
+    MergeTopSolutions,
+)
+from miplearn.extractors.dummy import DummyExtractor
+from miplearn.extractors.fields import H5FieldsExtractor
+
+# Configures a memorizing primal component that collects
+# all distinct solutions seen during training and provides
+# them to the solver without any filtering or post-processing.
+comp1 = MemorizingPrimalComponent(
+    clf=DummyClassifier(),
+    extractor=DummyExtractor(),
+    constructor=SelectTopSolutions(1_000_000),
+    action=SetWarmStart(),
+)
+
+# Configures a memorizing primal component that finds the
+# training instance with the closest objective function, then
+# fixes the decision variables to the values they assumed
+# at the optimal solution for that instance.
+comp2 = MemorizingPrimalComponent(
+    clf=KNeighborsClassifier(n_neighbors=1),
+    extractor=H5FieldsExtractor(
+        instance_fields=["static_var_obj_coeffs"],
+    ),
+    constructor=SelectTopSolutions(1),
+    action=FixVariables(),
+)
+
+# Configures a memorizing primal component that finds the distinct
+# solutions to the 10 most similar training problem instances,
+# selects the 3 solutions that were most often optimal to these
+# training instances, combines them into a single partial solution,
+# then enforces proximity, allowing at most 3 variables to deviate
+# from the machine learning suggestion.
+comp3 = MemorizingPrimalComponent(
+    clf=KNeighborsClassifier(n_neighbors=10),
+    extractor=H5FieldsExtractor(instance_fields=["static_var_obj_coeffs"]),
+    constructor=MergeTopSolutions(k=3, thresholds=[0.25, 0.75]),
+    action=EnforceProximity(3),
+)
+
+
+
+
+
+
+

8.3. Independent vars primal component

+

Instead of memorizing previously seen primal solutions, it is also natural to use machine learning models to directly predict the values of the decision variables, constructing a solution from scratch. This approach has the benefit of potentially constructing novel high-quality solutions, never observed in the training data. Two variations of this strategy are supported by MIPLearn: (i) predicting the values of the decision variables independently, using multiple ML models; or (ii) predicting the values jointly, with a single model. We describe the first variation in this section and the second variation in the next section.

+

Let \(I_1,\ldots,I_n\) be the training instances, and let \(\bar{x}^1,\ldots,\bar{x}^n\) be their respective optimal solutions. For each binary decision variable \(x_j\), the component IndependentVarsPrimalComponent creates a copy of a user-provided binary classifier and trains it to predict the optimal value of \(x_j\), using \(\bar{x}^1_j,\ldots,\bar{x}^n_j\) as training labels. The features provided to the model are the variable features computed by a user-provided extractor. At inference time, the component uses these per-variable classifiers (one for each binary decision variable) to construct a solution and provides it to the solver using one of the available actions.

+

Three issues often arise in practice when using this approach:

+
  1. For certain binary variables \(x_j\), it is frequently the case that the optimal value is either always zero or always one in the training dataset, which poses problems for some standard scikit-learn classifiers, since they do not expect training data containing a single class. The wrapper SingleClassFix can be used to fix this issue (see example below).

  2. It is also frequently the case that a machine learning classifier can only reliably predict the values of some variables with high accuracy, not all of them. In this situation, instead of computing a complete primal solution, it may be more beneficial to construct a partial solution containing values only for the variables for which the ML model made a high-confidence prediction. The meta-classifier MinProbabilityClassifier can be used for this purpose. It asks the base classifier for the probability of the value being zero or one (using the predict_proba method) and erases from the primal solution all values whose probabilities fall below a given threshold; see the sketch after this list.

  3. To make multiple copies of the provided ML classifier, MIPLearn uses the standard sklearn.base.clone function, which may not be suitable for classifiers from other frameworks. To handle this, it is possible to override the clone function using the clone_fn constructor argument.
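The filtering behavior described in item 2 can be illustrated with a small self-contained sketch. This is not the MinProbabilityClassifier implementation itself, only the underlying idea; the probabilities and thresholds below are made up for illustration:

import numpy as np

# Class probabilities returned by a hypothetical base classifier's
# predict_proba for three binary variables; columns are P(value=0), P(value=1).
proba = np.array([
    [0.995, 0.005],  # highly confident the variable should be zero
    [0.600, 0.400],  # low confidence: this prediction is erased
    [0.010, 0.990],  # highly confident the variable should be one
])
thresholds = [0.99, 0.99]  # separate thresholds for the values zero and one

# Keep a value only when its probability clears the threshold; None = erased.
partial = [
    0 if p0 >= thresholds[0] else 1 if p1 >= thresholds[1] else None
    for (p0, p1) in proba
]
print(partial)  # [0, None, 1]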
+

Examples

+
+
[2]:
+
+
+
from sklearn.linear_model import LogisticRegression
+from miplearn.classifiers.minprob import MinProbabilityClassifier
+from miplearn.classifiers.singleclass import SingleClassFix
+from miplearn.components.primal.indep import IndependentVarsPrimalComponent
+from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor
+from miplearn.components.primal.actions import SetWarmStart
+
+# Configures a primal component that independently predicts the value of each
+# binary variable using logistic regression and provides it to the solver as
+# warm start. Erases predictions with probability less than 99%; applies
+# single-class fix; and uses AlvLouWeh2017 features.
+comp = IndependentVarsPrimalComponent(
+    base_clf=SingleClassFix(
+        MinProbabilityClassifier(
+            base_clf=LogisticRegression(),
+            thresholds=[0.99, 0.99],
+        ),
+    ),
+    extractor=AlvLouWeh2017Extractor(),
+    action=SetWarmStart(),
+)
+
+
+
+
+
+
+

8.4. Joint vars primal component

+

In the previous subsection, we used multiple machine learning models to independently predict the values of the binary decision variables. When these values are correlated, an alternative approach is to jointly predict the values of all binary variables using a single machine learning model. This strategy is implemented by JointVarsPrimalComponent. Compared to the previous components, this one is much more straightforward. It simply extracts instance features, using the user-provided feature extractor, then directly trains the user-provided binary classifier (using its fit method), without making any copies. The trained classifier is then used to predict entire solutions (using its predict method), which are given to the solver using one of the previously discussed actions. In the example below, we illustrate the usage of this component with a simple feed-forward neural network.
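Before the full MIPLearn example, the following minimal sketch (plain scikit-learn, independent of MIPLearn; the feature vectors and solutions are made up for illustration) shows the joint-prediction mechanic: a single multi-label model maps instance features to the entire vector of binary decision variables at once:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Instance features (one row per training instance) and the corresponding
# optimal solutions (one row per instance, one column per binary variable).
X_train = np.array([[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]])
Y_train = np.array([[1, 0, 1], [1, 0, 1], [0, 1, 0], [0, 1, 0]])

# MLPClassifier natively supports multi-label targets, so a single model
# learns to predict all variables jointly.
clf = MLPClassifier(max_iter=2000, random_state=0).fit(X_train, Y_train)

# Predict an entire candidate solution for a new instance.
print(clf.predict(np.array([[0.95, 0.05]])))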

+

JointVarsPrimalComponent can also be used to implement strategies that use multiple machine learning models, but not independently. For example, a common strategy in multioutput prediction is to build a classifier chain. In this approach, the first decision variable is predicted using the instance features alone, but the \(j\)-th decision variable is predicted using the instance features plus the predicted values of the \(j-1\) previous variables. This can be easily implemented using scikit-learn’s ClassifierChain estimator, as shown in the example below.

+
+

Examples

+
+
[3]:
+
+
+
from sklearn.multioutput import ClassifierChain
+from sklearn.neural_network import MLPClassifier
+from miplearn.components.primal.joint import JointVarsPrimalComponent
+from miplearn.extractors.fields import H5FieldsExtractor
+from miplearn.components.primal.actions import SetWarmStart
+
+# Configures a primal component that uses a feedforward neural network
+# to jointly predict the values of the binary variables, based on the
+# objective cost function, and provides the solution to the solver as
+# a warm start.
+comp = JointVarsPrimalComponent(
+    clf=MLPClassifier(),
+    extractor=H5FieldsExtractor(
+        instance_fields=["static_var_obj_coeffs"],
+    ),
+    action=SetWarmStart(),
+)
+
+# Configures a primal component that uses a chain of logistic regression
+# models to jointly predict the values of the binary variables, based on
+# the objective function.
+comp = JointVarsPrimalComponent(
+    clf=ClassifierChain(SingleClassFix(LogisticRegression())),
+    extractor=H5FieldsExtractor(
+        instance_fields=["static_var_obj_coeffs"],
+    ),
+    action=SetWarmStart(),
+)
+
+
+
+
+
+
+

8.5. Expert primal component

+

Before spending time and effort choosing a machine learning strategy and tweaking its parameters, it is usually a good idea to evaluate the performance impact the model would have if its predictions were 100% accurate. This is especially important for the prediction of warm starts, since they are not always very beneficial. To simplify this task, MIPLearn provides ExpertPrimalComponent, a component which simply loads the optimal solution from the HDF5 file, assuming that it has already been computed, then directly provides it to the solver using one of the available actions. This component is useful in benchmarks, to evaluate how close the machine learning components come to the best theoretical performance.

+
+

Example

+
+
[4]:
+
+
+
from miplearn.components.primal.expert import ExpertPrimalComponent
+from miplearn.components.primal.actions import SetWarmStart
+
+# Configures an expert primal component, which reads a pre-computed
+# optimal solution from the HDF5 file and provides it to the solver
+# as warm start.
+comp = ExpertPrimalComponent(action=SetWarmStart())
+
+
+
+
+
+
+ + + + + + \ No newline at end of file diff --git a/0.4/guide/problems.ipynb b/0.4/guide/problems.ipynb new file mode 100644 index 00000000..acc35fb2 --- /dev/null +++ b/0.4/guide/problems.ipynb @@ -0,0 +1,1607 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "f89436b4-5bc5-4ae3-a20a-522a2cd65274", + "metadata": {}, + "source": [ + "# Benchmark Problems\n", + "\n", + "## Overview\n", + "\n", + "Benchmark sets such as [MIPLIB](https://miplib.zib.de/) or [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of magnitude more instances available for training; (ii) current machine learning methods typically provide best performance on sets of homogeneous instances, buch general-purpose benchmark sets contain relatively few examples of each problem type.\n", + "\n", + "To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. As of MIPLearn 0.3, nine problem generators are available, each customizable with user-provided probability distribution and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n", + "\n", + "In the following, we describe the problems included in the library, their MIP formulation and the generation algorithm." + ] + }, + { + "cell_type": "markdown", + "id": "bd99c51f", + "metadata": {}, + "source": [ + "
\n", + "Warning\n", + "\n", + "The random instance generators and formulations shown below are subject to change. If you use them in your research, for reproducibility, you should specify the MIPLearn version and all parameters.\n", + "
\n", + "\n", + "
\n", + "Note\n", + "\n", + "- To make the instances easier to process, all formulations are written as a minimization problem.\n", + "- Some problem formulations, such as the one for the *traveling salesman problem*, contain an exponential number of constraints, which are enforced through constraint generation. The MPS files for these problems contain only the constraints that were generated during a trial run, not the entire set of constraints. Resolving the MPS file, therefore, may not generate a feasible primal solution for the problem.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "830f3784-a3fc-4e2f-a484-e7808841ffe8", + "metadata": { + "tags": [] + }, + "source": [ + "## Bin Packing\n", + "\n", + "**Bin packing** is a combinatorial optimization problem that asks for the optimal way to pack a given set of items into a finite number of containers (or bins) of fixed capacity. More specifically, the problem is to assign indivisible items of different sizes to identical bins, while minimizing the number of bins used. The problem is NP-hard and has many practical applications, including logistics and warehouse management, where it is used to determine how to best store and transport goods using a limited amount of space." + ] + }, + { + "cell_type": "markdown", + "id": "af933298-92a9-4c5d-8d07-0d4918dedbb8", + "metadata": { + "tags": [] + }, + "source": [ + "### Formulation\n", + "\n", + "Let $n$ be the number of items, and $s_i$ the size of the $i$-th item. Also let $B$ be the size of the bins. For each bin $j$, let $y_j$ be a binary decision variable which equals one if the bin is used. For every item-bin pair $(i,j)$, let $x_{ij}$ be a binary decision variable which equals one if item $i$ is assigned to bin $j$. The bin packing problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "5e502345", + "metadata": {}, + "source": [ + "\n", + "$$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\;\n", + " & \\sum_{j=1}^n y_j \\\\\n", + "\\text{subject to} \\;\\;\\;\n", + " & \\sum_{i=1}^n s_i x_{ij} \\leq B y_j & \\forall j=1,\\ldots,n \\\\\n", + " & \\sum_{j=1}^n x_{ij} = 1 & \\forall i=1,\\ldots,n \\\\\n", + " & y_i \\in \\{0,1\\} & \\forall i=1,\\ldots,n \\\\\n", + " & x_{ij} \\in \\{0,1\\} & \\forall i,j=1,\\ldots,n \\\\\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "9cba2077", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "Random instances of the bin packing problem can be generated using the class [BinPackGenerator][BinPackGenerator].\n", + "\n", + "If `fix_items=False`, the class samples the user-provided probability distributions `n`, `sizes` and `capacity` to decide, respectively, the number of items, the sizes of the items and capacity of the bin. All values are sampled independently.\n", + "\n", + "If `fix_items=True`, the class creates a reference instance, using the method previously described, then generates additional instances by perturbing its item sizes and bin capacity. More specifically, the sizes of the items are set to $s_i \\gamma_i$, where $s_i$ is the size of the $i$-th item in the reference instance and $\\gamma_i$ is sampled from `sizes_jitter`. Similarly, the bin size is set to $B \\beta$, where $B$ is the reference bin size and $\\beta$ is sampled from `capacity_jitter`. The number of items remains the same across all generated instances.\n", + "\n", + "[BinPackGenerator]: ../../api/problems/#miplearn.problems.binpack.BinPackGenerator" + ] + }, + { + "cell_type": "markdown", + "id": "2bc62803", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "f14e560c-ef9f-4c48-8467-72d6acce5f9f", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.409419720Z", + "start_time": "2023-11-07T16:29:47.824353556Z" + }, + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "0 [ 8.47 26. 
19.52 14.11 3.65 3.65 1.4 21.76 14.82 16.96] 102.24\n", + "1 [ 8.69 22.78 17.81 14.83 4.12 3.67 1.46 22.05 13.66 18.08] 93.41\n", + "2 [ 8.55 25.9 20. 15.89 3.75 3.59 1.51 21.4 13.89 17.68] 90.69\n", + "3 [10.13 22.62 18.89 14.4 3.92 3.94 1.36 23.69 15.85 19.26] 107.9\n", + "4 [ 9.55 25.77 16.79 14.06 3.55 3.76 1.42 20.66 16.02 17.19] 95.62\n", + "5 [ 9.44 22.06 19.41 13.69 4.28 4.11 1.36 19.51 15.98 18.43] 104.58\n", + "6 [ 9.87 21.74 17.78 13.82 4.18 4. 1.4 19.76 14.46 17.08] 104.59\n", + "7 [ 9.62 25.61 18.2 13.83 4.07 4.1 1.47 22.83 15.01 17.78] 98.55\n", + "8 [ 8.47 21.9 16.58 15.37 3.76 3.91 1.57 20.57 14.76 18.61] 94.58\n", + "9 [ 8.57 22.77 17.06 16.25 4.14 4. 1.56 22.97 14.09 19.09] 100.79\n", + "\n", + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 20 rows, 110 columns and 210 nonzeros\n", + "Model fingerprint: 0x1ff9913f\n", + "Variable types: 0 continuous, 110 integer (110 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+02]\n", + " Objective range [1e+00, 1e+00]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective 5.0000000\n", + "Presolve time: 0.00s\n", + "Presolved: 20 rows, 110 columns, 210 nonzeros\n", + "Variable types: 0 continuous, 110 integer (110 binary)\n", + "\n", + "Root relaxation: objective 1.274844e+00, 38 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1.27484 0 4 5.00000 1.27484 74.5% - 0s\n", + "H 0 0 4.0000000 1.27484 68.1% - 0s\n", + "H 0 0 2.0000000 1.27484 36.3% - 0s\n", + " 0 0 1.27484 0 4 2.00000 1.27484 36.3% - 0s\n", + "\n", + "Explored 1 nodes (38 simplex iterations) in 0.03 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 3: 2 4 5 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 2.000000000000e+00, best bound 2.000000000000e+00, gap 0.0000%\n", + "\n", + "User-callback calls 143, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.binpack import BinPackGenerator, build_binpack_model_gurobipy\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances of the binpack problem with ten items\n", + "data = BinPackGenerator(\n", + " n=randint(low=10, high=11),\n", + " sizes=uniform(loc=0, scale=25),\n", + " capacity=uniform(loc=100, scale=0),\n", + " sizes_jitter=uniform(loc=0.9, scale=0.2),\n", + " capacity_jitter=uniform(loc=0.9, scale=0.2),\n", + " fix_items=True,\n", + ").generate(10)\n", + "\n", + "# Print sizes and capacities\n", + "for i in range(10):\n", + " print(i, data[i].sizes, data[i].capacity)\n", + "print()\n", + "\n", + "# Optimize first instance\n", + "model = build_binpack_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "9a3df608-4faf-444b-b5c2-18d3e90cbb5a", + "metadata": { + "tags": [] + }, + "source": [ + "## Multi-Dimensional Knapsack\n", + "\n", + "The **multi-dimensional 
knapsack problem** is a generalization of the classic knapsack problem, which involves selecting a subset of items to be placed in a knapsack such that the total value of the items is maximized without exceeding a maximum weight. In this generalization, items have multiple weights (representing multiple resources), and multiple weight constraints must be satisfied." + ] + }, + { + "cell_type": "markdown", + "id": "8d989002-d837-4ccf-a224-0504a6d66473", + "metadata": { + "tags": [] + }, + "source": [ + "### Formulation\n", + "\n", + "Let $n$ be the number of items and $m$ be the number of resources. For each item $j$ and resource $i$, let $p_j$ be the price of the item, let $w_{ij}$ be the amount of resource $j$ item $i$ consumes (i.e. the $j$-th weight of the item), and let $b_i$ be the total amount of resource $i$ available (or the size of the $j$-th knapsack). The formulation is given by:" + ] + }, + { + "cell_type": "markdown", + "id": "d0d3ea42", + "metadata": {}, + "source": [ + "\n", + "$$\n", + "\\begin{align*}\n", + " \\text{minimize}\\;\\;\\;\n", + " & - \\sum_{j=1}^n p_j x_j\n", + " \\\\\n", + " \\text{subject to}\\;\\;\\;\n", + " & \\sum_{j=1}^n w_{ij} x_j \\leq b_i\n", + " & \\forall i=1,\\ldots,m \\\\\n", + " & x_j \\in \\{0,1\\}\n", + " & \\forall j=1,\\ldots,n\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "81b5b085-cfa9-45ce-9682-3aeb9be96cba", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [MultiKnapsackGenerator][MultiKnapsackGenerator] can be used to generate random instances of this problem. The number of items $n$ and knapsacks $m$ are sampled from the user-provided probability distributions `n` and `m`. The weights $w_{ij}$ are sampled independently from the provided distribution `w`. The capacity of knapsack $i$ is set to\n", + "\n", + "[MultiKnapsackGenerator]: ../../api/problems/#miplearn.problems.multiknapsack.MultiKnapsackGenerator\n", + "\n", + "$$\n", + " b_i = \\alpha_i \\sum_{j=1}^n w_{ij}\n", + "$$\n", + "\n", + "where $\\alpha_i$, the tightness ratio, is sampled from the provided probability\n", + "distribution `alpha`. To make the instances more challenging, the costs of the items\n", + "are linearly correlated to their average weights. More specifically, the price of each\n", + "item $j$ is set to:\n", + "\n", + "$$\n", + " p_j = \\sum_{i=1}^m \\frac{w_{ij}}{m} + K u_j,\n", + "$$\n", + "\n", + "where $K$, the correlation coefficient, and $u_j$, the correlation multiplier, are sampled\n", + "from the provided probability distributions `K` and `u`.\n", + "\n", + "If `fix_w=True` is provided, then $w_{ij}$ are kept the same in all generated instances. This also implies that $n$ and $m$ are kept fixed. Although the prices and capacities are derived from $w_{ij}$, as long as `u` and `K` are not constants, the generated instances will still not be completely identical.\n", + "\n", + "\n", + "If a probability distribution `w_jitter` is provided, then item weights will be set to $w_{ij} \\gamma_{ij}$ where $\\gamma_{ij}$ is sampled from `w_jitter`. When combined with `fix_w=True`, this argument may be used to generate instances where the weight of each item is roughly the same, but not exactly identical, across all instances. The prices of the items and the capacities of the knapsacks will be calculated as above, but using these perturbed weights instead.\n", + "\n", + "By default, all generated prices, weights and capacities are rounded to the nearest integer number. 
If `round=False` is provided, this rounding will be disabled." + ] + }, + { + "cell_type": "markdown", + "id": "f92135b8-67e7-4ec5-aeff-2fc17ad5e46d", + "metadata": {}, + "source": [ + "
\n", + "References\n", + "\n", + "* **Freville, Arnaud, and Gérard Plateau.** *An efficient preprocessing procedure for the multidimensional 0–1 knapsack problem.* Discrete applied mathematics 49.1-3 (1994): 189-212.\n", + "* **Fréville, Arnaud.** *The multidimensional 0–1 knapsack problem: An overview.* European Journal of Operational Research 155.1 (2004): 1-21.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "f12a066f", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "1ce5f8fb-2769-4fbd-a40c-fd62b897690a", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.485068449Z", + "start_time": "2023-11-07T16:29:48.406139946Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "prices\n", + " [350. 692. 454. 709. 605. 543. 321. 674. 571. 341.]\n", + "weights\n", + " [[392. 977. 764. 622. 158. 163. 56. 840. 574. 696.]\n", + " [ 20. 948. 860. 209. 178. 184. 293. 541. 414. 305.]\n", + " [629. 135. 278. 378. 466. 803. 205. 492. 584. 45.]\n", + " [630. 173. 64. 907. 947. 794. 312. 99. 711. 439.]\n", + " [117. 506. 35. 915. 266. 662. 312. 516. 521. 178.]]\n", + "capacities\n", + " [1310. 988. 1004. 1269. 1007.]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 5 rows, 10 columns and 50 nonzeros\n", + "Model fingerprint: 0xaf3ac15e\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [2e+01, 1e+03]\n", + " Objective range [3e+02, 7e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+03, 1e+03]\n", + "Found heuristic solution: objective -804.0000000\n", + "Presolve removed 0 rows and 3 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 5 rows, 7 columns, 34 nonzeros\n", + "Variable types: 0 continuous, 7 integer (7 binary)\n", + "\n", + "Root relaxation: objective -1.428726e+03, 4 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 -1428.7265 0 4 -804.00000 -1428.7265 77.7% - 0s\n", + "H 0 0 -1279.000000 -1428.7265 11.7% - 0s\n", + "\n", + "Cutting planes:\n", + " Cover: 1\n", + "\n", + "Explored 1 nodes (4 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 2: -1279 -804 \n", + "No other solutions better than -1279\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective -1.279000000000e+03, best bound -1.279000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 490, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.multiknapsack import (\n", + " MultiKnapsackGenerator,\n", + " build_multiknapsack_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Generate ten similar random instances of the multiknapsack problem with\n", + "# ten items, five resources and weights around [0, 1000].\n", + "data = MultiKnapsackGenerator(\n", + " n=randint(low=10, high=11),\n", + " m=randint(low=5, high=6),\n", + " w=uniform(loc=0, scale=1000),\n", + " K=uniform(loc=100, scale=0),\n", + " u=uniform(loc=1, scale=0),\n", + " alpha=uniform(loc=0.25, scale=0),\n", + " w_jitter=uniform(loc=0.95, scale=0.1),\n", + " p_jitter=uniform(loc=0.75, scale=0.5),\n", + " fix_w=True,\n", + ").generate(10)\n", + "\n", + "# Print data for one of the instances\n", + "print(\"prices\\n\", data[0].prices)\n", + 
"print(\"weights\\n\", data[0].weights)\n", + "print(\"capacities\\n\", data[0].capacities)\n", + "print()\n", + "\n", + "# Build model and optimize\n", + "model = build_multiknapsack_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "e20376b0-0781-4bfa-968f-ded5fa47e176", + "metadata": { + "tags": [] + }, + "source": [ + "## Capacitated P-Median\n", + "\n", + "The **capacitated p-median** problem is a variation of the classic $p$-median problem, in which a set of customers must be served by a set of facilities. In the capacitated $p$-Median problem, each facility has a fixed capacity, and the goal is to minimize the total cost of serving the customers while ensuring that the capacity of each facility is not exceeded. Variations of problem are often used in logistics and supply chain management to determine the most efficient locations for warehouses or distribution centers." + ] + }, + { + "cell_type": "markdown", + "id": "2af65137-109e-4ca0-8753-bd999825204f", + "metadata": { + "tags": [] + }, + "source": [ + "### Formulation\n", + "\n", + "Let $I=\\{1,\\ldots,n\\}$ be the set of customers. For each customer $i \\in I$, let $d_i$ be its demand and let $y_i$ be a binary decision variable that equals one if we decide to open a facility at that customer's location. For each pair $(i,j) \\in I \\times I$, let $x_{ij}$ be a binary decision variable that equals one if customer $i$ is assigned to facility $j$. Furthermore, let $w_{ij}$ be the cost of serving customer $i$ from facility $j$, let $p$ be the number of facilities we must open, and let $c_j$ be the capacity of facility $j$. The problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "a2494ab1-d306-4db7-a100-8f1dfd4a55d7", + "metadata": { + "tags": [] + }, + "source": [ + "$$\n", + "\\begin{align*}\n", + " \\text{minimize}\\;\\;\\;\n", + " & \\sum_{i \\in I} \\sum_{j \\in I} w_{ij} x_{ij}\n", + " \\\\\n", + " \\text{subject to}\\;\\;\\;\n", + " & \\sum_{j \\in I} x_{ij} = 1 & \\forall i \\in I \\\\\n", + " & \\sum_{j \\in I} y_j = p \\\\\n", + " & \\sum_{i \\in I} d_i x_{ij} \\leq c_j y_j & \\forall j \\in I \\\\\n", + " & x_{ij} \\in \\{0, 1\\} & \\forall i, j \\in I \\\\\n", + " & y_j \\in \\{0, 1\\} & \\forall j \\in I\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "9dddf0d6-1f86-40d4-93a8-ccfe93d38e0d", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [PMedianGenerator][PMedianGenerator] can be used to generate random instances of this problem. First, it decides the number of customers and the parameter $p$ by sampling the provided `n` and `p` distributions, respectively. Then, for each customer $i$, the class builds its geographical location $(x_i, y_i)$ by sampling the provided `x` and `y` distributions. For each $i$, the demand for customer $i$ and the capacity of facility $i$ are decided by sampling the provided distributions `demands` and `capacities`, respectively. Finally, the costs $w_{ij}$ are set to the Euclidean distance between the locations of customers $i$ and $j$.\n", + "\n", + "If `fixed=True`, then the number of customers, their locations, the parameter $p$, the demands and the capacities are only sampled from their respective distributions exactly once, to build a reference instance which is then randomly perturbed. 
Specifically, in each perturbation, the distances, demands and capacities are multiplied by random scaling factors sampled from the distributions `distances_jitter`, `demands_jitter` and `capacities_jitter`, respectively. The result is a list of instances that have the same set of customers, but slightly different demands, capacities and distances.\n", + "\n", + "[PMedianGenerator]: ../../api/problems/#miplearn.problems.pmedian.PMedianGenerator" + ] + }, + { + "cell_type": "markdown", + "id": "4e701397", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "4e0e4223-b4e0-4962-a157-82a23a86e37d", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.575025403Z", + "start_time": "2023-11-07T16:29:48.453962705Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "p = 5\n", + "distances =\n", + " [[ 0. 50.17 82.42 32.76 33.2 35.45 86.88 79.11 43.17 66.2 ]\n", + " [ 50.17 0. 72.64 72.51 17.06 80.25 39.92 68.93 43.41 42.96]\n", + " [ 82.42 72.64 0. 71.69 70.92 82.51 67.88 3.76 39.74 30.73]\n", + " [ 32.76 72.51 71.69 0. 56.56 11.03 101.35 69.39 42.09 68.58]\n", + " [ 33.2 17.06 70.92 56.56 0. 63.68 54.71 67.16 34.89 44.99]\n", + " [ 35.45 80.25 82.51 11.03 63.68 0. 111.04 80.29 52.78 79.36]\n", + " [ 86.88 39.92 67.88 101.35 54.71 111.04 0. 65.13 61.37 40.82]\n", + " [ 79.11 68.93 3.76 69.39 67.16 80.29 65.13 0. 36.26 27.24]\n", + " [ 43.17 43.41 39.74 42.09 34.89 52.78 61.37 36.26 0. 26.62]\n", + " [ 66.2 42.96 30.73 68.58 44.99 79.36 40.82 27.24 26.62 0. ]]\n", + "demands = [6.12 1.39 2.92 3.66 4.56 7.85 2. 5.14 5.92 0.46]\n", + "capacities = [151.89 42.63 16.26 237.22 241.41 202.1 76.15 24.42 171.06 110.04]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 21 rows, 110 columns and 220 nonzeros\n", + "Model fingerprint: 0x8d8d9346\n", + "Variable types: 0 continuous, 110 integer (110 binary)\n", + "Coefficient statistics:\n", + " Matrix range [5e-01, 2e+02]\n", + " Objective range [4e+00, 1e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 5e+00]\n", + "Found heuristic solution: objective 368.7900000\n", + "Presolve time: 0.00s\n", + "Presolved: 21 rows, 110 columns, 220 nonzeros\n", + "Variable types: 0 continuous, 110 integer (110 binary)\n", + "Found heuristic solution: objective 245.6400000\n", + "\n", + "Root relaxation: objective 0.000000e+00, 18 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 0.00000 0 6 245.64000 0.00000 100% - 0s\n", + "H 0 0 185.1900000 0.00000 100% - 0s\n", + "H 0 0 148.6300000 17.14595 88.5% - 0s\n", + "H 0 0 113.1800000 17.14595 84.9% - 0s\n", + " 0 0 17.14595 0 10 113.18000 17.14595 84.9% - 0s\n", + "H 0 0 99.5000000 17.14595 82.8% - 0s\n", + "H 0 0 98.3900000 17.14595 82.6% - 0s\n", + "H 0 0 93.9800000 64.28872 31.6% - 0s\n", + " 0 0 64.28872 0 15 93.98000 64.28872 31.6% - 0s\n", + "H 0 0 93.9200000 64.28872 31.5% - 0s\n", + " 0 0 86.06884 0 15 93.92000 86.06884 8.36% - 0s\n", + "* 0 0 0 91.2300000 91.23000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (70 simplex iterations) in 0.08 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 
available processors)\n", + "\n", + "Solution count 10: 91.23 93.92 93.98 ... 368.79\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 9.123000000000e+01, best bound 9.123000000000e+01, gap 0.0000%\n", + "\n", + "User-callback calls 190, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.pmedian import PMedianGenerator, build_pmedian_model_gurobipy\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances with ten customers located in a\n", + "# 100x100 square, with demands in [0,10], capacities in [0, 250].\n", + "data = PMedianGenerator(\n", + " x=uniform(loc=0.0, scale=100.0),\n", + " y=uniform(loc=0.0, scale=100.0),\n", + " n=randint(low=10, high=11),\n", + " p=randint(low=5, high=6),\n", + " demands=uniform(loc=0, scale=10),\n", + " capacities=uniform(loc=0, scale=250),\n", + " distances_jitter=uniform(loc=0.9, scale=0.2),\n", + " demands_jitter=uniform(loc=0.9, scale=0.2),\n", + " capacities_jitter=uniform(loc=0.9, scale=0.2),\n", + " fixed=True,\n", + ").generate(10)\n", + "\n", + "# Print data for one of the instances\n", + "print(\"p =\", data[0].p)\n", + "print(\"distances =\\n\", data[0].distances)\n", + "print(\"demands =\", data[0].demands)\n", + "print(\"capacities =\", data[0].capacities)\n", + "print()\n", + "\n", + "# Build and optimize model\n", + "model = build_pmedian_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "36129dbf-ecba-4026-ad4d-f2356bad4a26", + "metadata": {}, + "source": [ + "## Set cover\n", + "\n", + "The **set cover problem** is a classical NP-hard optimization problem which aims to minimize the number of sets needed to cover all elements in a given universe. Each set may contain a different number of elements, and sets may overlap with each other. This problem can be useful in various real-world scenarios such as scheduling, resource allocation, and network design." + ] + }, + { + "cell_type": "markdown", + "id": "d5254e7a", + "metadata": {}, + "source": [ + "### Formulation\n", + "\n", + "Let $U = \\{1,\\ldots,n\\}$ be a given universe set, and let $S=\\{S_1,\\ldots,S_m\\}$ be a collection of sets whose union equal $U$. For each $j \\in \\{1,\\ldots,m\\}$, let $w_j$ be the weight of set $S_j$, and let $x_j$ be a binary decision variable that equals one if set $S_j$ is chosen. The set cover problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "5062d606-678c-45ba-9a45-d3c8b7401ad1", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align*}\n", + " \\text{minimize}\\;\\;\\;\n", + " & \\sum_{j=1}^m w_j x_j\n", + " \\\\\n", + " \\text{subject to}\\;\\;\\;\n", + " & \\sum_{j : i \\in S_j} x_j \\geq 1 & \\forall i \\in \\{1,\\ldots,n\\} \\\\\n", + " & x_j \\in \\{0, 1\\} & \\forall j \\in \\{1,\\ldots,m\\}\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "2732c050-2e11-44fc-bdd1-1b804a60f166", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [SetCoverGenerator] can generate random instances of this problem. The class first decides the number of elements and sets by sampling the provided distributions `n_elements` and `n_sets`, respectively. Then it generates a random incidence matrix $M$, as follows:\n", + "\n", + "1. 
The density $d$ of $M$ is decided by sampling the provided probability distribution `density`.\n", + "2. Each entry of $M$ is then sampled from the Bernoulli distribution, with probability $d$.\n", + "3. To ensure that each element belongs to at least one set, the class identifies elements that are not contained in any set, then assigns them to a random set (chosen uniformly).\n", + "4. Similarly, to ensure that each set contains at least one element, the class identifies empty sets, then modifies them to include one random element (chosen uniformly).\n", + "\n", + "Finally, the weight of set $j$ is set to $w_j + K | S_j |$, where $w_j$ and $k$ are sampled from `costs` and `K`, respectively, and where $|S_j|$ denotes the size of set $S_j$. The parameter $K$ is used to introduce some correlation between the size of the set and its weight, making the instance more challenging. Note that `K` is only sampled once for the entire instance.\n", + "\n", + "If `fix_sets=True`, then all generated instances have exactly the same sets and elements. The costs of the sets, however, are multiplied by random scaling factors sampled from the provided probability distribution `costs_jitter`.\n", + "\n", + "[SetCoverGenerator]: ../../api/problems/#miplearn.problems.setcover.SetCoverGenerator" + ] + }, + { + "cell_type": "markdown", + "id": "569aa5ec-d475-41fa-a5d9-0b1a675fdf95", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "3224845b-9afd-463e-abf4-e0e93d304859", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.804292323Z", + "start_time": "2023-11-07T16:29:48.492933268Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "matrix\n", + " [[1 0 0 0 1 1 1 0 0 0]\n", + " [1 0 0 1 1 1 1 0 1 1]\n", + " [0 1 1 1 1 0 1 0 0 1]\n", + " [0 1 1 0 0 0 1 1 0 1]\n", + " [1 1 1 0 1 0 1 0 0 1]]\n", + "costs [1044.58 850.13 1014.5 944.83 697.9 971.87 213.49 220.98 70.23\n", + " 425.33]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 5 rows, 10 columns and 28 nonzeros\n", + "Model fingerprint: 0xe5c2d4fa\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [7e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective 213.4900000\n", + "Presolve removed 5 rows and 10 columns\n", + "Presolve time: 0.00s\n", + "Presolve: All rows and columns removed\n", + "\n", + "Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)\n", + "Thread count was 1 (of 20 available processors)\n", + "\n", + "Solution count 1: 213.49 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 2.134900000000e+02, best bound 2.134900000000e+02, gap 0.0000%\n", + "\n", + "User-callback calls 178, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.setcover import SetCoverGenerator, build_setcover_model_gurobipy\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Build random instances with five elements, ten sets and costs\n", 
+ "# in the [0, 1000] interval, with a correlation factor of 25 and\n", + "# an incidence matrix with 25% density.\n", + "data = SetCoverGenerator(\n", + " n_elements=randint(low=5, high=6),\n", + " n_sets=randint(low=10, high=11),\n", + " costs=uniform(loc=0.0, scale=1000.0),\n", + " costs_jitter=uniform(loc=0.90, scale=0.20),\n", + " density=uniform(loc=0.5, scale=0.00),\n", + " K=uniform(loc=25.0, scale=0.0),\n", + " fix_sets=True,\n", + ").generate(10)\n", + "\n", + "# Print problem data for one instance\n", + "print(\"matrix\\n\", data[0].incidence_matrix)\n", + "print(\"costs\", data[0].costs)\n", + "print()\n", + "\n", + "# Build and optimize model\n", + "model = build_setcover_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "255a4e88-2e38-4a1b-ba2e-806b6bd4c815", + "metadata": {}, + "source": [ + "## Set Packing\n", + "\n", + "**Set packing** is a classical optimization problem that asks for the maximum number of disjoint sets within a given list. This problem often arises in real-world situations where a finite number of resources need to be allocated to tasks, such as airline flight crew scheduling." + ] + }, + { + "cell_type": "markdown", + "id": "19342eb1", + "metadata": {}, + "source": [ + "### Formulation\n", + "\n", + "Let $U=\\{1,\\ldots,n\\}$ be a given universe set, and let $S = \\{S_1, \\ldots, S_m\\}$ be a collection of subsets of $U$. For each subset $j \\in \\{1, \\ldots, m\\}$, let $w_j$ be the weight of $S_j$ and let $x_j$ be a binary decision variable which equals one if set $S_j$ is chosen. The problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "0391b35b", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align*}\n", + " \\text{minimize}\\;\\;\\;\n", + " & -\\sum_{j=1}^m w_j x_j\n", + " \\\\\n", + " \\text{subject to}\\;\\;\\;\n", + " & \\sum_{j : i \\in S_j} x_j \\leq 1 & \\forall i \\in \\{1,\\ldots,n\\} \\\\\n", + " & x_j \\in \\{0, 1\\} & \\forall j \\in \\{1,\\ldots,m\\}\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "c2d7df7b", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [SetPackGenerator][SetPackGenerator] can generate random instances of this problem. It accepts exactly the same arguments, and generates instance data in exactly the same way as [SetCoverGenerator][SetCoverGenerator]. 
For more details, please see the documentation for that class.\n", + "\n", + "[SetPackGenerator]: ../../api/problems/#miplearn.problems.setpack.SetPackGenerator\n", + "[SetCoverGenerator]: ../../api/problems/#miplearn.problems.setcover.SetCoverGenerator\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "cc797da7", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.806917868Z", + "start_time": "2023-11-07T16:29:48.781619530Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "matrix\n", + " [[1 0 0 0 1 1 1 0 0 0]\n", + " [1 0 0 1 1 1 1 0 1 1]\n", + " [0 1 1 1 1 0 1 0 0 1]\n", + " [0 1 1 0 0 0 1 1 0 1]\n", + " [1 1 1 0 1 0 1 0 0 1]]\n", + "costs [1044.58 850.13 1014.5 944.83 697.9 971.87 213.49 220.98 70.23\n", + " 425.33]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 5 rows, 10 columns and 28 nonzeros\n", + "Model fingerprint: 0x4ee91388\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [7e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective -1265.560000\n", + "Presolve removed 5 rows and 10 columns\n", + "Presolve time: 0.00s\n", + "Presolve: All rows and columns removed\n", + "\n", + "Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)\n", + "Thread count was 1 (of 20 available processors)\n", + "\n", + "Solution count 2: -1986.37 -1265.56 \n", + "No other solutions better than -1986.37\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective -1.986370000000e+03, best bound -1.986370000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 238, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.setpack import SetPackGenerator, build_setpack_model_gurobipy\n", + "\n", + "# Set random seed, to make example reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Build random instances with five elements, ten sets and costs\n", + "# in the [0, 1000] interval, with a correlation factor of 25 and\n", + "# an incidence matrix with 25% density.\n", + "data = SetPackGenerator(\n", + " n_elements=randint(low=5, high=6),\n", + " n_sets=randint(low=10, high=11),\n", + " costs=uniform(loc=0.0, scale=1000.0),\n", + " costs_jitter=uniform(loc=0.90, scale=0.20),\n", + " density=uniform(loc=0.5, scale=0.00),\n", + " K=uniform(loc=25.0, scale=0.0),\n", + " fix_sets=True,\n", + ").generate(10)\n", + "\n", + "# Print problem data for one instance\n", + "print(\"matrix\\n\", data[0].incidence_matrix)\n", + "print(\"costs\", data[0].costs)\n", + "print()\n", + "\n", + "# Build and optimize model\n", + "model = build_setpack_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "373e450c-8f8b-4b59-bf73-251bdd6ff67e", + "metadata": {}, + "source": [ + "## Stable Set\n", + "\n", + "The **maximum-weight stable set problem** is a classical optimization problem in graph theory which asks for the maximum-weight subset of vertices in a graph 
such that no two vertices in the subset are adjacent. The problem often arises in real-world scheduling or resource allocation situations, where stable sets represent tasks or resources that can be chosen simultaneously without conflicts.\n", + "\n", + "### Formulation\n", + "\n", + "Let $G=(V,E)$ be a simple undirected graph, and for each vertex $v \\in V$, let $w_v$ be its weight. The problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "2f74dd10", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\; & -\\sum_{v \\in V} w_v x_v \\\\\n", + "\\text{such that} \\;\\;\\; & x_v + x_u \\leq 1 & \\forall (v,u) \\in E \\\\\n", + "& x_v \\in \\{0, 1\\} & \\forall v \\in V\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "ef030168", + "metadata": {}, + "source": [ + "\n", + "### Random instance generator\n", + "\n", + "The class [MaxWeightStableSetGenerator][MaxWeightStableSetGenerator] can be used to generate random instances of this problem. The class first samples the user-provided probability distributions `n` and `p` to decide the number of vertices and the density of the graph. Then, it generates a random Erdős-Rényi graph $G_{n,p}$. We recall that, in such a graph, each potential edge is included with probabilty $p$, independently for each other. The class then samples the provided probability distribution `w` to decide the vertex weights.\n", + "\n", + "[MaxWeightStableSetGenerator]: ../../api/problems/#miplearn.problems.stab.MaxWeightStableSetGenerator\n", + "\n", + "If `fix_graph=True`, then all generated instances have the same random graph. For each instance, the weights are decided by sampling `w`, as described above.\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "0f996e99-0ec9-472b-be8a-30c9b8556931", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.954896857Z", + "start_time": "2023-11-07T16:29:48.825579097Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "graph [(0, 2), (0, 4), (0, 8), (1, 2), (1, 3), (1, 5), (1, 6), (1, 9), (2, 5), (2, 9), (3, 6), (3, 7), (6, 9), (7, 8), (8, 9)]\n", + "weights[0] [37.45 95.07 73.2 59.87 15.6 15.6 5.81 86.62 60.11 70.81]\n", + "weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]\n", + "\n", + "Set parameter PreCrush to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 15 rows, 10 columns and 30 nonzeros\n", + "Model fingerprint: 0x3240ea4a\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [6e+00, 1e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective -219.1400000\n", + "Presolve removed 7 rows and 2 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 8 rows, 8 columns, 19 nonzeros\n", + "Variable types: 0 continuous, 8 integer (8 binary)\n", + "\n", + "Root relaxation: objective -2.205650e+02, 5 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 infeasible 0 -219.14000 
-219.14000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: -219.14 \n", + "No other solutions better than -219.14\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective -2.191400000000e+02, best bound -2.191400000000e+02, gap 0.0000%\n", + "\n", + "User-callback calls 299, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.stab import (\n", + " MaxWeightStableSetGenerator,\n", + " build_stab_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed to make example reproducible\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances with a fixed 10-node graph,\n", + "# 25% density and random weights in the [0, 100] interval.\n", + "data = MaxWeightStableSetGenerator(\n", + " w=uniform(loc=0.0, scale=100.0),\n", + " n=randint(low=10, high=11),\n", + " p=uniform(loc=0.25, scale=0.0),\n", + " fix_graph=True,\n", + ").generate(10)\n", + "\n", + "# Print the graph and weights for two instances\n", + "print(\"graph\", data[0].graph.edges)\n", + "print(\"weights[0]\", data[0].weights)\n", + "print(\"weights[1]\", data[1].weights)\n", + "print()\n", + "\n", + "# Load and optimize the first instance\n", + "model = build_stab_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "444d1092-fd83-4957-b691-a198d56ba066", + "metadata": {}, + "source": [ + "## Traveling Salesman\n", + "\n", + "Given a list of cities and the distances between them, the **traveling salesman problem** asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes." + ] + }, + { + "cell_type": "markdown", + "id": "da3ca69c", + "metadata": {}, + "source": [ + "### Formulation\n", + "\n", + "Let $G=(V,E)$ be a simple undirected graph. For each edge $e \\in E$, let $d_e$ be its weight (or distance) and let $x_e$ be a binary decision variable which equals one if $e$ is included in the route. The problem is formulated as:" + ] + }, + { + "cell_type": "markdown", + "id": "9cf296e9", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\;\n", + " & \\sum_{e \\in E} d_e x_e \\\\\n", + "\\text{such that} \\;\\;\\;\n", + " & \\sum_{e : \\delta(v)} x_e = 2 & \\forall v \\in V, \\\\\n", + " & \\sum_{e \\in \\delta(S)} x_e \\geq 2 & \\forall S \\subsetneq V, |S| \\neq \\emptyset, \\\\\n", + " & x_e \\in \\{0, 1\\} & \\forall e \\in E,\n", + "\\end{align*}\n", + "$$\n", + "where $\\delta(v)$ denotes the set of edges adjacent to vertex $v$, and $\\delta(S)$ denotes the set of edges that have one extremity in $S$ and one in $V \\setminus S$. Because of its exponential size, we enforce the second set of inequalities as lazy constraints." + ] + }, + { + "cell_type": "markdown", + "id": "eba3dbe5", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [TravelingSalesmanGenerator][TravelingSalesmanGenerator] can be used to generate random instances of this problem. 
Initially, the class samples the user-provided probability distribution `n` to decide how many cities to generate. Then, for each city $i$, the class generates its geographical location $(x_i, y_i)$ by sampling the provided distributions `x` and `y`. The distance $d_{ij}$ between cities $i$ and $j$ is then set to\n", + "$$\n", + "\\gamma_{ij} \\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2},\n", + "$$\n", + "where $\\gamma$ is a random scaling factor sampled from the provided probability distribution `gamma`.\n", + "\n", + "If `fix_cities=True`, then the list of cities is kept the same for all generated instances. The $\\gamma$ values, however, and therefore also the distances, are still different. By default, all distances $d_{ij}$ are rounded to the nearest integer. If `round=False` is provided, this rounding will be disabled.\n", + "\n", + "[TravelingSalesmanGenerator]: ../../api/problems/#miplearn.problems.tsp.TravelingSalesmanGenerator" + ] + }, + { + "cell_type": "markdown", + "id": "61f16c56", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "9d0c56c6", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:48.958833448Z", + "start_time": "2023-11-07T16:29:48.898121017Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "distances[0]\n", + " [[ 0. 513. 762. 358. 325. 374. 932. 731. 391. 634.]\n", + " [ 513. 0. 726. 765. 163. 754. 409. 719. 446. 400.]\n", + " [ 762. 726. 0. 780. 756. 744. 656. 40. 383. 334.]\n", + " [ 358. 765. 780. 0. 549. 117. 925. 702. 422. 728.]\n", + " [ 325. 163. 756. 549. 0. 663. 526. 708. 377. 462.]\n", + " [ 374. 754. 744. 117. 663. 0. 1072. 802. 501. 853.]\n", + " [ 932. 409. 656. 925. 526. 1072. 0. 654. 603. 433.]\n", + " [ 731. 719. 40. 702. 708. 802. 654. 0. 381. 255.]\n", + " [ 391. 446. 383. 422. 377. 501. 603. 381. 0. 287.]\n", + " [ 634. 400. 334. 728. 462. 853. 433. 255. 287. 0.]]\n", + "distances[1]\n", + " [[ 0. 493. 900. 354. 323. 367. 841. 727. 444. 668.]\n", + " [ 493. 0. 690. 687. 175. 725. 368. 744. 398. 446.]\n", + " [ 900. 690. 0. 666. 728. 827. 736. 41. 371. 317.]\n", + " [ 354. 687. 666. 0. 570. 104. 1090. 712. 454. 648.]\n", + " [ 323. 175. 728. 570. 0. 655. 521. 650. 356. 469.]\n", + " [ 367. 725. 827. 104. 655. 0. 1146. 779. 476. 752.]\n", + " [ 841. 368. 736. 1090. 521. 1146. 0. 681. 565. 394.]\n", + " [ 727. 744. 41. 712. 650. 779. 681. 0. 374. 286.]\n", + " [ 444. 398. 371. 454. 356. 476. 565. 374. 0. 274.]\n", + " [ 668. 446. 317. 648. 469. 752. 394. 286. 274. 
0.]]\n", + "\n", + "Set parameter PreCrush to value 1\n", + "Set parameter LazyConstraints to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n", + "Model fingerprint: 0x719675e5\n", + "Variable types: 0 continuous, 45 integer (45 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [4e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Presolve time: 0.00s\n", + "Presolved: 10 rows, 45 columns, 90 nonzeros\n", + "Variable types: 0 continuous, 45 integer (45 binary)\n", + "\n", + "Root relaxation: objective 2.921000e+03, 17 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + "* 0 0 0 2921.0000000 2921.00000 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Lazy constraints: 3\n", + "\n", + "Explored 1 nodes (17 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 2921 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 2.921000000000e+03, best bound 2.921000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 106, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.tsp import (\n", + " TravelingSalesmanGenerator,\n", + " build_tsp_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed to make example reproducible\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances with a fixed ten cities in the 1000x1000 box\n", + "# and random distance scaling factors in the [0.90, 1.10] interval.\n", + "data = TravelingSalesmanGenerator(\n", + " n=randint(low=10, high=11),\n", + " x=uniform(loc=0.0, scale=1000.0),\n", + " y=uniform(loc=0.0, scale=1000.0),\n", + " gamma=uniform(loc=0.90, scale=0.20),\n", + " fix_cities=True,\n", + " round=True,\n", + ").generate(10)\n", + "\n", + "# Print distance matrices for the first two instances\n", + "print(\"distances[0]\\n\", data[0].distances)\n", + "print(\"distances[1]\\n\", data[1].distances)\n", + "print()\n", + "\n", + "# Load and optimize the first instance\n", + "model = build_tsp_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "26dfc157-11f4-4564-b368-95ee8200875e", + "metadata": {}, + "source": [ + "## Unit Commitment\n", + "\n", + "The **unit commitment problem** is a mixed-integer optimization problem which asks which power generation units should be turned on and off, at what time, and at what capacity, in order to meet the demand for electricity generation at the lowest cost. Numerous operational constraints are typically enforced, such as *ramping constraints*, which prevent generation units from changing power output levels too quickly from one time step to the next, and *minimum-up* and *minimum-down* constraints, which prevent units from switching on and off too frequently. The unit commitment problem is widely used in power systems planning and operations." 
+ ] + }, + { + "cell_type": "markdown", + "id": "7048d771", + "metadata": {}, + "source": [ + "\n", + "
\n", + "Note\n", + "\n", + "MIPLearn includes a simple formulation for the unit commitment problem, which enforces only minimum and maximum power production, as well as minimum-up and minimum-down constraints. The formulation does not enforce, for example, ramping trajectories, piecewise-linear cost curves, start-up costs or transmission and n-1 security constraints. For a more complete set of formulations, solution methods and realistic benchmark instances for the problem, see [UnitCommitment.jl](https://github.com/ANL-CEEESA/UnitCommitment.jl).\n", + "
\n", + "\n", + "### Formulation\n", + "\n", + "Let $T$ be the number of time steps, $G$ be the number of generation units, and let $D_t$ be the power demand (in MW) at time $t$. For each generating unit $g$, let $P^\\max_g$ and $P^\\min_g$ be the maximum and minimum amount of power the unit is able to produce when switched on; let $L_g$ and $l_g$ be the minimum up- and down-time for unit $g$; let $C^\\text{fixed}$ be the cost to keep unit $g$ on for one time step, regardless of its power output level; let $C^\\text{start}$ be the cost to switch unit $g$ on; and let $C^\\text{var}$ be the cost for generator $g$ to produce 1 MW of power. In this formulation, we assume linear production costs. For each generator $g$ and time $t$, let $x_{gt}$ be a binary variable which equals one if unit $g$ is on at time $t$, let $w_{gt}$ be a binary variable which equals one if unit $g$ switches from being off at time $t-1$ to being on at time $t$, and let $p_{gt}$ be a continuous variable which indicates the amount of power generated. The formulation is given by:" + ] + }, + { + "cell_type": "markdown", + "id": "bec5ee1c", + "metadata": {}, + "source": [ + "\n", + "$$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\;\n", + " & \\sum_{t=1}^T \\sum_{g=1}^G \\left(\n", + " x_{gt} C^\\text{fixed}_g\n", + " + w_{gt} C^\\text{start}_g\n", + " + p_{gt} C^\\text{var}_g\n", + " \\right)\n", + " \\\\\n", + "\\text{such that} \\;\\;\\;\n", + " & \\sum_{k=t-L_g+1}^t w_{gk} \\leq x_{gt}\n", + " & \\forall g\\; \\forall t=L_g-1,\\ldots,T-1 \\\\\n", + " & \\sum_{k=g-l_g+1}^T w_{gt} \\leq 1 - x_{g,t-l_g+1}\n", + " & \\forall g \\forall t=l_g-1,\\ldots,T-1 \\\\\n", + " & w_{gt} \\geq x_{gt} - x_{g,t-1}\n", + " & \\forall g \\forall t=1,\\ldots,T-1 \\\\\n", + " & \\sum_{g=1}^G p_{gt} \\geq D_t\n", + " & \\forall t \\\\\n", + " & P^\\text{min}_g x_{gt} \\leq p_{gt}\n", + " & \\forall g, t \\\\\n", + " & p_{gt} \\leq P^\\text{max}_g x_{gt}\n", + " & \\forall g, t \\\\\n", + " & x_{gt} \\in \\{0, 1\\}\n", + " & \\forall g, t.\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "4a1ffb4c", + "metadata": {}, + "source": [ + "\n", + "The first set of inequalities enforces minimum up-time constraints: if unit $g$ is down at time $t$, then it cannot start up during the previous $L_g$ time steps. The second set of inequalities enforces minimum down-time constraints, and is symmetrical to the previous one. The third set ensures that if unit $g$ starts up at time $t$, then the start up variable must be one. The fourth set ensures that demand is satisfied at each time period. The fifth and sixth sets enforce bounds to the quantity of power generated by each unit.\n", + "\n", + "
\n", + "References\n", + "\n", + "- *Bendotti, P., Fouilhoux, P. & Rottner, C.* **The min-up/min-down unit commitment polytope.** J Comb Optim 36, 1024-1058 (2018). https://doi.org/10.1007/s10878-018-0273-y\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "01bed9fc", + "metadata": {}, + "source": [ + "\n", + "### Random instance generator\n", + "\n", + "The class `UnitCommitmentGenerator` can be used to generate random instances of this problem.\n", + "\n", + "First, the user-provided probability distributions `n_units` and `n_periods` are sampled to determine the number of generating units and the number of time steps, respectively. Then, for each unit, the probabilities `max_power` and `min_power` are sampled to determine the unit's maximum and minimum power output. To make it easier to generate valid ranges, `min_power` is not specified as the absolute power level in MW, but rather as a multiplier of `max_power`; for example, if `max_power` samples to 100 and `min_power` samples to 0.5, then the unit's power range is set to `[50,100]`. Then, the distributions `cost_startup`, `cost_prod` and `cost_fixed` are sampled to determine the unit's startup, variable and fixed costs, while the distributions `min_uptime` and `min_downtime` are sampled to determine its minimum up/down-time.\n", + "\n", + "After parameters for the units have been generated, the class then generates a periodic demand curve, with a peak every 12 time steps, in the range $(0.4C, 0.8C)$, where $C$ is the sum of all units' maximum power output. Finally, all costs and demand values are perturbed by random scaling factors independently sampled from the distributions `cost_jitter` and `demand_jitter`, respectively.\n", + "\n", + "If `fix_units=True`, then the list of generators (with their respective parameters) is kept the same for all generated instances. If `cost_jitter` and `demand_jitter` are provided, the instances will still have slightly different costs and demands." + ] + }, + { + "cell_type": "markdown", + "id": "855b87b4", + "metadata": {}, + "source": [ + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "6217da7c", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:49.061613905Z", + "start_time": "2023-11-07T16:29:48.941857719Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "min_power[0] [117.79 245.85 271.85 207.7 81.38]\n", + "max_power[0] [218.54 477.82 379.4 319.4 120.21]\n", + "min_uptime[0] [7 6 3 5 7]\n", + "min_downtime[0] [7 3 5 6 2]\n", + "min_power[0] [117.79 245.85 271.85 207.7 81.38]\n", + "cost_startup[0] [3042.42 5247.56 4319.45 2912.29 6118.53]\n", + "cost_prod[0] [ 6.97 14.61 18.32 22.8 39.26]\n", + "cost_fixed[0] [199.67 514.23 592.41 46.45 607.54]\n", + "demand[0]\n", + " [ 905.06 915.41 1166.52 1212.29 1127.81 953.52 905.06 796.21 783.78\n", + " 866.23 768.62 899.59 905.06 946.23 1087.61 1004.24 1048.36 992.03\n", + " 905.06 750.82 691.48 606.15 658.5 809.95]\n", + "\n", + "min_power[1] [117.79 245.85 271.85 207.7 81.38]\n", + "max_power[1] [218.54 477.82 379.4 319.4 120.21]\n", + "min_uptime[1] [7 6 3 5 7]\n", + "min_downtime[1] [7 3 5 6 2]\n", + "min_power[1] [117.79 245.85 271.85 207.7 81.38]\n", + "cost_startup[1] [2458.08 6200.26 4585.74 2666.05 4783.34]\n", + "cost_prod[1] [ 6.31 13.33 20.42 24.37 46.86]\n", + "cost_fixed[1] [196.9 416.42 655.57 52.51 626.15]\n", + "demand[1]\n", + " [ 981.42 840.07 1095.59 1102.03 1088.41 932.29 863.67 848.56 761.33\n", + " 828.28 775.18 834.99 959.76 865.72 1193.52 1058.92 985.19 893.92\n", + " 962.16 781.88 723.15 639.04 602.4 787.02]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build 
v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 578 rows, 360 columns and 2128 nonzeros\n", + "Model fingerprint: 0x4dc1c661\n", + "Variable types: 120 continuous, 240 integer (240 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 5e+02]\n", + " Objective range [7e+00, 6e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+03]\n", + "Presolve removed 244 rows and 131 columns\n", + "Presolve time: 0.01s\n", + "Presolved: 334 rows, 229 columns, 842 nonzeros\n", + "Variable types: 116 continuous, 113 integer (113 binary)\n", + "Found heuristic solution: objective 440662.46430\n", + "Found heuristic solution: objective 429461.97680\n", + "Found heuristic solution: objective 374043.64040\n", + "\n", + "Root relaxation: objective 3.361348e+05, 142 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 336134.820 0 18 374043.640 336134.820 10.1% - 0s\n", + "H 0 0 368600.14450 336134.820 8.81% - 0s\n", + "H 0 0 364721.76610 336134.820 7.84% - 0s\n", + " 0 0 cutoff 0 364721.766 364721.766 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 3\n", + " Cover: 8\n", + " Implied bound: 29\n", + " Clique: 222\n", + " MIR: 7\n", + " Flow cover: 7\n", + " RLT: 1\n", + " Relax-and-lift: 7\n", + "\n", + "Explored 1 nodes (234 simplex iterations) in 0.02 seconds (0.02 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 5: 364722 368600 374044 ... 440662\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 3.647217661000e+05, best bound 3.647217661000e+05, gap 0.0000%\n", + "\n", + "User-callback calls 677, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.uc import UnitCommitmentGenerator, build_uc_model_gurobipy\n", + "\n", + "# Set random seed to make example reproducible\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate a random instance with 5 generators and 24 time steps\n", + "data = UnitCommitmentGenerator(\n", + " n_units=randint(low=5, high=6),\n", + " n_periods=randint(low=24, high=25),\n", + " max_power=uniform(loc=50, scale=450),\n", + " min_power=uniform(loc=0.5, scale=0.25),\n", + " cost_startup=uniform(loc=0, scale=10_000),\n", + " cost_prod=uniform(loc=0, scale=50),\n", + " cost_fixed=uniform(loc=0, scale=1_000),\n", + " min_uptime=randint(low=2, high=8),\n", + " min_downtime=randint(low=2, high=8),\n", + " cost_jitter=uniform(loc=0.75, scale=0.5),\n", + " demand_jitter=uniform(loc=0.9, scale=0.2),\n", + " fix_units=True,\n", + ").generate(10)\n", + "\n", + "# Print problem data for the two first instances\n", + "for i in range(2):\n", + " print(f\"min_power[{i}]\", data[i].min_power)\n", + " print(f\"max_power[{i}]\", data[i].max_power)\n", + " print(f\"min_uptime[{i}]\", data[i].min_uptime)\n", + " print(f\"min_downtime[{i}]\", data[i].min_downtime)\n", + " print(f\"min_power[{i}]\", data[i].min_power)\n", + " print(f\"cost_startup[{i}]\", data[i].cost_startup)\n", + " print(f\"cost_prod[{i}]\", data[i].cost_prod)\n", + " print(f\"cost_fixed[{i}]\", data[i].cost_fixed)\n", + " 
print(f\"demand[{i}]\\n\", data[i].demand)\n", + " print()\n", + "\n", + "# Load and optimize the first instance\n", + "model = build_uc_model_gurobipy(data[0])\n", + "model.optimize()" + ] + }, + { + "cell_type": "markdown", + "id": "169293c7-33e1-4d28-8d39-9982776251d7", + "metadata": {}, + "source": [ + "## Vertex Cover\n", + "\n", + "**Minimum weight vertex cover** is a classical optimization problem in graph theory where the goal is to find the minimum-weight set of vertices that are connected to all of the edges in the graph. The problem generalizes one of Karp's 21 NP-complete problems and has applications in various fields, including bioinformatics and machine learning." + ] + }, + { + "cell_type": "markdown", + "id": "91f5781a", + "metadata": {}, + "source": [ + "\n", + "### Formulation\n", + "\n", + "Let $G=(V,E)$ be a simple graph. For each vertex $v \\in V$, let $w_g$ be its weight, and let $x_v$ be a binary decision variable which equals one if $v$ is included in the cover. The mixed-integer linear formulation for the problem is given by:" + ] + }, + { + "cell_type": "markdown", + "id": "544754cb", + "metadata": {}, + "source": [ + " $$\n", + "\\begin{align*}\n", + "\\text{minimize} \\;\\;\\;\n", + " & \\sum_{v \\in V} w_v \\\\\n", + "\\text{such that} \\;\\;\\;\n", + " & x_i + x_j \\ge 1 & \\forall \\{i, j\\} \\in E, \\\\\n", + " & x_{i,j} \\in \\{0, 1\\}\n", + " & \\forall \\{i,j\\} \\in E.\n", + "\\end{align*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "35c99166", + "metadata": {}, + "source": [ + "### Random instance generator\n", + "\n", + "The class [MinWeightVertexCoverGenerator][MinWeightVertexCoverGenerator] can be used to generate random instances of this problem. The class accepts exactly the same parameters and behaves exactly in the same way as [MaxWeightStableSetGenerator][MaxWeightStableSetGenerator]. 
See the [stable set section](#Stable-Set) for more details.\n", + "\n", + "[MinWeightVertexCoverGenerator]: ../../api/problems/#module-miplearn.problems.vertexcover\n", + "[MaxWeightStableSetGenerator]: ../../api/problems/#miplearn.problems.stab.MaxWeightStableSetGenerator\n", + "\n", + "### Example" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "5fff7afe-5b7a-4889-a502-66751ec979bf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-11-07T16:29:49.075657363Z", + "start_time": "2023-11-07T16:29:49.049561363Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "graph [(0, 2), (0, 4), (0, 8), (1, 2), (1, 3), (1, 5), (1, 6), (1, 9), (2, 5), (2, 9), (3, 6), (3, 7), (6, 9), (7, 8), (8, 9)]\n", + "weights[0] [37.45 95.07 73.2 59.87 15.6 15.6 5.81 86.62 60.11 70.81]\n", + "weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]\n", + "\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 15 rows, 10 columns and 30 nonzeros\n", + "Model fingerprint: 0x2d2d1390\n", + "Variable types: 0 continuous, 10 integer (10 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [6e+00, 1e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+00, 1e+00]\n", + "Found heuristic solution: objective 301.0000000\n", + "Presolve removed 7 rows and 2 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 8 rows, 8 columns, 19 nonzeros\n", + "Variable types: 0 continuous, 8 integer (8 binary)\n", + "\n", + "Root relaxation: objective 2.995750e+02, 8 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 infeasible 0 301.00000 301.00000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (8 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 301 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 3.010000000000e+02, best bound 3.010000000000e+02, gap 0.0000%\n", + "\n", + "User-callback calls 326, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "import random\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.problems.vertexcover import (\n", + " MinWeightVertexCoverGenerator,\n", + " build_vertexcover_model_gurobipy,\n", + ")\n", + "\n", + "# Set random seed to make example reproducible\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate random instances with a fixed 10-node graph,\n", + "# 25% density and random weights in the [0, 100] interval.\n", + "data = MinWeightVertexCoverGenerator(\n", + " w=uniform(loc=0.0, scale=100.0),\n", + " n=randint(low=10, high=11),\n", + " p=uniform(loc=0.25, scale=0.0),\n", + " fix_graph=True,\n", + ").generate(10)\n", + "\n", + "# Print the graph and weights for two instances\n", + "print(\"graph\", data[0].graph.edges)\n", + "print(\"weights[0]\", data[0].weights)\n", + "print(\"weights[1]\", data[1].weights)\n", + "print()\n", + "\n", + "# Load and optimize the first instance\n", + "model = build_vertexcover_model_gurobipy(data[0])\n", + "model.optimize()" + ] + } + ], + "metadata": { + 
"kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/guide/problems/index.html b/0.4/guide/problems/index.html new file mode 100644 index 00000000..fc9a6d5e --- /dev/null +++ b/0.4/guide/problems/index.html @@ -0,0 +1,1647 @@ + + + + + + + + 5. Benchmark Problems — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ + +
+
+ +
+ +
+

5. Benchmark Problems

+
+

5.1. Overview

+

Benchmark sets such as MIPLIB or TSPLIB are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of +magnitude more instances available for training; (ii) current machine learning methods typically provide the best performance on sets of homogeneous instances, but general-purpose benchmark sets contain relatively few examples of each problem type.

+

To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. As of MIPLearn 0.3, nine problem generators are available, each customizable with user-provided probability distributions and flexible parameters. The generators can be configured, for example, to produce large sets of very +similar instances of the same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.

+

In the following, we describe the problems included in the library, their MIP formulation and the generation algorithm.

+
+

Warning

+

The random instance generators and formulations shown below are subject to change. If you use them in your research, for reproducibility, you should specify the MIPLearn version and all parameters.

+
+
+

Note

+
    +
  • To make the instances easier to process, all formulations are written as a minimization problem.

  • +
  • Some problem formulations, such as the one for the traveling salesman problem, contain an exponential number of constraints, which are enforced through constraint generation. The MPS files for these problems contain only the constraints that were generated during a trial run, not the entire set of constraints. Re-solving the MPS file, therefore, may produce a solution that is not feasible for the original problem.

  • +
+
+
+
+

5.2. Bin Packing

+

Bin packing is a combinatorial optimization problem that asks for the optimal way to pack a given set of items into a finite number of containers (or bins) of fixed capacity. More specifically, the problem is to assign indivisible items of different sizes to identical bins, while minimizing the number of bins used. The problem is NP-hard and has many practical applications, including logistics and warehouse management, where it is used to determine how to best store and transport goods using +a limited amount of space.

+
+

Formulation

+

Let \(n\) be the number of items, and \(s_i\) the size of the \(i\)-th item. Also let \(B\) be the size of the bins. For each bin \(j\), let \(y_j\) be a binary decision variable which equals one if the bin is used. For every item-bin pair \((i,j)\), let \(x_{ij}\) be a binary decision variable which equals one if item \(i\) is assigned to bin \(j\). The bin packing problem is formulated as:

+
+\[\begin{split}\begin{align*} +\text{minimize} \;\;\; + & \sum_{j=1}^n y_j \\ +\text{subject to} \;\;\; + & \sum_{i=1}^n s_i x_{ij} \leq B y_j & \forall j=1,\ldots,n \\ + & \sum_{j=1}^n x_{ij} = 1 & \forall i=1,\ldots,n \\ + & y_j \in \{0,1\} & \forall j=1,\ldots,n \\ + & x_{ij} \in \{0,1\} & \forall i,j=1,\ldots,n \\ +\end{align*}\end{split}\]
+
+
+

Random instance generator

+

Random instances of the bin packing problem can be generated using the class BinPackGenerator.

+

If fix_items=False, the class samples the user-provided probability distributions n, sizes and capacity to decide, respectively, the number of items, the sizes of the items and capacity of the bin. All values are sampled independently.

+

If fix_items=True, the class creates a reference instance, using the method previously described, then generates additional instances by perturbing its item sizes and bin capacity. More specifically, the sizes of the items are set to \(s_i \gamma_i\), where \(s_i\) is the size of the \(i\)-th item in the reference instance and \(\gamma_i\) is sampled from sizes_jitter. Similarly, the bin size is set to \(B \beta\), where \(B\) is the reference bin size and +\(\beta\) is sampled from capacity_jitter. The number of items remains the same across all generated instances.

+
+
+
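As a minimal illustration of the perturbation scheme described above (plain NumPy, with made-up reference values; not the actual BinPackGenerator code):
+import numpy as np
+
+rng = np.random.default_rng(42)
+
+# Made-up reference instance (not produced by the library)
+ref_sizes = np.array([8.0, 26.0, 19.0, 14.0, 4.0])
+ref_capacity = 100.0
+
+# Factors playing the role of sizes_jitter and capacity_jitter, here uniform on [0.9, 1.1]
+gamma = rng.uniform(0.9, 1.1, size=ref_sizes.shape)   # one factor per item
+beta = rng.uniform(0.9, 1.1)                          # one factor for the bin
+
+perturbed_sizes = np.round(ref_sizes * gamma, 2)      # s_i * gamma_i
+perturbed_capacity = round(ref_capacity * beta, 2)    # B * beta
+print(perturbed_sizes, perturbed_capacity)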

Example

+
+
[1]:
+
+
+
import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.binpack import BinPackGenerator, build_binpack_model_gurobipy
+
+# Set random seed, to make example reproducible
+np.random.seed(42)
+
+# Generate random instances of the binpack problem with ten items
+data = BinPackGenerator(
+    n=randint(low=10, high=11),
+    sizes=uniform(loc=0, scale=25),
+    capacity=uniform(loc=100, scale=0),
+    sizes_jitter=uniform(loc=0.9, scale=0.2),
+    capacity_jitter=uniform(loc=0.9, scale=0.2),
+    fix_items=True,
+).generate(10)
+
+# Print sizes and capacities
+for i in range(10):
+    print(i, data[i].sizes, data[i].capacity)
+print()
+
+# Optimize first instance
+model = build_binpack_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+0 [ 8.47 26.   19.52 14.11  3.65  3.65  1.4  21.76 14.82 16.96] 102.24
+1 [ 8.69 22.78 17.81 14.83  4.12  3.67  1.46 22.05 13.66 18.08] 93.41
+2 [ 8.55 25.9  20.   15.89  3.75  3.59  1.51 21.4  13.89 17.68] 90.69
+3 [10.13 22.62 18.89 14.4   3.92  3.94  1.36 23.69 15.85 19.26] 107.9
+4 [ 9.55 25.77 16.79 14.06  3.55  3.76  1.42 20.66 16.02 17.19] 95.62
+5 [ 9.44 22.06 19.41 13.69  4.28  4.11  1.36 19.51 15.98 18.43] 104.58
+6 [ 9.87 21.74 17.78 13.82  4.18  4.    1.4  19.76 14.46 17.08] 104.59
+7 [ 9.62 25.61 18.2  13.83  4.07  4.1   1.47 22.83 15.01 17.78] 98.55
+8 [ 8.47 21.9  16.58 15.37  3.76  3.91  1.57 20.57 14.76 18.61] 94.58
+9 [ 8.57 22.77 17.06 16.25  4.14  4.    1.56 22.97 14.09 19.09] 100.79
+
+Restricted license - for non-production use only - expires 2024-10-28
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 20 rows, 110 columns and 210 nonzeros
+Model fingerprint: 0x1ff9913f
+Variable types: 0 continuous, 110 integer (110 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+02]
+  Objective range  [1e+00, 1e+00]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+00, 1e+00]
+Found heuristic solution: objective 5.0000000
+Presolve time: 0.00s
+Presolved: 20 rows, 110 columns, 210 nonzeros
+Variable types: 0 continuous, 110 integer (110 binary)
+
+Root relaxation: objective 1.274844e+00, 38 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0    1.27484    0    4    5.00000    1.27484  74.5%     -    0s
+H    0     0                       4.0000000    1.27484  68.1%     -    0s
+H    0     0                       2.0000000    1.27484  36.3%     -    0s
+     0     0    1.27484    0    4    2.00000    1.27484  36.3%     -    0s
+
+Explored 1 nodes (38 simplex iterations) in 0.03 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 3: 2 4 5
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 2.000000000000e+00, best bound 2.000000000000e+00, gap 0.0000%
+
+User-callback calls 143, time in user-callback 0.00 sec
+
+
+
+
+
+

5.3. Multi-Dimensional Knapsack

+

The multi-dimensional knapsack problem is a generalization of the classic knapsack problem, which involves selecting a subset of items to be placed in a knapsack such that the total value of the items is maximized without exceeding a maximum weight. In this generalization, items have multiple weights (representing multiple resources), and multiple weight constraints must be satisfied.

+
+

Formulation

+

Let \(n\) be the number of items and \(m\) be the number of resources. For each item \(j\) and resource \(i\), let \(p_j\) be the price of the item, let \(w_{ij}\) be the amount of resource \(i\) that item \(j\) consumes (i.e. the \(i\)-th weight of the item), and let \(b_i\) be the total amount of resource \(i\) available (or the size of the \(i\)-th knapsack). The formulation is given by:

+
+\[\begin{split}\begin{align*} + \text{minimize}\;\;\; + & - \sum_{j=1}^n p_j x_j + \\ + \text{subject to}\;\;\; + & \sum_{j=1}^n w_{ij} x_j \leq b_i + & \forall i=1,\ldots,m \\ + & x_j \in \{0,1\} + & \forall j=1,\ldots,n +\end{align*}\end{split}\]
+
+
+

Random instance generator

+

The class MultiKnapsackGenerator can be used to generate random instances of this problem. The number of items \(n\) and knapsacks \(m\) are sampled from the user-provided probability distributions n and m. The weights \(w_{ij}\) are sampled independently from the provided distribution w. The capacity of knapsack \(i\) is set to

+
+\[b_i = \alpha_i \sum_{j=1}^n w_{ij}\]
+

where \(\alpha_i\), the tightness ratio, is sampled from the provided probability distribution alpha. To make the instances more challenging, the costs of the items are linearly correlated to their average weights. More specifically, the price of each item \(j\) is set to:

+
+\[p_j = \sum_{i=1}^m \frac{w_{ij}}{m} + K u_j,\]
+

where \(K\), the correlation coefficient, and \(u_j\), the correlation multiplier, are sampled from the provided probability distributions K and u.

+
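For illustration, the two formulas above can be sketched in a few lines of NumPy (an illustration only, not the MultiKnapsackGenerator implementation):
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, m = 10, 5                            # items, resources
+w = rng.uniform(0, 1000, size=(m, n))   # w[i, j]: weight of item j for resource i
+alpha = np.full(m, 0.25)                # tightness ratio of each knapsack
+K = 100.0                               # correlation coefficient
+u = rng.uniform(0, 1, size=n)           # correlation multiplier of each item
+
+# b_i = alpha_i * sum_j w_ij
+b = alpha * w.sum(axis=1)
+
+# p_j = sum_i w_ij / m + K * u_j
+p = w.sum(axis=0) / m + K * u
+
+print(np.round(b, 2))
+print(np.round(p, 2))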

If fix_w=True is provided, then \(w_{ij}\) are kept the same in all generated instances. This also implies that \(n\) and \(m\) are kept fixed. Although the prices and capacities are derived from \(w_{ij}\), as long as u and K are not constants, the generated instances will still not be completely identical.

+

If a probability distribution w_jitter is provided, then item weights will be set to \(w_{ij} \gamma_{ij}\) where \(\gamma_{ij}\) is sampled from w_jitter. When combined with fix_w=True, this argument may be used to generate instances where the weight of each item is roughly the same, but not exactly identical, across all instances. The prices of the items and the capacities of the knapsacks will be calculated as above, but using these perturbed weights instead.

+

By default, all generated prices, weights and capacities are rounded to the nearest integer number. If round=False is provided, this rounding will be disabled.

+
+

References

+
    +
  • Freville, Arnaud, and Gérard Plateau. An efficient preprocessing procedure for the multidimensional 0–1 knapsack problem. Discrete applied mathematics 49.1-3 (1994): 189-212.

  • +
  • Fréville, Arnaud. The multidimensional 0–1 knapsack problem: An overview. European Journal of Operational Research 155.1 (2004): 1-21.

  • +
+
+
+
+

Example

+
+
[2]:
+
+
+
import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.multiknapsack import (
+    MultiKnapsackGenerator,
+    build_multiknapsack_model_gurobipy,
+)
+
+# Set random seed, to make example reproducible
+np.random.seed(42)
+
+# Generate ten similar random instances of the multiknapsack problem with
+# ten items, five resources and weights around [0, 1000].
+data = MultiKnapsackGenerator(
+    n=randint(low=10, high=11),
+    m=randint(low=5, high=6),
+    w=uniform(loc=0, scale=1000),
+    K=uniform(loc=100, scale=0),
+    u=uniform(loc=1, scale=0),
+    alpha=uniform(loc=0.25, scale=0),
+    w_jitter=uniform(loc=0.95, scale=0.1),
+    p_jitter=uniform(loc=0.75, scale=0.5),
+    fix_w=True,
+).generate(10)
+
+# Print data for one of the instances
+print("prices\n", data[0].prices)
+print("weights\n", data[0].weights)
+print("capacities\n", data[0].capacities)
+print()
+
+# Build model and optimize
+model = build_multiknapsack_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+prices
+ [350. 692. 454. 709. 605. 543. 321. 674. 571. 341.]
+weights
+ [[392. 977. 764. 622. 158. 163.  56. 840. 574. 696.]
+ [ 20. 948. 860. 209. 178. 184. 293. 541. 414. 305.]
+ [629. 135. 278. 378. 466. 803. 205. 492. 584.  45.]
+ [630. 173.  64. 907. 947. 794. 312.  99. 711. 439.]
+ [117. 506.  35. 915. 266. 662. 312. 516. 521. 178.]]
+capacities
+ [1310.  988. 1004. 1269. 1007.]
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 5 rows, 10 columns and 50 nonzeros
+Model fingerprint: 0xaf3ac15e
+Variable types: 0 continuous, 10 integer (10 binary)
+Coefficient statistics:
+  Matrix range     [2e+01, 1e+03]
+  Objective range  [3e+02, 7e+02]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+03, 1e+03]
+Found heuristic solution: objective -804.0000000
+Presolve removed 0 rows and 3 columns
+Presolve time: 0.00s
+Presolved: 5 rows, 7 columns, 34 nonzeros
+Variable types: 0 continuous, 7 integer (7 binary)
+
+Root relaxation: objective -1.428726e+03, 4 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 -1428.7265    0    4 -804.00000 -1428.7265  77.7%     -    0s
+H    0     0                    -1279.000000 -1428.7265  11.7%     -    0s
+
+Cutting planes:
+  Cover: 1
+
+Explored 1 nodes (4 simplex iterations) in 0.01 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 2: -1279 -804
+No other solutions better than -1279
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective -1.279000000000e+03, best bound -1.279000000000e+03, gap 0.0000%
+
+User-callback calls 490, time in user-callback 0.00 sec
+
+
+
+
+
+

5.4. Capacitated P-Median

+

The capacitated p-median problem is a variation of the classic \(p\)-median problem, in which a set of customers must be served by a set of facilities. In the capacitated \(p\)-median problem, each facility has a fixed capacity, and the goal is to minimize the total cost of serving the customers while ensuring that the capacity of each facility is not exceeded. Variations of this problem are often used in logistics and supply chain management to determine the most efficient locations for +warehouses or distribution centers.

+
+

Formulation

+

Let \(I=\{1,\ldots,n\}\) be the set of customers. For each customer \(i \in I\), let \(d_i\) be its demand and let \(y_i\) be a binary decision variable that equals one if we decide to open a facility at that customer’s location. For each pair \((i,j) \in I \times I\), let \(x_{ij}\) be a binary decision variable that equals one if customer \(i\) is assigned to facility \(j\). Furthermore, let \(w_{ij}\) be the cost of serving customer \(i\) from facility +\(j\), let \(p\) be the number of facilities we must open, and let \(c_j\) be the capacity of facility \(j\). The problem is formulated as:

+
+\[\begin{split}\begin{align*} + \text{minimize}\;\;\; + & \sum_{i \in I} \sum_{j \in I} w_{ij} x_{ij} + \\ + \text{subject to}\;\;\; + & \sum_{j \in I} x_{ij} = 1 & \forall i \in I \\ + & \sum_{j \in I} y_j = p \\ + & \sum_{i \in I} d_i x_{ij} \leq c_j y_j & \forall j \in I \\ + & x_{ij} \in \{0, 1\} & \forall i, j \in I \\ + & y_j \in \{0, 1\} & \forall j \in I +\end{align*}\end{split}\]
+
+
+

Random instance generator

+

The class PMedianGenerator can be used to generate random instances of this problem. First, it decides the number of customers and the parameter \(p\) by sampling the provided n and p distributions, respectively. Then, for each customer \(i\), the class builds its geographical location \((x_i, y_i)\) by sampling the provided x and y distributions. For each \(i\), the demand for customer \(i\) +and the capacity of facility \(i\) are decided by sampling the provided distributions demands and capacities, respectively. Finally, the costs \(w_{ij}\) are set to the Euclidean distance between the locations of customers \(i\) and \(j\).

+

If fixed=True, then the number of customers, their locations, the parameter \(p\), the demands and the capacities are only sampled from their respective distributions exactly once, to build a reference instance which is then randomly perturbed. Specifically, in each perturbation, the distances, demands and capacities are multiplied by random scaling factors sampled from the distributions distances_jitter, demands_jitter and capacities_jitter, respectively. The result is a +list of instances that have the same set of customers, but slightly different demands, capacities and distances.

+
+
+
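As a minimal sketch of this perturbation step (plain NumPy, with made-up reference values; not the PMedianGenerator code):
+import numpy as np
+
+rng = np.random.default_rng(42)
+
+# Made-up reference instance (not produced by the library)
+ref_distances = np.array([[0.0, 50.0], [50.0, 0.0]])
+ref_demands = np.array([6.1, 1.4])
+ref_capacities = np.array([150.0, 40.0])
+
+# With fixed=True, each new instance rescales the reference data by factors
+# drawn from distances_jitter, demands_jitter and capacities_jitter
+for k in range(3):
+    distances = np.round(ref_distances * rng.uniform(0.9, 1.1, size=ref_distances.shape), 2)
+    demands = np.round(ref_demands * rng.uniform(0.9, 1.1, size=ref_demands.shape), 2)
+    capacities = np.round(ref_capacities * rng.uniform(0.9, 1.1, size=ref_capacities.shape), 2)
+    print(k, demands, capacities, distances[0, 1])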

Example

+
+
[3]:
+
+
+
import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.pmedian import PMedianGenerator, build_pmedian_model_gurobipy
+
+# Set random seed, to make example reproducible
+np.random.seed(42)
+
+# Generate random instances with ten customers located in a
+# 100x100 square, with demands in [0,10], capacities in [0, 250].
+data = PMedianGenerator(
+    x=uniform(loc=0.0, scale=100.0),
+    y=uniform(loc=0.0, scale=100.0),
+    n=randint(low=10, high=11),
+    p=randint(low=5, high=6),
+    demands=uniform(loc=0, scale=10),
+    capacities=uniform(loc=0, scale=250),
+    distances_jitter=uniform(loc=0.9, scale=0.2),
+    demands_jitter=uniform(loc=0.9, scale=0.2),
+    capacities_jitter=uniform(loc=0.9, scale=0.2),
+    fixed=True,
+).generate(10)
+
+# Print data for one of the instances
+print("p =", data[0].p)
+print("distances =\n", data[0].distances)
+print("demands =", data[0].demands)
+print("capacities =", data[0].capacities)
+print()
+
+# Build and optimize model
+model = build_pmedian_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+p = 5
+distances =
+ [[  0.    50.17  82.42  32.76  33.2   35.45  86.88  79.11  43.17  66.2 ]
+ [ 50.17   0.    72.64  72.51  17.06  80.25  39.92  68.93  43.41  42.96]
+ [ 82.42  72.64   0.    71.69  70.92  82.51  67.88   3.76  39.74  30.73]
+ [ 32.76  72.51  71.69   0.    56.56  11.03 101.35  69.39  42.09  68.58]
+ [ 33.2   17.06  70.92  56.56   0.    63.68  54.71  67.16  34.89  44.99]
+ [ 35.45  80.25  82.51  11.03  63.68   0.   111.04  80.29  52.78  79.36]
+ [ 86.88  39.92  67.88 101.35  54.71 111.04   0.    65.13  61.37  40.82]
+ [ 79.11  68.93   3.76  69.39  67.16  80.29  65.13   0.    36.26  27.24]
+ [ 43.17  43.41  39.74  42.09  34.89  52.78  61.37  36.26   0.    26.62]
+ [ 66.2   42.96  30.73  68.58  44.99  79.36  40.82  27.24  26.62   0.  ]]
+demands = [6.12 1.39 2.92 3.66 4.56 7.85 2.   5.14 5.92 0.46]
+capacities = [151.89  42.63  16.26 237.22 241.41 202.1   76.15  24.42 171.06 110.04]
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 21 rows, 110 columns and 220 nonzeros
+Model fingerprint: 0x8d8d9346
+Variable types: 0 continuous, 110 integer (110 binary)
+Coefficient statistics:
+  Matrix range     [5e-01, 2e+02]
+  Objective range  [4e+00, 1e+02]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+00, 5e+00]
+Found heuristic solution: objective 368.7900000
+Presolve time: 0.00s
+Presolved: 21 rows, 110 columns, 220 nonzeros
+Variable types: 0 continuous, 110 integer (110 binary)
+Found heuristic solution: objective 245.6400000
+
+Root relaxation: objective 0.000000e+00, 18 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0    0.00000    0    6  245.64000    0.00000   100%     -    0s
+H    0     0                     185.1900000    0.00000   100%     -    0s
+H    0     0                     148.6300000   17.14595  88.5%     -    0s
+H    0     0                     113.1800000   17.14595  84.9%     -    0s
+     0     0   17.14595    0   10  113.18000   17.14595  84.9%     -    0s
+H    0     0                      99.5000000   17.14595  82.8%     -    0s
+H    0     0                      98.3900000   17.14595  82.6%     -    0s
+H    0     0                      93.9800000   64.28872  31.6%     -    0s
+     0     0   64.28872    0   15   93.98000   64.28872  31.6%     -    0s
+H    0     0                      93.9200000   64.28872  31.5%     -    0s
+     0     0   86.06884    0   15   93.92000   86.06884  8.36%     -    0s
+*    0     0               0      91.2300000   91.23000  0.00%     -    0s
+
+Explored 1 nodes (70 simplex iterations) in 0.08 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 10: 91.23 93.92 93.98 ... 368.79
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 9.123000000000e+01, best bound 9.123000000000e+01, gap 0.0000%
+
+User-callback calls 190, time in user-callback 0.00 sec
+
+
+
+
+
+

5.5. Set cover

+

The set cover problem is a classical NP-hard optimization problem which aims to minimize the number of sets needed to cover all elements in a given universe. Each set may contain a different number of elements, and sets may overlap with each other. This problem can be useful in various real-world scenarios such as scheduling, resource allocation, and network design.

+
+

Formulation

+

Let \(U = \{1,\ldots,n\}\) be a given universe set, and let \(S=\{S_1,\ldots,S_m\}\) be a collection of sets whose union equals \(U\). For each \(j \in \{1,\ldots,m\}\), let \(w_j\) be the weight of set \(S_j\), and let \(x_j\) be a binary decision variable that equals one if set \(S_j\) is chosen. The set cover problem is formulated as:

+
+\[\begin{split}\begin{align*} + \text{minimize}\;\;\; + & \sum_{j=1}^m w_j x_j + \\ + \text{subject to}\;\;\; + & \sum_{j : i \in S_j} x_j \geq 1 & \forall i \in \{1,\ldots,n\} \\ + & x_j \in \{0, 1\} & \forall j \in \{1,\ldots,m\} +\end{align*}\end{split}\]
+
+
+

Random instance generator

+

The class SetCoverGenerator can generate random instances of this problem. The class first decides the number of elements and sets by sampling the provided distributions n_elements and n_sets, respectively. Then it generates a random incidence matrix \(M\), as follows:

+
    +
  1. The density \(d\) of \(M\) is decided by sampling the provided probability distribution density.

  2. +
  3. Each entry of \(M\) is then sampled from the Bernoulli distribution, with probability \(d\).

  4. +
  5. To ensure that each element belongs to at least one set, the class identifies elements that are not contained in any set, then assigns them to a random set (chosen uniformly).

  6. +
  7. Similarly, to ensure that each set contains at least one element, the class identifies empty sets, then modifies them to include one random element (chosen uniformly).

  8. +
+

Finally, the weight of set \(j\) is set to \(w_j + K | S_j |\), where \(w_j\) and \(K\) are sampled from costs and K, respectively, and where \(|S_j|\) denotes the size of set \(S_j\). The parameter \(K\) is used to introduce some correlation between the size of the set and its weight, making the instance more challenging. Note that K is only sampled once for the entire instance.

+

If fix_sets=True, then all generated instances have exactly the same sets and elements. The costs of the sets, however, are multiplied by random scaling factors sampled from the provided probability distribution costs_jitter.

+
+
+
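The generation procedure above can be summarized in a short NumPy sketch (an illustration only, with a simplified repair step; not the SetCoverGenerator implementation):
+import numpy as np
+
+rng = np.random.default_rng(42)
+n_elements, n_sets, density = 5, 10, 0.5
+
+# Bernoulli incidence matrix: M[i, j] = 1 if element i belongs to set S_j
+M = (rng.random((n_elements, n_sets)) < density).astype(int)
+
+# Simplified repair: cover elements that belong to no set, and add one
+# random element to each empty set
+for i in np.where(M.sum(axis=1) == 0)[0]:
+    M[i, rng.integers(n_sets)] = 1
+for j in np.where(M.sum(axis=0) == 0)[0]:
+    M[rng.integers(n_elements), j] = 1
+
+# Weight of set j: w_j + K * |S_j|
+K = 25.0
+w = rng.uniform(0, 1000, size=n_sets)
+weights = np.round(w + K * M.sum(axis=0), 2)
+print(M)
+print(weights)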

Example

+
+
[4]:
+
+
+
import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.setcover import SetCoverGenerator, build_setcover_model_gurobipy
+
+# Set random seed, to make example reproducible
+np.random.seed(42)
+
+# Build random instances with five elements, ten sets and costs
+# in the [0, 1000] interval, with a correlation factor of 25 and
+# an incidence matrix with 25% density.
+data = SetCoverGenerator(
+    n_elements=randint(low=5, high=6),
+    n_sets=randint(low=10, high=11),
+    costs=uniform(loc=0.0, scale=1000.0),
+    costs_jitter=uniform(loc=0.90, scale=0.20),
+    density=uniform(loc=0.5, scale=0.00),
+    K=uniform(loc=25.0, scale=0.0),
+    fix_sets=True,
+).generate(10)
+
+# Print problem data for one instance
+print("matrix\n", data[0].incidence_matrix)
+print("costs", data[0].costs)
+print()
+
+# Build and optimize model
+model = build_setcover_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+matrix
+ [[1 0 0 0 1 1 1 0 0 0]
+ [1 0 0 1 1 1 1 0 1 1]
+ [0 1 1 1 1 0 1 0 0 1]
+ [0 1 1 0 0 0 1 1 0 1]
+ [1 1 1 0 1 0 1 0 0 1]]
+costs [1044.58  850.13 1014.5   944.83  697.9   971.87  213.49  220.98   70.23
+  425.33]
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 5 rows, 10 columns and 28 nonzeros
+Model fingerprint: 0xe5c2d4fa
+Variable types: 0 continuous, 10 integer (10 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [7e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+00, 1e+00]
+Found heuristic solution: objective 213.4900000
+Presolve removed 5 rows and 10 columns
+Presolve time: 0.00s
+Presolve: All rows and columns removed
+
+Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)
+Thread count was 1 (of 20 available processors)
+
+Solution count 1: 213.49
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 2.134900000000e+02, best bound 2.134900000000e+02, gap 0.0000%
+
+User-callback calls 178, time in user-callback 0.00 sec
+
+
+
+
+
+

5.6. Set Packing

+

Set packing is a classical optimization problem that asks for a maximum-weight collection of pairwise disjoint sets within a given list. This problem often arises in real-world situations where a finite number of resources need to be allocated to tasks, such as airline flight crew scheduling.

+
+

Formulation

+

Let \(U=\{1,\ldots,n\}\) be a given universe set, and let \(S = \{S_1, \ldots, S_m\}\) be a collection of subsets of \(U\). For each subset \(j \in \{1, \ldots, m\}\), let \(w_j\) be the weight of \(S_j\) and let \(x_j\) be a binary decision variable which equals one if set \(S_j\) is chosen. The problem is formulated as:

+
+\[\begin{split}\begin{align*} + \text{minimize}\;\;\; + & -\sum_{j=1}^m w_j x_j + \\ + \text{subject to}\;\;\; + & \sum_{j : i \in S_j} x_j \leq 1 & \forall i \in \{1,\ldots,n\} \\ + & x_j \in \{0, 1\} & \forall j \in \{1,\ldots,m\} +\end{align*}\end{split}\]
+
+
+

Random instance generator

+

The class SetPackGenerator can generate random instances of this problem. It accepts exactly the same arguments, and generates instance data in exactly the same way as SetCoverGenerator. For more details, please see the documentation for that class.

+
+
+

Example

+
+
[5]:
+
+
+
import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.setpack import SetPackGenerator, build_setpack_model_gurobipy
+
+# Set random seed, to make example reproducible
+np.random.seed(42)
+
+# Build random instances with five elements, ten sets and costs
+# in the [0, 1000] interval, with a correlation factor of 25 and
+# an incidence matrix with 25% density.
+data = SetPackGenerator(
+    n_elements=randint(low=5, high=6),
+    n_sets=randint(low=10, high=11),
+    costs=uniform(loc=0.0, scale=1000.0),
+    costs_jitter=uniform(loc=0.90, scale=0.20),
+    density=uniform(loc=0.5, scale=0.00),
+    K=uniform(loc=25.0, scale=0.0),
+    fix_sets=True,
+).generate(10)
+
+# Print problem data for one instance
+print("matrix\n", data[0].incidence_matrix)
+print("costs", data[0].costs)
+print()
+
+# Build and optimize model
+model = build_setpack_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+matrix
+ [[1 0 0 0 1 1 1 0 0 0]
+ [1 0 0 1 1 1 1 0 1 1]
+ [0 1 1 1 1 0 1 0 0 1]
+ [0 1 1 0 0 0 1 1 0 1]
+ [1 1 1 0 1 0 1 0 0 1]]
+costs [1044.58  850.13 1014.5   944.83  697.9   971.87  213.49  220.98   70.23
+  425.33]
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 5 rows, 10 columns and 28 nonzeros
+Model fingerprint: 0x4ee91388
+Variable types: 0 continuous, 10 integer (10 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [7e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+00, 1e+00]
+Found heuristic solution: objective -1265.560000
+Presolve removed 5 rows and 10 columns
+Presolve time: 0.00s
+Presolve: All rows and columns removed
+
+Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)
+Thread count was 1 (of 20 available processors)
+
+Solution count 2: -1986.37 -1265.56
+No other solutions better than -1986.37
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective -1.986370000000e+03, best bound -1.986370000000e+03, gap 0.0000%
+
+User-callback calls 238, time in user-callback 0.00 sec
+
+
+
+
+
+

5.7. Stable Set

+

The maximum-weight stable set problem is a classical optimization problem in graph theory which asks for the maximum-weight subset of vertices in a graph such that no two vertices in the subset are adjacent. The problem often arises in real-world scheduling or resource allocation situations, where stable sets represent tasks or resources that can be chosen simultaneously without conflicts.

+
+

Formulation

+

Let \(G=(V,E)\) be a simple undirected graph, and for each vertex \(v \in V\), let \(w_v\) be its weight. The problem is formulated as:

+
+\[\begin{split}\begin{align*} +\text{minimize} \;\;\; & -\sum_{v \in V} w_v x_v \\ +\text{such that} \;\;\; & x_v + x_u \leq 1 & \forall (v,u) \in E \\ +& x_v \in \{0, 1\} & \forall v \in V +\end{align*}\end{split}\]
+
+
+

Random instance generator

+

The class MaxWeightStableSetGenerator can be used to generate random instances of this problem. The class first samples the user-provided probability distributions n and p to decide the number of vertices and the density of the graph. Then, it generates a random Erdős-Rényi graph \(G_{n,p}\). We recall that, in such a graph, each potential edge is included with probability \(p\), independently of each +other. The class then samples the provided probability distribution w to decide the vertex weights.

+

If fix_graph=True, then all generated instances have the same random graph. For each instance, the weights are decided by sampling w, as described above.

+
+
+
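For illustration, the graph and weight sampling described above can be reproduced with networkx and NumPy (a minimal sketch, not the MaxWeightStableSetGenerator code):
+import networkx as nx
+import numpy as np
+
+rng = np.random.default_rng(42)
+n, p = 10, 0.25
+
+# Erdos-Renyi random graph G_{n,p}: each potential edge appears independently
+# with probability p
+graph = nx.gnp_random_graph(n, p, seed=42)
+
+# With fix_graph=True, the graph stays fixed and only the weights are resampled
+weights = [np.round(rng.uniform(0, 100, size=n), 2) for _ in range(3)]
+print(list(graph.edges))
+print(weights[0])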

Example

+
+
[6]:
+
+
+
import random
+import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.stab import (
+    MaxWeightStableSetGenerator,
+    build_stab_model_gurobipy,
+)
+
+# Set random seed to make example reproducible
+random.seed(42)
+np.random.seed(42)
+
+# Generate random instances with a fixed 10-node graph,
+# 25% density and random weights in the [0, 100] interval.
+data = MaxWeightStableSetGenerator(
+    w=uniform(loc=0.0, scale=100.0),
+    n=randint(low=10, high=11),
+    p=uniform(loc=0.25, scale=0.0),
+    fix_graph=True,
+).generate(10)
+
+# Print the graph and weights for two instances
+print("graph", data[0].graph.edges)
+print("weights[0]", data[0].weights)
+print("weights[1]", data[1].weights)
+print()
+
+# Load and optimize the first instance
+model = build_stab_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+graph [(0, 2), (0, 4), (0, 8), (1, 2), (1, 3), (1, 5), (1, 6), (1, 9), (2, 5), (2, 9), (3, 6), (3, 7), (6, 9), (7, 8), (8, 9)]
+weights[0] [37.45 95.07 73.2  59.87 15.6  15.6   5.81 86.62 60.11 70.81]
+weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]
+
+Set parameter PreCrush to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 15 rows, 10 columns and 30 nonzeros
+Model fingerprint: 0x3240ea4a
+Variable types: 0 continuous, 10 integer (10 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [6e+00, 1e+02]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+00, 1e+00]
+Found heuristic solution: objective -219.1400000
+Presolve removed 7 rows and 2 columns
+Presolve time: 0.00s
+Presolved: 8 rows, 8 columns, 19 nonzeros
+Variable types: 0 continuous, 8 integer (8 binary)
+
+Root relaxation: objective -2.205650e+02, 5 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 infeasible    0      -219.14000 -219.14000  0.00%     -    0s
+
+Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 1: -219.14
+No other solutions better than -219.14
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective -2.191400000000e+02, best bound -2.191400000000e+02, gap 0.0000%
+
+User-callback calls 299, time in user-callback 0.00 sec
+
+
+
+
+
+

5.8. Traveling Salesman

+

Given a list of cities and the distances between them, the traveling salesman problem asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp’s 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes.

+
+

Formulation

+

Let \(G=(V,E)\) be a simple undirected graph. For each edge \(e \in E\), let \(d_e\) be its weight (or distance) and let \(x_e\) be a binary decision variable which equals one if \(e\) is included in the route. The problem is formulated as:

+
+\[\begin{split}\begin{align*} +\text{minimize} \;\;\; + & \sum_{e \in E} d_e x_e \\ +\text{such that} \;\;\; + & \sum_{e \in \delta(v)} x_e = 2 & \forall v \in V, \\ + & \sum_{e \in \delta(S)} x_e \geq 2 & \forall S \subsetneq V, S \neq \emptyset, \\ + & x_e \in \{0, 1\} & \forall e \in E, +\end{align*}\end{split}\]
+

where \(\delta(v)\) denotes the set of edges adjacent to vertex \(v\), and \(\delta(S)\) denotes the set of edges that have one extremity in \(S\) and one in \(V \setminus S\). Because of its exponential size, we enforce the second set of inequalities as lazy constraints.

+
+
+
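Because there are exponentially many such inequalities, they are typically added on the fly from a solver callback. Below is a rough gurobipy sketch of that mechanism; the model._x attribute (a tupledict of binary edge variables keyed by city pairs) is an assumed convention for this illustration, not the actual code behind build_tsp_model_gurobipy:
+import networkx as nx
+import gurobipy as gp
+from gurobipy import GRB
+
+def subtour_callback(model, where):
+    # At each new integer solution, find the connected components of the chosen
+    # edges; if there is more than one, add a violated cut for each component.
+    if where != GRB.Callback.MIPSOL:
+        return
+    vals = model.cbGetSolution(model._x)
+    chosen = [e for e, v in vals.items() if v > 0.5]
+    components = list(nx.connected_components(nx.Graph(chosen)))
+    if len(components) > 1:
+        for s in components:
+            # Edges with exactly one endpoint inside the component s
+            cut = [e for e in model._x if (e[0] in s) != (e[1] in s)]
+            model.cbLazy(gp.quicksum(model._x[e] for e in cut) >= 2)
+
+# Usage sketch:
+#   model.Params.LazyConstraints = 1
+#   model.optimize(subtour_callback)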

Random instance generator

+

The class TravelingSalesmanGenerator can be used to generate random instances of this problem. Initially, the class samples the user-provided probability distribution n to decide how many cities to generate. Then, for each city \(i\), the class generates its geographical location \((x_i, y_i)\) by sampling the provided distributions x and y. The distance \(d_{ij}\) between cities \(i\) and +\(j\) is then set to

+
+\[\gamma_{ij} \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2},\]
+

where \(\gamma\) is a random scaling factor sampled from the provided probability distribution gamma.

+

If fix_cities=True, then the list of cities is kept the same for all generated instances. The \(\gamma\) values, however, and therefore also the distances, are still different. By default, all distances \(d_{ij}\) are rounded to the nearest integer. If round=False is provided, this rounding will be disabled.

+
+
+
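For illustration, the distance construction described above can be sketched with NumPy/SciPy (not the TravelingSalesmanGenerator code; symmetrizing the scaling factors is a simplifying assumption):
+import numpy as np
+from scipy.spatial.distance import cdist
+
+rng = np.random.default_rng(42)
+n = 10
+
+# City locations sampled in the 1000x1000 box
+cities = rng.uniform(0, 1000, size=(n, 2))
+
+# d_ij = gamma_ij * Euclidean distance, rounded to the nearest integer
+gamma = rng.uniform(0.9, 1.1, size=(n, n))
+gamma = (gamma + gamma.T) / 2   # keep the matrix symmetric
+distances = np.round(gamma * cdist(cities, cities))
+print(distances)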

Example

+
+
[7]:
+
+
+
import random
+import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.tsp import (
+    TravelingSalesmanGenerator,
+    build_tsp_model_gurobipy,
+)
+
+# Set random seed to make example reproducible
+random.seed(42)
+np.random.seed(42)
+
+# Generate random instances with a fixed ten cities in the 1000x1000 box
+# and random distance scaling factors in the [0.90, 1.10] interval.
+data = TravelingSalesmanGenerator(
+    n=randint(low=10, high=11),
+    x=uniform(loc=0.0, scale=1000.0),
+    y=uniform(loc=0.0, scale=1000.0),
+    gamma=uniform(loc=0.90, scale=0.20),
+    fix_cities=True,
+    round=True,
+).generate(10)
+
+# Print distance matrices for the first two instances
+print("distances[0]\n", data[0].distances)
+print("distances[1]\n", data[1].distances)
+print()
+
+# Load and optimize the first instance
+model = build_tsp_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+distances[0]
+ [[   0.  513.  762.  358.  325.  374.  932.  731.  391.  634.]
+ [ 513.    0.  726.  765.  163.  754.  409.  719.  446.  400.]
+ [ 762.  726.    0.  780.  756.  744.  656.   40.  383.  334.]
+ [ 358.  765.  780.    0.  549.  117.  925.  702.  422.  728.]
+ [ 325.  163.  756.  549.    0.  663.  526.  708.  377.  462.]
+ [ 374.  754.  744.  117.  663.    0. 1072.  802.  501.  853.]
+ [ 932.  409.  656.  925.  526. 1072.    0.  654.  603.  433.]
+ [ 731.  719.   40.  702.  708.  802.  654.    0.  381.  255.]
+ [ 391.  446.  383.  422.  377.  501.  603.  381.    0.  287.]
+ [ 634.  400.  334.  728.  462.  853.  433.  255.  287.    0.]]
+distances[1]
+ [[   0.  493.  900.  354.  323.  367.  841.  727.  444.  668.]
+ [ 493.    0.  690.  687.  175.  725.  368.  744.  398.  446.]
+ [ 900.  690.    0.  666.  728.  827.  736.   41.  371.  317.]
+ [ 354.  687.  666.    0.  570.  104. 1090.  712.  454.  648.]
+ [ 323.  175.  728.  570.    0.  655.  521.  650.  356.  469.]
+ [ 367.  725.  827.  104.  655.    0. 1146.  779.  476.  752.]
+ [ 841.  368.  736. 1090.  521. 1146.    0.  681.  565.  394.]
+ [ 727.  744.   41.  712.  650.  779.  681.    0.  374.  286.]
+ [ 444.  398.  371.  454.  356.  476.  565.  374.    0.  274.]
+ [ 668.  446.  317.  648.  469.  752.  394.  286.  274.    0.]]
+
+Set parameter PreCrush to value 1
+Set parameter LazyConstraints to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 10 rows, 45 columns and 90 nonzeros
+Model fingerprint: 0x719675e5
+Variable types: 0 continuous, 45 integer (45 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [4e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [2e+00, 2e+00]
+Presolve time: 0.00s
+Presolved: 10 rows, 45 columns, 90 nonzeros
+Variable types: 0 continuous, 45 integer (45 binary)
+
+Root relaxation: objective 2.921000e+03, 17 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+*    0     0               0    2921.0000000 2921.00000  0.00%     -    0s
+
+Cutting planes:
+  Lazy constraints: 3
+
+Explored 1 nodes (17 simplex iterations) in 0.01 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 1: 2921
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 2.921000000000e+03, best bound 2.921000000000e+03, gap 0.0000%
+
+User-callback calls 106, time in user-callback 0.00 sec
+
+
+
+
+
+

5.9. Unit Commitment

+

The unit commitment problem is a mixed-integer optimization problem which asks which power generation units should be turned on and off, at what time, and at what capacity, in order to meet the demand for electricity generation at the lowest cost. Numerous operational constraints are typically enforced, such as ramping constraints, which prevent generation units from changing power output levels too quickly from one time step to the next, and minimum-up and minimum-down constraints, +which prevent units from switching on and off too frequently. The unit commitment problem is widely used in power systems planning and operations.

+
+

Note

+

MIPLearn includes a simple formulation for the unit commitment problem, which enforces only minimum and maximum power production, as well as minimum-up and minimum-down constraints. The formulation does not enforce, for example, ramping trajectories, piecewise-linear cost curves, start-up costs or transmission and n-1 security constraints. For a more complete set of formulations, solution methods and realistic benchmark instances for the problem, see UnitCommitment.jl.

+
+
+

Formulation

+

Let \(T\) be the number of time steps, \(G\) be the number of generation units, and let \(D_t\) be the power demand (in MW) at time \(t\). For each generating unit \(g\), let \(P^\max_g\) and \(P^\min_g\) be the maximum and minimum amount of power the unit is able to produce when switched on; let \(L_g\) and \(l_g\) be the minimum up- and down-time for unit \(g\); let \(C^\text{fixed}_g\) be the cost to keep unit \(g\) on for one time step, regardless of its power output level; let \(C^\text{start}_g\) be the cost to switch unit \(g\) on; and let \(C^\text{var}_g\) be the cost for generator \(g\) to produce 1 MW of power. In this formulation, we assume linear production costs. For each generator \(g\) and time \(t\), let \(x_{gt}\) be a binary variable which equals one if unit \(g\) is on at time \(t\), let \(w_{gt}\) be a binary variable which equals one if unit \(g\) switches from being off at time \(t-1\) to being on at time \(t\), and let \(p_{gt}\) be a continuous variable which indicates the amount of power generated by unit \(g\) at time \(t\). The formulation is given by:

+
+\[\begin{split}\begin{align*}
+\text{minimize} \;\;\;
+    & \sum_{t=1}^T \sum_{g=1}^G \left(
+        x_{gt} C^\text{fixed}_g
+        + w_{gt} C^\text{start}_g
+        + p_{gt} C^\text{var}_g
+    \right) \\
+\text{such that} \;\;\;
+    & \sum_{k=t-L_g+1}^t w_{gk} \leq x_{gt}
+    & \forall g\; \forall t=L_g-1,\ldots,T-1 \\
+    & \sum_{k=t-l_g+1}^t w_{gk} \leq 1 - x_{g,t-l_g+1}
+    & \forall g\; \forall t=l_g-1,\ldots,T-1 \\
+    & w_{gt} \geq x_{gt} - x_{g,t-1}
+    & \forall g\; \forall t=1,\ldots,T-1 \\
+    & \sum_{g=1}^G p_{gt} \geq D_t
+    & \forall t \\
+    & P^\text{min}_g x_{gt} \leq p_{gt}
+    & \forall g, t \\
+    & p_{gt} \leq P^\text{max}_g x_{gt}
+    & \forall g, t \\
+    & x_{gt}, w_{gt} \in \{0, 1\}
+    & \forall g, t \\
+    & p_{gt} \geq 0
+    & \forall g, t.
+\end{align*}\end{split}\]
+

The first set of inequalities enforces minimum up-time constraints: if unit \(g\) is down at time \(t\), then it cannot have started up during the previous \(L_g\) time steps. The second set of inequalities enforces minimum down-time constraints, and is symmetrical to the previous one. The third set ensures that if unit \(g\) switches from off at time \(t-1\) to on at time \(t\), then the start-up variable \(w_{gt}\) must be one. The fourth set ensures that demand is satisfied at each time period. The fifth and sixth sets enforce bounds on the quantity of power generated by each unit.

+
+
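To make the constraints above more concrete, the sketch below shows how the minimum up-time, start-up, production-limit, and demand constraints could be written with gurobipy. All data in the snippet (horizon, unit parameters, demand) is hypothetical, and this is only an illustration of the formulation; it is not the implementation behind build_uc_model_gurobipy, and the analogous minimum down-time constraints are omitted for brevity.
+
+# Minimal gurobipy sketch of the core constraints above (illustration only).
+import gurobipy as gp
+import numpy as np
+
+T, G = 4, 2                          # hypothetical horizon and number of units
+P_min = np.array([10.0, 20.0])       # hypothetical unit data (MW)
+P_max = np.array([50.0, 80.0])
+L = np.array([2, 2])                 # minimum up-time per unit
+C_fix = np.array([100.0, 200.0])
+C_start = np.array([500.0, 800.0])
+C_var = np.array([5.0, 3.0])
+D = np.array([30.0, 60.0, 90.0, 60.0])
+
+m = gp.Model()
+x = m.addVars(G, T, vtype="B")       # unit g is on at time t
+w = m.addVars(G, T, vtype="B")       # unit g starts up at time t
+p = m.addVars(G, T)                  # power produced (continuous, >= 0)
+
+m.setObjective(
+    gp.quicksum(
+        x[g, t] * C_fix[g] + w[g, t] * C_start[g] + p[g, t] * C_var[g]
+        for g in range(G)
+        for t in range(T)
+    )
+)
+
+for g in range(G):
+    for t in range(T):
+        # Start-up indicator: w[g,t] >= x[g,t] - x[g,t-1]
+        if t >= 1:
+            m.addConstr(w[g, t] >= x[g, t] - x[g, t - 1])
+        # Minimum up-time: a start-up in the last L[g] periods forces x[g,t] = 1
+        if t >= L[g] - 1:
+            m.addConstr(
+                gp.quicksum(w[g, k] for k in range(t - L[g] + 1, t + 1)) <= x[g, t]
+            )
+        # Production limits
+        m.addConstr(p[g, t] >= P_min[g] * x[g, t])
+        m.addConstr(p[g, t] <= P_max[g] * x[g, t])
+
+for t in range(T):
+    # Demand satisfaction
+    m.addConstr(gp.quicksum(p[g, t] for g in range(G)) >= D[t])
+
+m.optimize()
+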

References

+ +
+
+
+

Random instance generator

+

The class UnitCommitmentGenerator can be used to generate random instances of this problem.

+

First, the user-provided probability distributions n_units and n_periods are sampled to determine the number of generating units and the number of time steps, respectively. Then, for each unit, the distributions max_power and min_power are sampled to determine the unit’s maximum and minimum power output. To make it easier to generate valid ranges, min_power is not specified as the absolute power level in MW, but rather as a multiplier of max_power; for example, if max_power samples to 100 and min_power samples to 0.5, then the unit’s power range is set to [50,100]. Then, the distributions cost_startup, cost_prod and cost_fixed are sampled to determine the unit’s startup, variable and fixed costs, while the distributions min_uptime and min_downtime are sampled to determine its minimum up/down-time.

+

After parameters for the units have been generated, the class then generates a periodic demand curve, with a peak every 12 time steps, in the range \((0.4C, 0.8C)\), where \(C\) is the sum of all units’ maximum power output. Finally, all costs and demand values are perturbed by random scaling factors independently sampled from the distributions cost_jitter and demand_jitter, respectively.

+
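As an illustration of the demand curve described above, the sketch below generates one plausible periodic profile in the range \((0.4C, 0.8C)\) with a peak every 12 time steps. The sinusoidal shape and the value of \(C\) are assumptions made for this example; the generator's exact formula may differ.
+
+# Illustrative periodic demand curve in the range (0.4C, 0.8C); the generator's
+# exact formula may differ.
+import numpy as np
+
+n_periods = 24
+C = 1500.0  # hypothetical sum of all units' maximum power output
+t = np.arange(n_periods)
+demand = C * (0.6 + 0.2 * np.sin(2 * np.pi * t / 12))  # peak every 12 steps
+print(demand.round(1))
+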

If fix_units=True, then the list of generators (with their respective parameters) is kept the same for all generated instances. If cost_jitter and demand_jitter are provided, the instances will still have slightly different costs and demands.

+
+
+

Example

+
+
[8]:
+
+
+
import random
+import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.uc import UnitCommitmentGenerator, build_uc_model_gurobipy
+
+# Set random seed to make example reproducible
+random.seed(42)
+np.random.seed(42)
+
+# Generate a random instance with 5 generators and 24 time steps
+data = UnitCommitmentGenerator(
+    n_units=randint(low=5, high=6),
+    n_periods=randint(low=24, high=25),
+    max_power=uniform(loc=50, scale=450),
+    min_power=uniform(loc=0.5, scale=0.25),
+    cost_startup=uniform(loc=0, scale=10_000),
+    cost_prod=uniform(loc=0, scale=50),
+    cost_fixed=uniform(loc=0, scale=1_000),
+    min_uptime=randint(low=2, high=8),
+    min_downtime=randint(low=2, high=8),
+    cost_jitter=uniform(loc=0.75, scale=0.5),
+    demand_jitter=uniform(loc=0.9, scale=0.2),
+    fix_units=True,
+).generate(10)
+
+# Print problem data for the two first instances
+for i in range(2):
+    print(f"min_power[{i}]", data[i].min_power)
+    print(f"max_power[{i}]", data[i].max_power)
+    print(f"min_uptime[{i}]", data[i].min_uptime)
+    print(f"min_downtime[{i}]", data[i].min_downtime)
+    print(f"min_power[{i}]", data[i].min_power)
+    print(f"cost_startup[{i}]", data[i].cost_startup)
+    print(f"cost_prod[{i}]", data[i].cost_prod)
+    print(f"cost_fixed[{i}]", data[i].cost_fixed)
+    print(f"demand[{i}]\n", data[i].demand)
+    print()
+
+# Load and optimize the first instance
+model = build_uc_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+min_power[0] [117.79 245.85 271.85 207.7   81.38]
+max_power[0] [218.54 477.82 379.4  319.4  120.21]
+min_uptime[0] [7 6 3 5 7]
+min_downtime[0] [7 3 5 6 2]
+min_power[0] [117.79 245.85 271.85 207.7   81.38]
+cost_startup[0] [3042.42 5247.56 4319.45 2912.29 6118.53]
+cost_prod[0] [ 6.97 14.61 18.32 22.8  39.26]
+cost_fixed[0] [199.67 514.23 592.41  46.45 607.54]
+demand[0]
+ [ 905.06  915.41 1166.52 1212.29 1127.81  953.52  905.06  796.21  783.78
+  866.23  768.62  899.59  905.06  946.23 1087.61 1004.24 1048.36  992.03
+  905.06  750.82  691.48  606.15  658.5   809.95]
+
+min_power[1] [117.79 245.85 271.85 207.7   81.38]
+max_power[1] [218.54 477.82 379.4  319.4  120.21]
+min_uptime[1] [7 6 3 5 7]
+min_downtime[1] [7 3 5 6 2]
+min_power[1] [117.79 245.85 271.85 207.7   81.38]
+cost_startup[1] [2458.08 6200.26 4585.74 2666.05 4783.34]
+cost_prod[1] [ 6.31 13.33 20.42 24.37 46.86]
+cost_fixed[1] [196.9  416.42 655.57  52.51 626.15]
+demand[1]
+ [ 981.42  840.07 1095.59 1102.03 1088.41  932.29  863.67  848.56  761.33
+  828.28  775.18  834.99  959.76  865.72 1193.52 1058.92  985.19  893.92
+  962.16  781.88  723.15  639.04  602.4   787.02]
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 578 rows, 360 columns and 2128 nonzeros
+Model fingerprint: 0x4dc1c661
+Variable types: 120 continuous, 240 integer (240 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 5e+02]
+  Objective range  [7e+00, 6e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+00, 1e+03]
+Presolve removed 244 rows and 131 columns
+Presolve time: 0.01s
+Presolved: 334 rows, 229 columns, 842 nonzeros
+Variable types: 116 continuous, 113 integer (113 binary)
+Found heuristic solution: objective 440662.46430
+Found heuristic solution: objective 429461.97680
+Found heuristic solution: objective 374043.64040
+
+Root relaxation: objective 3.361348e+05, 142 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 336134.820    0   18 374043.640 336134.820  10.1%     -    0s
+H    0     0                    368600.14450 336134.820  8.81%     -    0s
+H    0     0                    364721.76610 336134.820  7.84%     -    0s
+     0     0     cutoff    0      364721.766 364721.766  0.00%     -    0s
+
+Cutting planes:
+  Gomory: 3
+  Cover: 8
+  Implied bound: 29
+  Clique: 222
+  MIR: 7
+  Flow cover: 7
+  RLT: 1
+  Relax-and-lift: 7
+
+Explored 1 nodes (234 simplex iterations) in 0.02 seconds (0.02 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 5: 364722 368600 374044 ... 440662
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 3.647217661000e+05, best bound 3.647217661000e+05, gap 0.0000%
+
+User-callback calls 677, time in user-callback 0.00 sec
+
+
+
+
+
+

5.10. Vertex Cover

+

Minimum weight vertex cover is a classical optimization problem in graph theory where the goal is to find a minimum-weight set of vertices that covers all edges of the graph; that is, every edge has at least one endpoint in the set. The problem generalizes vertex cover, one of Karp’s 21 NP-complete problems, and has applications in various fields, including bioinformatics and machine learning.

+
+

Formulation

+

Let \(G=(V,E)\) be a simple graph. For each vertex \(v \in V\), let \(w_v\) be its weight, and let \(x_v\) be a binary decision variable which equals one if \(v\) is included in the cover. The mixed-integer linear formulation for the problem is given by:

+
+\[\begin{split}\begin{align*}
+\text{minimize} \;\;\;
+    & \sum_{v \in V} w_v x_v \\
+\text{such that} \;\;\;
+    & x_i + x_j \ge 1 & \forall \{i, j\} \in E, \\
+    & x_v \in \{0, 1\} & \forall v \in V.
+\end{align*}\end{split}\]
+
+
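To make the formulation concrete, the sketch below builds it directly with gurobipy for a small hypothetical graph and weight vector. This is an illustration only; in MIPLearn the model is normally created with build_vertexcover_model_gurobipy, as shown in the example further below.
+
+# Minimal gurobipy sketch of the vertex cover formulation (illustration only).
+import gurobipy as gp
+
+edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # hypothetical graph
+weights = [2.0, 3.0, 1.0, 4.0]            # hypothetical vertex weights
+
+m = gp.Model()
+x = m.addVars(len(weights), vtype="B")
+m.setObjective(gp.quicksum(weights[v] * x[v] for v in range(len(weights))))
+for (i, j) in edges:
+    m.addConstr(x[i] + x[j] >= 1)  # every edge must have a covered endpoint
+m.optimize()
+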
+

Random instance generator

+

The class MinWeightVertexCoverGenerator can be used to generate random instances of this problem. The class accepts exactly the same parameters and behaves exactly in the same way as MaxWeightStableSetGenerator. See the stable set section for more details.

+
+
+

Example

+
+
[9]:
+
+
+
import random
+import numpy as np
+from scipy.stats import uniform, randint
+from miplearn.problems.vertexcover import (
+    MinWeightVertexCoverGenerator,
+    build_vertexcover_model_gurobipy,
+)
+
+# Set random seed to make example reproducible
+random.seed(42)
+np.random.seed(42)
+
+# Generate random instances with a fixed 10-node graph,
+# 25% density and random weights in the [0, 100] interval.
+data = MinWeightVertexCoverGenerator(
+    w=uniform(loc=0.0, scale=100.0),
+    n=randint(low=10, high=11),
+    p=uniform(loc=0.25, scale=0.0),
+    fix_graph=True,
+).generate(10)
+
+# Print the graph and weights for two instances
+print("graph", data[0].graph.edges)
+print("weights[0]", data[0].weights)
+print("weights[1]", data[1].weights)
+print()
+
+# Load and optimize the first instance
+model = build_vertexcover_model_gurobipy(data[0])
+model.optimize()
+
+
+
+
+
+
+
+
+graph [(0, 2), (0, 4), (0, 8), (1, 2), (1, 3), (1, 5), (1, 6), (1, 9), (2, 5), (2, 9), (3, 6), (3, 7), (6, 9), (7, 8), (8, 9)]
+weights[0] [37.45 95.07 73.2  59.87 15.6  15.6   5.81 86.62 60.11 70.81]
+weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 15 rows, 10 columns and 30 nonzeros
+Model fingerprint: 0x2d2d1390
+Variable types: 0 continuous, 10 integer (10 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [6e+00, 1e+02]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+00, 1e+00]
+Found heuristic solution: objective 301.0000000
+Presolve removed 7 rows and 2 columns
+Presolve time: 0.00s
+Presolved: 8 rows, 8 columns, 19 nonzeros
+Variable types: 0 continuous, 8 integer (8 binary)
+
+Root relaxation: objective 2.995750e+02, 8 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 infeasible    0       301.00000  301.00000  0.00%     -    0s
+
+Explored 1 nodes (8 simplex iterations) in 0.01 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 1: 301
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 3.010000000000e+02, best bound 3.010000000000e+02, gap 0.0000%
+
+User-callback calls 326, time in user-callback 0.00 sec
+
+
+
+
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/guide/solvers.ipynb b/0.4/guide/solvers.ipynb new file mode 100644 index 00000000..c4ee9bc9 --- /dev/null +++ b/0.4/guide/solvers.ipynb @@ -0,0 +1,251 @@ +{ + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "id": "9ec1907b-db93-4840-9439-c9005902b968", + "metadata": {}, + "source": [ + "# Learning Solver\n", + "\n", + "On previous pages, we discussed various components of the MIPLearn framework, including training data collectors, feature extractors, and individual machine learning components. In this page, we introduce **LearningSolver**, the main class of the framework which integrates all the aforementioned components into a cohesive whole. Using **LearningSolver** involves three steps: (i) configuring the solver; (ii) training the ML components; and (iii) solving new MIP instances. In the following, we describe each of these steps, then conclude with a complete runnable example.\n", + "\n", + "### Configuring the solver\n", + "\n", + "**LearningSolver** is composed by multiple individual machine learning components, each targeting a different part of the solution process, or implementing a different machine learning strategy. This architecture allows strategies to be easily enabled, disabled or customized, making the framework flexible. By default, no components are provided and **LearningSolver** is equivalent to a traditional MIP solver. To specify additional components, the `components` constructor argument may be used:\n", + "\n", + "```python\n", + "solver = LearningSolver(\n", + " components=[\n", + " comp1,\n", + " comp2,\n", + " comp3,\n", + " ]\n", + ")\n", + "```\n", + "\n", + "In this example, three components `comp1`, `comp2` and `comp3` are provided. The strategies implemented by these components are applied sequentially when solving the problem. For example, `comp1` and `comp2` could fix a subset of decision variables, while `comp3` constructs a warm start for the remaining problem.\n", + "\n", + "### Training and solving new instances\n", + "\n", + "Once a solver is configured, its ML components need to be trained. This can be achieved by the `solver.fit` method, as illustrated below. The method accepts a list of HDF5 files and trains each individual component sequentially. Once the solver is trained, new instances can be solved using `solver.optimize`. The method returns a dictionary of statistics collected by each component, such as the number of variables fixed.\n", + "\n", + "```python\n", + "# Build instances\n", + "train_data = ...\n", + "test_data = ...\n", + "\n", + "# Collect training data\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_model)\n", + "\n", + "# Build solver\n", + "solver = LearningSolver(...)\n", + "\n", + "# Train components\n", + "solver.fit(train_data)\n", + "\n", + "# Solve a new test instance\n", + "stats = solver.optimize(test_data[0], build_model)\n", + "\n", + "```\n", + "\n", + "### Complete example\n", + "\n", + "In the example below, we illustrate the usage of **LearningSolver** by building instances of the Traveling Salesman Problem, collecting training data, training the ML components, then solving a new instance." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "92b09b98", + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n", + "Model fingerprint: 0x6ddcd141\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [4e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Presolve time: 0.00s\n", + "Presolved: 10 rows, 45 columns, 90 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. Time\n", + " 0 6.3600000e+02 1.700000e+01 0.000000e+00 0s\n", + " 15 2.7610000e+03 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 15 iterations and 0.00 seconds (0.00 work units)\n", + "Optimal objective 2.761000000e+03\n", + "\n", + "User-callback calls 56, time in user-callback 0.00 sec\n", + "Set parameter PreCrush to value 1\n", + "Set parameter LazyConstraints to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n", + "Model fingerprint: 0x74ca3d0a\n", + "Variable types: 0 continuous, 45 integer (45 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [4e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "\n", + "User MIP start produced solution with objective 2796 (0.00s)\n", + "Loaded user MIP start with objective 2796\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 10 rows, 45 columns, 90 nonzeros\n", + "Variable types: 0 continuous, 45 integer (45 binary)\n", + "\n", + "Root relaxation: objective 2.761000e+03, 14 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 2761.00000 0 - 2796.00000 2761.00000 1.25% - 0s\n", + " 0 0 cutoff 0 2796.00000 2796.00000 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Lazy constraints: 3\n", + "\n", + "Explored 1 nodes (16 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 2796 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 110, time in user-callback 0.00 sec\n" + ] + }, + { + "data": { + "text/plain": [ + "{'WS: Count': 1, 'WS: Number of variables set': 41.0}" + ] + }, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import random\n", + "\n", + "import numpy as np\n", + "from scipy.stats import uniform, randint\n", + "from sklearn.linear_model import LogisticRegression\n", + "\n", + "from miplearn.classifiers.minprob import MinProbabilityClassifier\n", + "from 
miplearn.classifiers.singleclass import SingleClassFix\n", + "from miplearn.collectors.basic import BasicCollector\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n", + "from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n", + "from miplearn.io import write_pkl_gz\n", + "from miplearn.problems.tsp import (\n", + " TravelingSalesmanGenerator,\n", + " build_tsp_model_gurobipy,\n", + ")\n", + "from miplearn.solvers.learning import LearningSolver\n", + "\n", + "# Set random seed to make example reproducible.\n", + "random.seed(42)\n", + "np.random.seed(42)\n", + "\n", + "# Generate a few instances of the traveling salesman problem.\n", + "data = TravelingSalesmanGenerator(\n", + " n=randint(low=10, high=11),\n", + " x=uniform(loc=0.0, scale=1000.0),\n", + " y=uniform(loc=0.0, scale=1000.0),\n", + " gamma=uniform(loc=0.90, scale=0.20),\n", + " fix_cities=True,\n", + " round=True,\n", + ").generate(50)\n", + "\n", + "# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n", + "all_data = write_pkl_gz(data, \"data/tsp\")\n", + "\n", + "# Split train/test data\n", + "train_data = all_data[:40]\n", + "test_data = all_data[40:]\n", + "\n", + "# Collect training data\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_tsp_model_gurobipy, n_jobs=4)\n", + "\n", + "# Build learning solver\n", + "solver = LearningSolver(\n", + " components=[\n", + " IndependentVarsPrimalComponent(\n", + " base_clf=SingleClassFix(\n", + " MinProbabilityClassifier(\n", + " base_clf=LogisticRegression(),\n", + " thresholds=[0.95, 0.95],\n", + " ),\n", + " ),\n", + " extractor=AlvLouWeh2017Extractor(),\n", + " action=SetWarmStart(),\n", + " )\n", + " ]\n", + ")\n", + "\n", + "# Train ML models\n", + "solver.fit(train_data)\n", + "\n", + "# Solve a test instance\n", + "solver.optimize(test_data[0], build_tsp_model_gurobipy)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e27d2cbd-5341-461d-bbc1-8131aee8d949", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/guide/solvers/index.html b/0.4/guide/solvers/index.html new file mode 100644 index 00000000..1de87b32 --- /dev/null +++ b/0.4/guide/solvers/index.html @@ -0,0 +1,494 @@ + + + + + + + + 9. Learning Solver — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + + + +
+ + + +
+
+
+
+ +
+ +
+

9. Learning Solver

+

On previous pages, we discussed various components of the MIPLearn framework, including training data collectors, feature extractors, and individual machine learning components. On this page, we introduce LearningSolver, the main class of the framework, which integrates all the aforementioned components into a cohesive whole. Using LearningSolver involves three steps: (i) configuring the solver; (ii) training the ML components; and (iii) solving new MIP instances. In the following, we describe each of these steps, then conclude with a complete runnable example.

+
+

9.1. Configuring the solver

+

LearningSolver is composed of multiple individual machine learning components, each targeting a different part of the solution process, or implementing a different machine learning strategy. This architecture allows strategies to be easily enabled, disabled or customized, making the framework flexible. By default, no components are provided and LearningSolver is equivalent to a traditional MIP solver. To specify additional components, the components constructor argument may be used:

+
solver = LearningSolver(
+    components=[
+        comp1,
+        comp2,
+        comp3,
+    ]
+)
+
+
+

In this example, three components comp1, comp2 and comp3 are provided. The strategies implemented by these components are applied sequentially when solving the problem. For example, comp1 and comp2 could fix a subset of decision variables, while comp3 constructs a warm start for the remaining problem.

+
+
+

9.2. Training and solving new instances

+

Once a solver is configured, its ML components need to be trained. This can be achieved by the solver.fit method, as illustrated below. The method accepts a list of HDF5 files and trains each individual component sequentially. Once the solver is trained, new instances can be solved using solver.optimize. The method returns a dictionary of statistics collected by each component, such as the number of variables fixed.

+
# Build instances
+train_data = ...
+test_data = ...
+
+# Collect training data
+bc = BasicCollector()
+bc.collect(train_data, build_model)
+
+# Build solver
+solver = LearningSolver(...)
+
+# Train components
+solver.fit(train_data)
+
+# Solve a new test instance
+stats = solver.optimize(test_data[0], build_model)
+
+
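The contents of the returned dictionary depend on which components are enabled. For example, with the warm-start component used in the complete example below, the statistics may look like the following (the key names shown are taken from that example's output and are component-specific):
+
+# Inspect the statistics returned by solver.optimize
+print(stats)
+# {'WS: Count': 1, 'WS: Number of variables set': 41.0}
+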
+
+
+

9.3. Complete example

+

In the example below, we illustrate the usage of LearningSolver by building instances of the Traveling Salesman Problem, collecting training data, training the ML components, then solving a new instance.

+
+
[1]:
+
+
+
import random
+
+import numpy as np
+from scipy.stats import uniform, randint
+from sklearn.linear_model import LogisticRegression
+
+from miplearn.classifiers.minprob import MinProbabilityClassifier
+from miplearn.classifiers.singleclass import SingleClassFix
+from miplearn.collectors.basic import BasicCollector
+from miplearn.components.primal.actions import SetWarmStart
+from miplearn.components.primal.indep import IndependentVarsPrimalComponent
+from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor
+from miplearn.io import write_pkl_gz
+from miplearn.problems.tsp import (
+    TravelingSalesmanGenerator,
+    build_tsp_model_gurobipy,
+)
+from miplearn.solvers.learning import LearningSolver
+
+# Set random seed to make example reproducible.
+random.seed(42)
+np.random.seed(42)
+
+# Generate a few instances of the traveling salesman problem.
+data = TravelingSalesmanGenerator(
+    n=randint(low=10, high=11),
+    x=uniform(loc=0.0, scale=1000.0),
+    y=uniform(loc=0.0, scale=1000.0),
+    gamma=uniform(loc=0.90, scale=0.20),
+    fix_cities=True,
+    round=True,
+).generate(50)
+
+# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...
+all_data = write_pkl_gz(data, "data/tsp")
+
+# Split train/test data
+train_data = all_data[:40]
+test_data = all_data[40:]
+
+# Collect training data
+bc = BasicCollector()
+bc.collect(train_data, build_tsp_model_gurobipy, n_jobs=4)
+
+# Build learning solver
+solver = LearningSolver(
+    components=[
+        IndependentVarsPrimalComponent(
+            base_clf=SingleClassFix(
+                MinProbabilityClassifier(
+                    base_clf=LogisticRegression(),
+                    thresholds=[0.95, 0.95],
+                ),
+            ),
+            extractor=AlvLouWeh2017Extractor(),
+            action=SetWarmStart(),
+        )
+    ]
+)
+
+# Train ML models
+solver.fit(train_data)
+
+# Solve a test instance
+solver.optimize(test_data[0], build_tsp_model_gurobipy)
+
+
+
+
+
+
+
+
+Restricted license - for non-production use only - expires 2024-10-28
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 10 rows, 45 columns and 90 nonzeros
+Model fingerprint: 0x6ddcd141
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [4e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [2e+00, 2e+00]
+Presolve time: 0.00s
+Presolved: 10 rows, 45 columns, 90 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    6.3600000e+02   1.700000e+01   0.000000e+00      0s
+      15    2.7610000e+03   0.000000e+00   0.000000e+00      0s
+
+Solved in 15 iterations and 0.00 seconds (0.00 work units)
+Optimal objective  2.761000000e+03
+
+User-callback calls 56, time in user-callback 0.00 sec
+Set parameter PreCrush to value 1
+Set parameter LazyConstraints to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 10 rows, 45 columns and 90 nonzeros
+Model fingerprint: 0x74ca3d0a
+Variable types: 0 continuous, 45 integer (45 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [4e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [2e+00, 2e+00]
+
+User MIP start produced solution with objective 2796 (0.00s)
+Loaded user MIP start with objective 2796
+
+Presolve time: 0.00s
+Presolved: 10 rows, 45 columns, 90 nonzeros
+Variable types: 0 continuous, 45 integer (45 binary)
+
+Root relaxation: objective 2.761000e+03, 14 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 2761.00000    0    - 2796.00000 2761.00000  1.25%     -    0s
+     0     0     cutoff    0      2796.00000 2796.00000  0.00%     -    0s
+
+Cutting planes:
+  Lazy constraints: 3
+
+Explored 1 nodes (16 simplex iterations) in 0.01 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 1: 2796
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%
+
+User-callback calls 110, time in user-callback 0.00 sec
+
+
+
+
[1]:
+
+
+
+
+{'WS: Count': 1, 'WS: Number of variables set': 41.0}
+
+
+
+
[ ]:
+
+
+

+
+
+
+
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/index.html b/0.4/index.html new file mode 100644 index 00000000..7cb5bd5a --- /dev/null +++ b/0.4/index.html @@ -0,0 +1,386 @@ + + + + + + + + MIPLearn — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + + + +
+ + +
+ +
+ Contents +
+ +
+
+
+
+
+ +
+ +
+

MIPLearn

+

MIPLearn is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Linear Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.

+

Unlike pure ML methods, MIPLearn is not only able to find high-quality solutions to discrete optimization problems, but it can also prove the optimality and feasibility of these solutions. Unlike conventional MIP solvers, MIPLearn can take full advantage of very specific observations that happen to be true in a particular family of instances (such as the observation that a particular constraint is typically redundant, or that a particular variable typically assumes a certain value). For certain classes of problems, this approach may provide significant performance benefits.

+
+

Contents

+ + + +
+
+

Authors

+
    +
  • Alinson S. Xavier (Argonne National Laboratory)

  • +
  • Feng Qiu (Argonne National Laboratory)

  • +
  • Xiaoyi Gu (Georgia Institute of Technology)

  • +
  • Berkay Becu (Georgia Institute of Technology)

  • +
  • Santanu S. Dey (Georgia Institute of Technology)

  • +
+
+
+

Acknowledgments

+
    +
  • Based upon work supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy.

  • +
  • Based upon work supported by the U.S. Department of Energy Advanced Grid Modeling Program.

  • +
+
+
+

Citing MIPLearn

+

If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:

+
    +
  • Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey. MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.3). Zenodo (2023). DOI: https://doi.org/10.5281/zenodo.4287567

  • +
+

If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:

+ +
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/objects.inv b/0.4/objects.inv new file mode 100644 index 00000000..6d9f27f8 Binary files /dev/null and b/0.4/objects.inv differ diff --git a/0.4/py-modindex/index.html b/0.4/py-modindex/index.html new file mode 100644 index 00000000..ffc9f0f8 --- /dev/null +++ b/0.4/py-modindex/index.html @@ -0,0 +1,378 @@ + + + + + + + Python Module Index — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + +
+ + +
+ +
+
+
+
+
+ +
+ + +

Python Module Index

+ +
+ m +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
 
+ m
+ miplearn +
    + miplearn.classifiers.minprob +
    + miplearn.classifiers.singleclass +
    + miplearn.collectors.basic +
    + miplearn.components.primal.actions +
    + miplearn.components.primal.expert +
    + miplearn.components.primal.indep +
    + miplearn.components.primal.joint +
    + miplearn.components.primal.mem +
    + miplearn.extractors.AlvLouWeh2017 +
    + miplearn.extractors.fields +
    + miplearn.h5 +
    + miplearn.io +
    + miplearn.problems.binpack +
    + miplearn.problems.multiknapsack +
    + miplearn.problems.pmedian +
    + miplearn.problems.setcover +
    + miplearn.problems.setpack +
    + miplearn.problems.stab +
    + miplearn.problems.tsp +
    + miplearn.problems.uc +
    + miplearn.problems.vertexcover +
    + miplearn.solvers.abstract +
    + miplearn.solvers.gurobi +
    + miplearn.solvers.learning +
+ + +
+ + +
+ + +
+ +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/search/index.html b/0.4/search/index.html new file mode 100644 index 00000000..b4a54451 --- /dev/null +++ b/0.4/search/index.html @@ -0,0 +1,272 @@ + + + + + + + Search — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + +
+ + +
+ +
+
+
+
+
+ +
+ +

Search

+ + + + +

+ Searching for multiple words only shows matches that contain + all words. +

+ + +
+ + + +
+ + + +
+ +
+ + +
+ + +
+ + +
+ +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/searchindex.js b/0.4/searchindex.js new file mode 100644 index 00000000..846edcfc --- /dev/null +++ b/0.4/searchindex.js @@ -0,0 +1 @@ +Search.setIndex({"docnames": ["api/collectors", "api/components", "api/helpers", "api/problems", "api/solvers", "guide/collectors", "guide/features", "guide/primal", "guide/problems", "guide/solvers", "index", "tutorials/cuts-gurobipy", "tutorials/getting-started-gurobipy", "tutorials/getting-started-jump", "tutorials/getting-started-pyomo"], "filenames": ["api/collectors.rst", "api/components.rst", "api/helpers.rst", "api/problems.rst", "api/solvers.rst", "guide/collectors.ipynb", "guide/features.ipynb", "guide/primal.ipynb", "guide/problems.ipynb", "guide/solvers.ipynb", "index.rst", "tutorials/cuts-gurobipy.ipynb", "tutorials/getting-started-gurobipy.ipynb", "tutorials/getting-started-jump.ipynb", "tutorials/getting-started-pyomo.ipynb"], "titles": ["11. Collectors & Extractors", "12. Components", "14. Helpers", "10. Benchmark Problems", "13. Solvers", "6. Training Data Collectors", "7. Feature Extractors", "8. Primal Components", "5. Benchmark Problems", "9. Learning Solver", "MIPLearn", "4. User cuts and lazy constraints", "2. Getting started (Gurobipy)", "3. Getting started (JuMP)", "1. Getting started (Pyomo)"], "terms": {"class": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "minprobabilityclassifi": [0, 7, 9], "base_clf": [0, 1, 7, 9], "type": [0, 1, 4, 5, 6, 8, 9, 11, 12, 13, 14], "ani": [0, 1, 2, 4, 5, 7, 8, 12, 13, 14], "threshold": [0, 1, 7, 9], "list": [0, 1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 14], "float": [0, 1, 4, 5, 12, 14], "clone_fn": [0, 1, 7], "callabl": [0, 1, 4], "function": [0, 1, 5, 6, 7, 8, 11, 12, 13, 14], "clone": [0, 1, 7], "base": [0, 1, 2, 4, 6, 7, 10, 12, 13, 14], "baseestim": 0, "meta": [0, 7], "return": [0, 1, 6, 7, 8, 9, 11, 12, 13, 14], "nan": 0, "predict": [0, 1, 7, 11, 12, 13, 14], "made": [0, 7], "have": [0, 3, 5, 6, 8, 11, 12, 13, 14], "probabl": [0, 3, 7, 8, 12, 13, 14], "below": [0, 1, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "given": [0, 4, 5, 6, 7, 8, 11, 12, 13, 14], "more": [0, 3, 5, 6, 7, 8, 11, 12, 13, 14], "specif": [0, 3, 5, 6, 7, 8, 10, 12, 13, 14], "thi": [0, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "call": [0, 8, 9, 11, 13], "predict_proba": [0, 7], "compar": [0, 5, 7, 11], "result": [0, 3, 8], "against": 0, "provid": [0, 1, 3, 5, 7, 8, 9, 10, 11, 12, 13, 14], "If": [0, 3, 7, 8, 10, 11, 12, 13, 14], "one": [0, 1, 3, 7, 8, 11, 12, 13, 14], "i": [0, 1, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "abov": [0, 1, 3, 5, 7, 8, 11, 12, 13, 14], "its": [0, 3, 5, 6, 7, 8, 9, 11, 12, 13, 14], "otherwis": [0, 1, 5, 6, 7], "fit": [0, 1, 4, 7, 9, 11, 12, 13, 14], "x": [0, 3, 5, 7, 8, 9, 11, 12, 13, 14], "ndarrai": [0, 1, 2, 3, 4, 11], "y": [0, 3, 5, 7, 8, 9, 11, 12, 13, 14], "none": [0, 1, 2, 4, 11], "set_fit_request": 0, "bool": [0, 2, 3, 4], "str": [0, 1, 2, 3, 4, 5, 11, 12, 14], "unchang": 0, "request": [0, 10, 12, 13, 14], "metadata": [0, 5], "pass": 0, "method": [0, 3, 5, 6, 7, 8, 9, 10, 12, 13, 14], "note": [0, 3, 6, 7, 8, 11, 12, 13, 14], "onli": [0, 3, 6, 7, 8, 9, 10, 11, 12, 13, 14], "relev": [0, 6], "enable_metadata_rout": 0, "true": [0, 3, 5, 6, 8, 9, 10, 11], "see": [0, 7, 8, 11, 12, 13, 14], "sklearn": [0, 7, 9, 11, 12, 13, 14], "set_config": 0, "pleas": [0, 8, 12, 13, 14], "user": [0, 3, 6, 7, 8, 9, 12, 13, 14], "guid": [0, 12, 13, 14], "how": [0, 5, 7, 8, 11, 12, 13, 14], "rout": [0, 8, 11], "mechan": 0, "work": [0, 8, 9, 10, 11, 12, 13, 14], "The": [0, 3, 
5, 6, 7, 8, 9, 11, 12, 13, 14], "option": [0, 1], "each": [0, 1, 3, 6, 7, 8, 9, 11, 12, 13, 14], "paramet": [0, 3, 7, 8, 9, 11, 14], "ar": [0, 3, 5, 6, 7, 8, 9, 11, 12, 13, 14], "ignor": 0, "fals": [0, 2, 3, 4, 8], "estim": [0, 7], "rais": 0, "an": [0, 6, 7, 8, 10, 11, 12, 13, 14], "error": [0, 5], "should": [0, 1, 6, 8, 11, 12, 13, 14], "alia": 0, "instead": [0, 3, 7, 8, 11, 14], "origin": [0, 5, 12, 13, 14], "name": [0, 4, 5, 11, 12], "default": [0, 3, 4, 8, 9], "util": [0, 12, 13, 14], "metadata_rout": 0, "retain": 0, "exist": [0, 8], "allow": [0, 5, 6, 7, 9, 12, 13, 14], "you": [0, 6, 8, 10, 13], "chang": [0, 8, 11, 12, 13, 14], "some": [0, 3, 5, 6, 7, 8, 11, 12, 13, 14], "other": [0, 5, 6, 7, 8, 11, 12, 13, 14], "new": [0, 3, 5, 6, 7, 8, 10, 12, 13, 14], "version": [0, 5, 8, 9, 10, 11, 12, 13, 14], "1": [0, 1, 2, 3, 5, 6, 7, 8, 9, 11, 12, 13, 14], "3": [0, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "us": [0, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "sub": 0, "e": [0, 5, 6, 7, 8, 11, 12, 13, 14], "g": [0, 5, 7, 8, 12, 13, 14], "insid": [0, 11], "pipelin": 0, "ha": [0, 3, 4, 6, 7, 8, 11], "effect": [0, 11], "self": 0, "updat": 0, "object": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14], "set_predict_request": 0, "singleclassfix": [0, 7, 9], "logist": [0, 7, 8], "regress": [0, 7], "issu": [0, 6, 7, 12, 13, 14], "dataset": [0, 7, 11], "contain": [0, 7, 8, 11, 12, 13, 14], "singl": [0, 1, 5, 7, 12, 13, 14], "fix": [0, 3, 7, 8, 9, 12, 13, 14], "train": [0, 1, 6, 7, 8, 10], "data": [0, 3, 6, 7, 8, 9, 10], "alwai": [0, 7], "basiccollector": [0, 5, 6, 9, 11, 12, 13, 14], "skip_lp": [0, 4], "write_mp": 0, "collect": [0, 5, 6, 7, 8, 9, 11, 12, 13, 14], "filenam": [0, 2, 4, 5, 11], "build_model": [0, 4, 9], "n_job": [0, 2, 5, 6, 9, 11, 12, 14], "int": [0, 1, 2, 3, 11, 12, 13, 14], "progress": [0, 2], "verbos": [0, 11], "h5fieldsextractor": [0, 7, 10, 11, 12, 13, 14], "instance_field": [0, 6, 7, 11, 12, 13, 14], "var_field": [0, 6], "constr_field": [0, 6], "featuresextractor": [0, 1], "get_constr_featur": [0, 6], "h5": [0, 4, 5, 6, 11, 12, 13, 14], "h5file": [0, 2, 4, 5, 6], "get_instance_featur": [0, 6], "get_var_featur": [0, 6], "alvlouweh2017extractor": [0, 7, 9, 10], "with_m1": 0, "with_m2": 0, "with_m3": 0, "comput": [0, 1, 6, 7, 10, 11, 12, 14], "static": [0, 4, 6], "variabl": [0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "featur": [0, 4, 5, 7, 9, 10], "describ": [0, 3, 5, 6, 7, 8, 9, 11], "alvarez": [0, 6], "A": [0, 6, 7, 12, 14], "m": [0, 3, 6, 8, 11], "louveaux": [0, 6], "q": 0, "wehenkel": [0, 6], "l": 0, "2017": [0, 6], "machin": [0, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "learn": [0, 5, 6, 7, 8, 10, 12, 13, 14], "approxim": [0, 6], "strong": [0, 6, 11], "branch": [0, 5, 6, 11], "inform": [0, 5, 6, 10, 11], "journal": [0, 6, 8, 10], "29": [0, 6, 8], "185": [0, 6, 8], "195": [0, 6], "enforceproxim": [1, 7], "tol": 1, "primalcomponentact": 1, "perform": [1, 5, 6, 7, 8, 10, 11, 12, 13, 14], "model": [1, 3, 4, 5, 6, 7, 8, 9, 10], "abstractmodel": [1, 4], "var_nam": [1, 4], "var_valu": [1, 4], "stat": [1, 3, 4, 5, 6, 8, 9, 11, 12, 14], "dict": [1, 4], "fixvari": [1, 7], "abc": [1, 4], "abstract": [1, 6, 12, 13, 14], "setwarmstart": [1, 7, 9, 12, 13, 14], "expertprimalcompon": [1, 7], "before_mip": 1, "test_h5": 1, "train_h5": 1, "independentvarsprimalcompon": [1, 7, 9], "extractor": [1, 7, 9, 10, 11, 12, 13, 14], "jointvarsprimalcompon": [1, 7], "clf": [1, 7, 11, 12, 13, 14], "memorizingprimalcompon": [1, 7, 12, 13, 14], "constructor": [1, 3, 7, 9, 12, 13, 14], "solutionconstructor": 1, "memor": [1, 
10, 12, 13, 14], "all": [1, 3, 5, 6, 7, 8, 9, 12, 13, 14], "solut": [1, 4, 5, 6, 7, 8, 9, 10, 11], "seen": [1, 7], "dure": [1, 7, 8, 11, 12, 13, 14], "classifi": [1, 7, 9, 12, 13, 14], "which": [1, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "solver": [1, 5, 7, 8, 10, 11, 12, 13, 14], "combin": [1, 3, 7, 8, 10], "multipl": [1, 5, 7, 8, 9, 12, 13, 14], "partial": [1, 5, 7, 12, 13, 14], "mergetopsolut": [1, 7, 12, 13, 14], "k": [1, 3, 6, 7, 8, 12, 13, 14], "warm": [1, 7, 9, 12, 13, 14], "start": [1, 7, 8, 9, 10, 11], "construct": [1, 7, 9, 12, 13, 14], "strategi": [1, 7, 9, 11, 12, 13, 14], "first": [1, 3, 5, 7, 8, 10, 11, 12, 13, 14], "select": [1, 7, 8, 11], "top": [1, 5, 7], "merg": [1, 7, 12, 13, 14], "them": [1, 6, 7, 8, 11, 12, 13, 14], "To": [1, 3, 7, 8, 9, 11, 12, 13, 14], "mean": [1, 7], "optim": [1, 3, 4, 5, 6, 7, 8, 9, 10, 11], "valu": [1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "decis": [1, 5, 6, 7, 8, 9, 11, 12, 13, 14], "set": [1, 3, 5, 6, 7, 9, 10, 11, 12, 13, 14], "zero": [1, 7, 11], "0": [1, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "ii": [1, 5, 6, 7, 8, 9], "iii": [1, 6, 9], "leav": 1, "free": [1, 6, 7], "y_proba": 1, "selecttopsolut": [1, 7], "gzip": [2, 5, 12, 14], "read_pkl_gz": [2, 11, 12, 14], "write_pkl_gz": [2, 5, 6, 9, 11, 12, 14], "obj": [2, 5, 8, 9, 11, 12, 13, 14], "dirnam": 2, "prefix": [2, 5], "mode": [2, 3, 13], "r": [2, 5, 8, 9, 11, 12, 14], "close": [2, 7, 12, 13, 14], "get_arrai": [2, 5], "kei": [2, 5], "get_byt": 2, "byte": 2, "bytearrai": 2, "get_scalar": [2, 5], "get_spars": [2, 5], "coo_matrix": 2, "put_arrai": [2, 5], "put_byt": [2, 5], "put_scalar": [2, 5], "put_spars": [2, 5], "binpackdata": 3, "size": [3, 6, 7, 8, 11], "capac": [3, 8], "bin": [3, 10, 13], "pack": [3, 10], "numpi": [3, 5, 6, 8, 9, 11, 12, 14], "item": [3, 8], "binpackgener": [3, 8], "n": [3, 5, 6, 7, 8, 9, 11, 12, 13, 14], "rv_frozen": 3, "sizes_jitt": [3, 8], "capacity_jitt": [3, 8], "fix_item": [3, 8], "random": [3, 5, 6, 9, 11, 12, 13, 14], "instanc": [3, 5, 6, 7, 10], "gener": [3, 5, 6, 7, 9, 10], "sampl": [3, 8, 12, 13, 14], "distribut": [3, 8, 12, 13, 14], "decid": [3, 7, 8, 12, 13, 14], "respect": [3, 5, 6, 7, 8, 12, 13, 14], "number": [3, 5, 6, 8, 9, 11, 12, 13, 14], "independ": [3, 6, 8, 10, 11], "creat": [3, 5, 6, 7, 8, 11], "refer": [3, 6, 8], "previous": [3, 5, 6, 7, 8, 10, 12, 13, 14], "addit": [3, 8, 9, 11, 12, 13, 14], "perturb": [3, 8, 11], "s_i": [3, 8], "gamma_i": [3, 8], "where": [3, 7, 8, 12, 13, 14], "th": [3, 7, 8], "from": [3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "similarli": [3, 8], "b": [3, 5, 8], "beta": [3, 8], "remain": [3, 5, 6, 7, 8, 9], "same": [3, 6, 7, 8, 11], "across": [3, 7, 8], "appli": [3, 6, 7, 8, 9, 12, 13, 14], "complet": [3, 6, 7, 8, 10, 11], "differ": [3, 7, 8, 9, 11, 12, 13, 14], "n_sampl": 3, "build_binpack_model": [], "gurobimodel": [3, 4, 11, 12], "convert": [3, 6, 11, 12, 13, 14], "concret": [3, 12, 13, 14], "gurobipi": [3, 10, 11, 13, 14], "multiknapsackdata": 3, "price": [3, 5, 8], "weight": [3, 8, 11], "multi": [3, 5, 6, 10], "dimension": [3, 5, 6, 10], "knapsack": [3, 6, 10], "matrix": [3, 5, 6, 8, 9, 11, 12, 13, 14], "multiknapsackgener": [3, 6, 8], "scipi": [3, 5, 6, 8, 9, 11, 12, 14], "_distn_infrastructur": 3, "rv_discrete_frozen": 3, "w": [3, 5, 6, 8, 9, 12], "u": [3, 6, 8, 10, 12, 13, 14], "rv_continuous_frozen": 3, "alpha": [3, 6, 8], "fix_w": [3, 6, 8], "w_jitter": [3, 6, 8], "p_jitter": [3, 6, 8], "round": [3, 5, 6, 8, 9, 11], "constraint": [3, 5, 6, 7, 8, 9, 10, 12, 13, 14], "specifi": [3, 6, 7, 8, 9, 11], "j": [3, 7, 8, 11], 
"alpha_j": 3, "sum": [3, 8, 12, 13, 14], "rang": [3, 8, 9, 11, 12, 13, 14], "tight": [3, 8], "ratio": [3, 8], "make": [3, 5, 6, 7, 8, 9, 11, 12, 13, 14], "challeng": [3, 8, 12, 13, 14], "cost": [3, 5, 6, 7, 8, 12, 13, 14], "linearli": [3, 8], "correl": [3, 7, 8], "averag": [3, 7, 8], "u_i": 3, "coeffici": [3, 4, 5, 6, 8, 9, 11, 12, 13, 14], "multipli": [3, 8], "onc": [3, 8, 9, 11], "entir": [3, 5, 6, 7, 8, 11, 12, 13, 14], "kept": [3, 8], "also": [3, 5, 6, 7, 8, 10, 11, 12, 13, 14], "impli": [3, 8], "although": [3, 5, 8, 12, 13, 14], "deriv": [3, 6, 8, 11], "long": [3, 8], "constant": [3, 5, 7, 8], "still": [3, 7, 8, 11, 12, 13, 14], "ident": [3, 8], "gamma": [3, 5, 8, 9, 11], "when": [3, 5, 7, 8, 9, 12, 13, 14], "argument": [3, 7, 8, 9], "mai": [3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "roughli": [3, 8], "exactli": [3, 8, 11, 12, 13, 14], "calcul": [3, 8], "By": [3, 8, 9, 12, 13, 14], "nearest": [3, 7, 8, 12, 13, 14], "integ": [3, 4, 5, 8, 9, 10, 11, 12, 13, 14], "disabl": [3, 8, 9], "rv_discret": 3, "rv_continu": 3, "profit": 3, "boolean": 3, "minu": 3, "nois": 3, "ad": [3, 5, 11, 14], "build_multiknapsack_model": [], "pmediandata": 3, "distanc": [3, 7, 8, 11], "demand": [3, 8, 12, 13, 14], "p": [3, 10, 12, 13, 14], "capacit": [3, 10], "median": [3, 10], "between": [3, 6, 7, 8, 11, 12, 13, 14], "custom": [3, 8, 9], "facil": [3, 8], "need": [3, 5, 6, 8, 9, 11, 12, 13, 14], "chosen": [3, 8], "pmediangener": [3, 8], "distances_jitt": [3, 8], "demands_jitt": [3, 8], "capacities_jitt": [3, 8], "Then": [3, 7, 8], "build": [3, 5, 6, 7, 8, 9, 11, 12, 13, 14], "geograph": [3, 8], "locat": [3, 5, 8, 11], "xi": 3, "yi": 3, "For": [3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "final": [3, 7, 8, 11, 12, 13, 14], "euclidean": [3, 8], "factor": [3, 8], "slightli": [3, 8, 11], "coordin": 3, "point": [3, 5, 12, 13, 14], "scale": [3, 5, 6, 8, 9, 10, 11, 12, 13, 14], "build_pmedian_model": [], "setcoverdata": 3, "incidence_matrix": [3, 8], "setpackdata": 3, "maxweightstablesetdata": 3, "graph": [3, 8, 11], "networkx": [3, 11], "maxweightstablesetgener": [3, 8], "fix_graph": [3, 8], "maximum": [3, 8, 11], "stabl": [3, 5, 10, 11], "two": [3, 6, 7, 8, 11, 12, 13, 14], "oper": [3, 8, 12, 13, 14], "erd\u0151": [3, 8], "r\u00e9nyi": [3, 8], "g_": [3, 8], "w_v": [3, 8], "wai": [3, 7, 8], "travelingsalesmandata": [3, 11], "n_citi": [3, 11], "travelingsalesmangener": [3, 5, 8, 9, 11], "fix_citi": [3, 5, 8, 9, 11], "travel": [3, 5, 9, 10], "salesman": [3, 5, 9, 10], "unitcommitmentdata": [3, 12, 13, 14], "min_pow": [3, 8], "max_pow": [3, 8], "min_uptim": [3, 8], "min_downtim": [3, 8], "cost_startup": [3, 8], "cost_prod": [3, 8], "cost_fix": [3, 8], "build_uc_model": [12, 13, 14], "unit": [3, 9, 10, 11, 12, 13, 14], "commit": [3, 10, 12, 13, 14], "accord": 3, "equat": 3, "5": [3, 5, 6, 8, 11, 12, 13, 14], "bendotti": [3, 8], "fouilhoux": [3, 8], "rottner": [3, 8], "c": [3, 5, 8, 12, 13, 14], "min": [3, 5, 8, 11, 12, 13, 14], "up": [3, 5, 8, 9, 11, 12, 13, 14], "down": [3, 5, 8], "polytop": [3, 8], "comb": [3, 8], "36": [3, 8], "1024": [3, 8], "1058": [3, 8], "2018": [3, 8], "http": [3, 8, 10], "doi": [3, 8, 10], "org": [3, 8, 10], "10": [3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "1007": [3, 8], "s10878": [3, 8], "018": [3, 8], "0273": [3, 8], "minweightvertexcoverdata": 3, "where_cut": 4, "cut": [4, 8, 9, 10, 12, 14], "where_default": 4, "where_lazi": 4, "lazi": [4, 8, 9, 10], "add_constr": [4, 11], "constrs_lh": 4, "constrs_sens": 4, "constrs_rh": 4, "extract_after_load": 4, "extract_after_lp": 4, "extract_after_mip": 4, 
"fix_vari": 4, "lazy_enforc": [4, 11], "violat": [4, 11], "relax": [4, 5, 6, 8, 9, 11, 12, 13, 14], "set_cut": 4, "set_warm_start": 4, "write": [4, 5, 12, 13, 14], "inner": [4, 11, 12, 13, 14], "lazy_separ": [4, 11], "cuts_separ": [4, 11], "cuts_enforc": [4, 11], "constr": 4, "just": [4, 5, 12, 13, 14], "been": [4, 6, 7, 8, 11], "load": [4, 5, 7, 8, 9, 12, 13, 14], "extract": [4, 5, 6, 7], "problem": [4, 5, 6, 7, 9, 10], "etc": [4, 12, 13, 14], "linear": [4, 5, 6, 8, 10, 12, 13, 14], "program": [4, 10, 11, 12, 13, 14], "solv": [4, 5, 6, 7, 10], "dynam": 4, "lp": [4, 5, 6], "basi": [4, 5], "statu": [4, 5], "mix": [4, 5, 8, 10, 11, 12, 13, 14], "mip": [4, 5, 7, 8, 9, 10, 12, 13, 14], "set_time_limit": 4, "time_limit_sec": 4, "learningsolv": [4, 9, 11, 12, 13, 14], "compon": [4, 6, 9, 10, 11, 12, 13, 14], "data_filenam": 4, "step": [5, 6, 8, 9, 12, 13, 14], "assist": 5, "supervis": [5, 6], "larg": [5, 7, 8, 10, 12, 13, 14], "raw": [5, 6], "In": [5, 6, 7, 8, 9, 11, 12, 13, 14], "section": [5, 7, 8], "we": [5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "variou": [5, 6, 7, 8, 9], "includ": [5, 7, 8, 9, 10, 11], "miplearn": [5, 6, 7, 8, 9, 11, 12, 13, 14], "addition": [5, 11], "framework": [5, 6, 7, 9, 10, 11, 12, 13, 14], "follow": [5, 7, 8, 9, 10, 11, 12, 13, 14], "convent": [5, 7, 8, 10], "store": [5, 6, 8, 11, 12, 13, 14], "file": [5, 6, 7, 8, 9, 11, 12, 13, 14], "briefli": [5, 7], "rational": 5, "choos": [5, 7, 11], "analyz": 5, "later": 5, "take": [5, 6, 7, 10, 11], "input": [5, 6, 11, 12, 13, 14], "pickl": [5, 12, 14], "end": [5, 7, 8, 12, 13, 14], "pkl": [5, 9, 11, 12, 14], "gz": [5, 9, 11, 12, 13, 14], "build_tsp_model": 5, "after": [5, 8, 11, 12, 13, 14], "process": [5, 6, 7, 8, 9, 11, 12, 13, 14], "done": [5, 6], "alongsid": 5, "veri": [5, 7, 8, 10, 11, 12, 13, 14], "time": [5, 6, 7, 8, 9, 11, 12, 13, 14], "consum": [5, 6, 8], "thei": [5, 6, 7], "potenti": [5, 7, 8, 11, 12, 13, 14], "hierarch": 5, "hdf": 5, "wa": [5, 8, 9, 11, 12, 13, 14], "develop": [5, 6, 10, 12, 13, 14], "nation": [5, 10], "center": [5, 8], "supercomput": 5, "applic": [5, 7, 8, 11], "ncsa": 5, "organ": 5, "amount": [5, 8, 12, 13, 14], "support": [5, 7, 10, 11, 14], "varieti": [5, 8], "string": [5, 13], "arrai": 5, "csv": 5, "json": [5, 11], "sqlite": 5, "sever": 5, "advantag": [5, 7, 10], "storag": 5, "scalar": 5, "vector": [5, 6, 13], "matric": [5, 6, 8], "relat": [5, 12, 13, 14], "easier": [5, 8, 12, 13, 14], "transfer": 5, "high": [5, 6, 7, 8, 9, 10, 11, 13], "o": 5, "read": [5, 7, 11], "element": [5, 8], "without": [5, 6, 7, 8, 11], "memori": [5, 12, 13, 14], "begin": [5, 7, 8, 11, 12, 13, 14], "dramat": 5, "improv": [5, 6], "reduc": [5, 6, 11, 14], "requir": [5, 6, 7, 11, 12, 13, 14], "especi": [5, 7], "import": [5, 6, 7, 8, 9, 11, 12, 14], "On": [5, 9], "fly": 5, "compress": [5, 12, 13, 14], "can": [5, 6, 7, 8, 9, 10, 11, 12, 13, 14], "transpar": 5, "acceler": [5, 6, 10, 11, 12, 13, 14], "network": [5, 7, 8], "portabl": 5, "well": [5, 8, 12, 13, 14], "typic": [5, 6, 7, 8, 10], "expens": 5, "ensur": [5, 6, 8, 12, 13, 14], "usabl": 5, "futur": [5, 12, 13, 14], "even": [5, 7, 12, 13, 14], "non": [5, 8, 9, 11, 12, 14], "python": [5, 11, 12, 13, 14], "ml": [5, 7, 8, 9, 10, 11, 12, 13, 14], "current": [5, 7, 8, 9, 11, 12, 13, 14], "simpl": [5, 6, 7, 8, 11], "numer": [5, 8], "advanc": [5, 10, 11, 12, 13, 14], "librari": [5, 8], "h5py": 5, "conveni": [5, 11], "access": 5, "less": [5, 7, 8], "prone": 5, "built": 5, "dens": 5, "spars": 5, "arbitrari": 5, "binari": [5, 7, 8, 9, 11, 12, 13, 14], "correspond": 5, "get": [5, 10], 
"pure": [5, 10], "automat": [5, 10, 11], "check": [5, 6, 7], "show": [5, 12, 13, 14], "usag": [5, 6, 7, 9, 11, 12, 13, 14], "np": [5, 6, 8, 9, 11, 12, 14], "seed": [5, 6, 8, 9, 11, 12, 13, 14], "reproduc": [5, 6, 8, 9, 11], "42": [5, 6, 8, 9, 11, 12, 13, 14], "empti": [5, 8, 11], "test": [5, 9, 11], "x1": [5, 6], "x2": [5, 6], "hello": 5, "world": [5, 8, 12, 13, 14], "x3": [5, 6], "2": [5, 6, 7, 8, 9, 11, 12, 13, 14], "x4": 5, "rand": [5, 13], "x5": 5, "re": 5, "open": [5, 6, 8, 12, 13, 14], "print": [5, 6, 8, 11, 12, 14], "37454012": 5, "9507143": 5, "7319939": 5, "5986585": 5, "15601864": 5, "15599452": 5, "05808361": 5, "8661761": 5, "601115": 5, "6803075671195984": 5, "4504992663860321": 5, "4": [5, 6, 7, 8, 9, 11, 12, 13, 14], "013264961540699005": 5, "9422017335891724": 5, "5632882118225098": 5, "38541650772094727": 5, "015966251492500305": 5, "2308938205242157": 5, "24102546274662018": 5, "6832635402679443": 5, "6099966764450073": 5, "83319491147995": 5, "most": [5, 6, 7, 8, 11, 12, 13, 14], "fundament": 5, "right": [5, 7, 8, 12, 13, 14], "hand": [5, 6, 11, 12, 13, 14], "side": [5, 12, 13, 14], "easili": [5, 6, 7, 9], "effici": [5, 8], "rebuild": 5, "invok": 5, "sensit": 5, "among": 5, "along": 5, "statist": [5, 8, 9, 11, 12, 13, 14], "explor": [5, 8, 9, 11, 12, 13, 14], "node": [5, 6, 8, 9, 11, 12, 13, 14], "wallclock": 5, "phase": 5, "static_": 5, "lp_": 5, "mip_": 5, "shown": [5, 7, 8, 11, 12, 13, 14], "tabl": 5, "descript": [5, 6], "static_constr_lh": 5, "nconstr": 5, "nvar": 5, "left": [5, 7, 8, 12, 13, 14], "static_constr_nam": 5, "static_constr_rh": [5, 6, 12, 13, 14], "static_constr_sens": 5, "sens": [5, 6, 12, 13, 14], "static_obj_offset": 5, "static_sens": 5, "minim": [5, 8, 12, 13, 14], "max": [5, 8, 11, 12, 13, 14], "static_var_lower_bound": 5, "lower": [5, 7], "bound": [5, 6, 8, 9, 11, 12, 13, 14], "static_var_nam": 5, "static_var_obj_coeff": [5, 6, 7, 11], "static_var_typ": 5, "continu": [5, 8, 9, 11, 12, 13, 14], "static_var_upper_bound": 5, "upper": 5, "lp_constr_basis_statu": 5, "lp_constr_dual_valu": [5, 6], "dual": [5, 6, 9, 11, 12, 14], "shadow": 5, "lp_constr_sa_rhs_": 5, "rh": [5, 6, 8, 9, 11, 12, 13, 14], "lp_constr_slack": [5, 6], "slack": [5, 6], "lp_obj_valu": [5, 6], "lp_var_basis_statu": 5, "superbas": 5, "lp_var_reduced_cost": [5, 6], "lp_var_sa_": 5, "ub": 5, "lb": 5, "_": [5, 12, 13, 14], "lp_var_valu": [5, 6], "lp_wallclock_tim": 5, "taken": 5, "second": [5, 7, 8, 9, 11, 12, 13, 14], "mip_constr_slack": 5, "best": [5, 7, 8, 9, 11, 12, 13, 14], "mip_gap": 5, "rel": [5, 6, 7, 8], "gap": [5, 8, 9, 11, 12, 13, 14], "mip_node_count": 5, "mip_obj_bound": 5, "mip_obj_valu": 5, "mip_var_valu": 5, "mip_wallclock_tim": 5, "few": [5, 8, 9, 11, 12, 13, 14], "run": [5, 6, 8, 11, 12, 13, 14], "screen": 5, "uniform": [5, 6, 8, 9, 11, 12, 13, 14], "randint": [5, 6, 8, 9, 11], "glob": [5, 6], "tsp": [5, 8, 9, 11], "build_tsp_model_gurobipi": [5, 8, 9, 11], "io": [5, 6, 9, 11, 12, 14], "low": [5, 6, 8, 9, 11], "11": [5, 6, 8, 9, 11], "loc": [5, 6, 8, 9, 11, 12, 14], "1000": [5, 6, 8, 9, 11, 12, 13, 14], "90": [5, 8, 9], "20": [5, 6, 8, 9, 11, 12, 13, 14], "save": [5, 9], "00000": [5, 6, 8, 9, 11, 12, 13, 14], "00001": [5, 9, 12, 13, 14], "four": [5, 7], "parallel": [5, 11], "bc": [5, 9, 11, 12, 13, 14], "2909": 5, "2921": [5, 8], "previou": [6, 7, 8, 9, 11], "page": [6, 7, 9], "introduc": [6, 8, 9], "collector": [6, 9, 10, 11, 12, 14], "hdf5": [6, 7, 9, 10, 11, 12, 13, 14], "order": [6, 7, 8], "becaus": [6, 8, 11, 12, 13, 14], "help": 6, "complex": [6, 7, 12, 13, 14], 
"format": [6, 10, 12, 13, 14], "research": [6, 8, 10], "propos": 6, "absolut": [6, 8], "invari": 6, "transform": 6, "amen": 6, "treat": 6, "separ": 6, "cycl": 6, "often": [6, 7, 8, 11, 12, 13, 14], "therefor": [6, 8, 11], "focu": 6, "filter": [6, 7], "represent": [6, 11], "much": [6, 7, 11, 12, 13, 14], "faster": [6, 12, 13, 14], "experi": 6, "resolv": [6, 8], "implement": [6, 7, 9, 10, 11, 12, 13, 14], "featureextractor": 6, "produc": [6, 8, 9, 11, 12, 13, 14], "either": [6, 7, 10, 12, 13, 14], "particular": [6, 8, 10], "subset": [6, 7, 8, 9], "known": 6, "alreadi": [6, 7, 11, 12, 13, 14], "avail": [6, 7, 8, 9, 11, 12, 13, 14], "assembl": 6, "field": [6, 7, 8, 10, 11, 12, 14], "shape": 6, "demonstr": [6, 12, 13, 14], "randomli": [6, 8], "shutil": 6, "rmtree": 6, "basic": [6, 9, 10, 11, 12, 13, 14], "multiknapsack": [6, 8], "ignore_error": 6, "6": [6, 8, 9, 11, 12, 13, 14], "100": [6, 7, 8, 11, 12, 13, 14], "25": [6, 7, 8, 9, 11, 12, 13, 14], "95": [6, 8, 9], "75": [6, 7, 8, 13], "ext": 6, "": [6, 7, 8, 10, 11], "1531": 6, "24308771": 6, "350": [6, 8], "692": [6, 8], "454": [6, 8], "709": [6, 8], "605": [6, 8], "543": [6, 8], "321": [6, 8], "674": [6, 8], "571": [6, 8], "341": [6, 8], "53124309e": 6, "03": [6, 8, 9, 11, 12, 13, 14], "50000000e": 6, "02": [6, 8, 9, 11, 12, 13, 14], "00000000e": 6, "00": [6, 8, 9, 11, 12, 13, 14], "9": [6, 8, 11, 12, 13, 14], "43468018e": 6, "01": [6, 8, 9, 11, 12, 13, 14], "92000000e": 6, "51703322e": 6, "54000000e": 6, "8": [6, 8, 11, 12, 13, 14], "25504150e": 6, "7": [6, 8, 11, 12, 13, 14], "09000000e": 6, "11373022e": 6, "05000000e": 6, "26055283e": 6, "43000000e": 6, "68693771e": 6, "21000000e": 6, "07488781e": 6, "74000000e": 6, "82293701e": 6, "71000000e": 6, "41129074e": 6, "41000000e": 6, "28830120e": 6, "3100000e": 6, "5978307e": 6, "0000000e": 6, "8800000e": 6, "2881632e": 6, "0040000e": 6, "0601316e": 6, "2690000e": 6, "3659772e": 6, "0070000e": 6, "8800571e": 6, "warn": [6, 8, 11, 12, 13, 14], "illustr": [6, 7, 9, 11, 12, 13, 14], "while": [6, 7, 8, 9, 11], "would": [6, 7], "vari": 6, "unabl": 6, "concaten": 6, "tree": [6, 11], "mimick": 6, "40": [6, 8, 9, 12, 13, 14], "out": [6, 12, 13, 14], "64": [6, 8], "outsid": 6, "defint": 6, "design": [6, 8, 12, 13, 14], "irrelev": 6, "row": [6, 8, 9, 11, 12, 13, 14], "column": [6, 8, 9, 11, 12, 13, 14], "permut": 6, "paper": 6, "alvlouweh2017": [6, 7, 9], "00e": [6, 8, 9, 11, 12, 13, 14], "75e": 6, "10e": 6, "30e": 6, "40e": 6, "80e": 6, "70e": 6, "60e": 6, "alejandro": 6, "marco": 6, "theoret": [6, 7], "synergi": 6, "2016": 6, "univers": [6, 8], "li\u00e8g": 6, "quentin": 6, "loui": 6, "assign": [7, 8, 12, 13, 14], "qualiti": [7, 10], "benefici": 7, "prune": 7, "portion": 7, "search": 7, "space": [7, 8], "altern": 7, "proof": 7, "doubl": 7, "feasibl": [7, 8, 10, 11], "both": [7, 11, 12, 13, 14], "pattern": [7, 10, 12, 13, 14], "emploi": [7, 8], "highli": 7, "configur": [7, 8, 10, 11], "accept": [7, 8, 9, 12, 13, 14], "depend": 7, "whether": [7, 12, 13, 14], "befor": [7, 11, 12, 13, 14], "present": 7, "themselv": 7, "discuss": [7, 9, 12, 13, 14], "three": [7, 9, 12, 13, 14], "approach": [7, 10], "benefit": [7, 8, 10], "limit": [7, 8, 11, 12, 13, 14], "main": [7, 9, 10], "maintain": 7, "guarante": 7, "signific": [7, 10, 11, 12, 13, 14], "abl": [7, 8, 10, 11, 12, 13, 14], "possibl": 7, "case": [7, 12, 13, 14], "evalu": [7, 8], "discard": 7, "infeas": [7, 8], "ones": [7, 11, 12, 13, 14], "proce": [7, 12, 14], "disadvantag": 7, "next": [7, 8, 11, 12, 13, 14], "modest": 7, "speedup": 7, "mani": [7, 8, 11], 
"accur": 7, "restrict": [7, 8, 9, 11, 12, 14], "small": [7, 11, 12, 13, 14], "fraction": [7, 11, 12, 13, 14], "find": [7, 8, 10, 11, 12, 13, 14], "scratch": 7, "lose": 7, "found": [7, 8, 9, 11, 12, 13, 14], "longer": 7, "global": 7, "suffici": 7, "might": 7, "were": [7, 8, 10], "third": [7, 8], "tri": 7, "strike": 7, "balanc": 7, "enforc": [7, 8, 11], "proxim": 7, "precis": 7, "bar": 7, "_1": 7, "ldot": [7, 8, 12, 13, 14], "_n": 7, "x_1": 7, "x_n": 7, "add": [7, 11, 13, 14], "sum_": [7, 8, 12, 13, 14], "_i": [7, 12, 13, 14], "x_i": [7, 8, 12, 13, 14], "leq": [7, 8, 12, 13, 14], "defin": [7, 11, 12, 13, 14], "indic": [7, 8, 12, 13, 14], "deviat": 7, "suggest": [7, 12, 13, 14], "toler": [7, 8, 9, 11, 12, 13, 14], "Its": 7, "lead": 7, "smaller": 7, "larger": [7, 11, 12, 13, 14], "distinct": 7, "try": 7, "infer": 7, "those": 7, "like": 7, "good": [7, 8, 12, 13, 14], "promis": 7, "variat": [7, 8], "fact": 7, "let": [7, 8, 12, 13, 14], "i_1": 7, "i_n": 7, "i_": 7, "expect": 7, "through": [7, 8, 11, 12, 13, 14], "scikit": [7, 13], "score": 7, "delta_i": 7, "higher": 7, "highest": 7, "suppos": [7, 12, 13, 14], "regressor": 7, "sequenc": [7, 11], "appear": 7, "ti": 7, "being": [7, 8], "broken": 7, "arbitrarili": 7, "keep": [7, 8, 12, 13, 14], "i_k": 7, "x_l": 7, "tild": 7, "_l": 7, "frac": [7, 8], "i_j": 7, "y_j": [7, 8], "text": [7, 8, 12, 13, 14], "le": 7, "theta_0": 7, "ge": [7, 8], "theta_1": 7, "squar": [7, 8, 11], "undefin": 7, "meant": 7, "simpler": 7, "literatur": 7, "post": 7, "dummyclassifi": 7, "anoth": 7, "similar": [7, 8, 11, 12, 13, 14], "itself": [7, 11, 12, 13, 14], "kneighborsclassifi": [7, 11, 12, 13, 14], "cours": 7, "dummi": 7, "neighbor": [7, 11, 12, 13, 14], "mem": [7, 11, 12, 14], "dummyextractor": 7, "comp1": [7, 9], "1_000_000": 7, "closest": 7, "assum": [7, 8, 10, 11, 12, 13, 14], "comp2": [7, 9], "n_neighbor": [7, 11, 12, 13, 14], "comp3": [7, 9], "natur": 7, "directli": [7, 11], "novel": 7, "never": 7, "observ": [7, 10], "jointli": 7, "x_j": [7, 8], "copi": 7, "1_j": 7, "n_j": 7, "label": 7, "aris": [7, 8], "practic": [7, 8, 11, 12, 13, 14], "certain": [7, 10], "frequent": [7, 8, 12, 13, 14], "pose": 7, "standard": [7, 12, 13, 14], "sinc": 7, "do": [7, 11, 13], "wrapper": [7, 12, 13, 14], "It": [7, 8, 9, 11, 12, 13, 14], "reliabl": 7, "accuraci": 7, "situat": [7, 8, 12, 13, 14], "confid": 7, "purpos": [7, 8, 12, 13, 14], "ask": [7, 8, 11], "eras": 7, "whose": [7, 8], "suitabl": [7, 8], "handl": [7, 12, 13, 14], "overrid": 7, "linear_model": [7, 9], "logisticregress": [7, 9], "minprob": [7, 9], "singleclass": [7, 9], "indep": [7, 9], "than": [7, 8], "99": [7, 8], "comp": [7, 12, 13, 14], "subsect": 7, "straightforwad": 7, "simpli": [7, 11], "feed": 7, "forward": 7, "neural": 7, "indeped": 7, "common": 7, "multioutput": 7, "chain": [7, 8], "alon": 7, "plu": 7, "classifierchain": 7, "neural_network": 7, "mlpclassifi": 7, "feedforward": 7, "spend": [7, 12, 13, 14], "effort": [7, 11], "tweak": 7, "usual": [7, 8, 12, 13, 14], "idea": 7, "what": [7, 8, 11], "impact": 7, "simplifi": [7, 11, 12, 13, 14], "task": [7, 8], "benchmark": [7, 10], "pre": 7, "miplib": 8, "tsplib": 8, "shortcom": 8, "howev": [8, 11, 12, 13, 14], "enhanc": [8, 10], "hundr": 8, "thousand": 8, "magnitud": 8, "homogen": 8, "buch": 8, "tackl": 8, "wide": 8, "classic": 8, "techniqu": [8, 10, 11], "measur": [8, 11], "As": [8, 11], "nine": 8, "customiz": 8, "flexibl": [8, 9], "divers": 8, "characterist": 8, "belong": 8, "algorithm": 8, "subject": [8, 12, 13, 14], "your": [8, 10, 12, 13, 14], "written": 8, 
"exponenti": 8, "mp": [8, 12, 13, 14], "trial": 8, "primal": [8, 9, 10, 11, 12, 14], "combinatori": 8, "finit": 8, "indivis": 8, "hard": 8, "warehous": 8, "manag": 8, "determin": 8, "transport": 8, "equal": [8, 12, 13, 14], "everi": [8, 11], "pair": [8, 11], "x_": 8, "ij": 8, "align": [8, 12, 13, 14], "foral": 8, "y_i": [8, 12, 13, 14], "binpack": 8, "ten": 8, "47": 8, "26": [8, 12, 13, 14], "19": [8, 11], "52": 8, "14": [8, 9], "65": 8, "21": [8, 11, 12, 13, 14], "76": 8, "82": 8, "16": [8, 9, 11, 13], "96": 8, "102": 8, "24": 8, "69": [8, 11], "22": 8, "78": [8, 11], "17": 8, "81": [8, 11], "83": 8, "12": [8, 11, 14], "67": 8, "46": 8, "05": [8, 12], "13": [8, 12, 13, 14], "66": [8, 11, 13], "18": [8, 11], "08": [8, 12, 13, 14], "93": [8, 11], "41": [8, 9], "55": 8, "15": [8, 9, 11, 12, 13, 14], "89": 8, "59": 8, "51": [8, 11], "68": [8, 11], "62": 8, "92": 8, "94": 8, "23": 8, "85": 8, "107": [8, 11], "77": [8, 11], "79": [8, 11], "06": [8, 12, 13, 14], "44": 8, "28": [8, 9, 11, 12, 14], "98": 8, "43": 8, "104": 8, "58": [8, 11], "87": 8, "74": 8, "61": 8, "07": [8, 12, 13, 14], "37": 8, "91": 8, "57": 8, "56": [8, 9], "97": 8, "09": [8, 12, 13, 14], "licens": [8, 9, 11, 12, 13, 14], "product": [8, 9, 11, 12, 13, 14], "expir": [8, 9, 11, 12, 14], "2024": [8, 9, 11, 12, 14], "gurobi": [8, 9, 10, 11, 12, 13, 14], "v10": [8, 9, 11, 12, 13, 14], "3rc0": [8, 9, 11, 12, 14], "linux64": [8, 9, 11, 12, 13, 14], "cpu": [8, 9, 11, 12, 13, 14], "13th": [8, 9, 11, 12, 14], "gen": [8, 9, 11, 12, 14], "intel": [8, 9, 11, 12, 14], "core": [8, 9, 11, 12, 13, 14], "tm": [8, 9, 11, 12, 14], "i7": [8, 9, 11, 12, 14], "13800h": [8, 9, 11, 12, 14], "instruct": [8, 9, 11, 12, 13, 14], "sse2": [8, 9, 11, 12, 13, 14], "avx": [8, 9, 11, 12, 13, 14], "avx2": [8, 9, 11, 12, 13, 14], "thread": [8, 9, 11, 12, 13, 14], "count": [8, 9, 11, 12, 13, 14], "physic": [8, 9, 11, 12, 13, 14], "logic": [8, 9, 11, 12, 13, 14], "processor": [8, 9, 11, 12, 13, 14], "110": [8, 9], "210": 8, "nonzero": [8, 9, 11, 12, 13, 14], "fingerprint": [8, 9, 11, 12, 13, 14], "0x1ff9913f": 8, "1e": [8, 9, 11, 12, 13, 14], "heurist": [8, 11, 12, 13, 14], "0000000": [8, 11, 12, 13, 14], "presolv": [8, 9, 11, 12, 13, 14], "root": [8, 9, 11, 12, 13, 14], "274844e": 8, "38": 8, "iter": [8, 9, 11, 12, 13, 14], "expl": [8, 9, 11, 12, 13, 14], "unexpl": [8, 9, 11, 12, 13, 14], "depth": [8, 9, 11, 12, 13, 14], "intinf": [8, 9, 11, 12, 13, 14], "incumb": [8, 9, 11, 12, 13, 14], "bestbd": [8, 9, 11, 12, 13, 14], "27484": 8, "h": [8, 11, 12, 13, 14], "simplex": [8, 9, 11, 12, 13, 14], "04": [8, 9, 11, 12, 13, 14], "000000000000e": 8, "0000": [8, 9, 11, 12, 13, 14], "involv": [8, 9], "place": 8, "total": [8, 12, 13, 14], "maxim": 8, "exceed": 8, "repres": [8, 11], "resourc": 8, "must": [8, 11], "satisfi": 8, "p_j": 8, "w_": 8, "b_i": 8, "alpha_i": 8, "u_j": 8, "gamma_": 8, "frevil": 8, "arnaud": 8, "g\u00e9rard": 8, "plateau": 8, "preprocess": 8, "procedur": [8, 11, 12, 13, 14], "multidimension": 8, "discret": [8, 10], "mathemat": [8, 12, 13, 14], "49": 8, "1994": 8, "189": 8, "212": 8, "fr\u00e9vill": 8, "european": 8, "155": 8, "2004": 8, "five": 8, "around": [8, 12, 13, 14], "392": 8, "977": 8, "764": [8, 13], "622": 8, "158": 8, "163": 8, "840": 8, "574": 8, "696": 8, "948": 8, "860": 8, "209": 8, "178": 8, "184": 8, "293": 8, "541": 8, "414": 8, "305": 8, "629": 8, "135": 8, "278": 8, "378": 8, "466": 8, "803": 8, "205": 8, "492": 8, "584": 8, "45": [8, 9], "630": 8, "173": 8, "907": 8, "947": 8, "794": 8, "312": 8, "711": 8, "439": 8, "117": 8, 
"506": 8, "35": [8, 11, 13], "915": 8, "266": 8, "662": 8, "516": 8, "521": 8, "1310": 8, "988": 8, "1004": 8, "1269": 8, "50": [8, 9, 11, 12, 13, 14], "0xaf3ac15": 8, "2e": [8, 9, 11, 12, 13, 14], "3e": [8, 12, 14], "7e": [8, 12, 13, 14], "804": 8, "remov": [8, 11, 12, 13, 14], "34": 8, "428726e": 8, "1428": 8, "7265": 8, "1279": 8, "000000": [8, 11], "plane": [8, 9, 11, 12, 14], "No": 8, "better": 8, "279000000000e": 8, "serv": 8, "goal": 8, "suppli": 8, "d_i": 8, "furthermor": 8, "c_j": 8, "pmedian": 8, "100x100": 8, "250": 8, "32": [8, 13], "33": 8, "86": 8, "88": 8, "72": 8, "80": 8, "39": [8, 9, 12], "71": 8, "70": [8, 12, 13, 14], "30": [8, 12, 13, 14], "73": 8, "101": 8, "63": 8, "54": 8, "111": 8, "27": 8, "151": 8, "237": 8, "241": 8, "202": 8, "171": 8, "220": 8, "0x8d8d9346": 8, "5e": 8, "4e": [8, 9], "368": 8, "7900000": 8, "245": 8, "6400000": 8, "000000e": [8, 9, 11, 12, 14], "64000": 8, "1900000": 8, "148": 8, "6300000": 8, "14595": 8, "113": 8, "1800000": 8, "84": 8, "18000": 8, "5000000": 8, "3900000": 8, "9800000": 8, "28872": 8, "31": 8, "98000": 8, "9200000": 8, "06884": 8, "92000": 8, "2300000": 8, "23000": 8, "123000000000e": 8, "aim": 8, "overlap": 8, "real": [8, 12, 13, 14], "scenario": 8, "schedul": [8, 11], "alloc": 8, "s_1": 8, "s_m": 8, "union": [8, 12, 14], "w_j": 8, "s_j": 8, "geq": [8, 12, 13, 14], "setcovergener": 8, "n_element": 8, "n_set": 8, "incid": 8, "densiti": 8, "d": [8, 12, 13, 14], "entri": 8, "bernoulli": 8, "least": 8, "identifi": [8, 10, 11], "uniformli": 8, "modifi": 8, "denot": [8, 12, 13, 14], "fix_set": 8, "costs_jitt": 8, "setcov": 8, "build_setcover_model_gurobipi": 8, "interv": 8, "1044": 8, "850": 8, "1014": 8, "944": 8, "697": 8, "971": 8, "213": 8, "425": 8, "0xe5c2d4fa": 8, "4900000": 8, "134900000000e": 8, "disjoint": 8, "within": 8, "airlin": [8, 11], "flight": 8, "crew": 8, "setpackgener": 8, "detail": [8, 11], "document": 8, "setpack": 8, "build_setpack_model": [], "0x4ee91388": 8, "1265": 8, "560000": 8, "1986": 8, "986370000000e": 8, "theori": 8, "vertic": 8, "adjac": [8, 11], "simultan": [8, 12, 13, 14], "conflict": 8, "v": 8, "undirect": 8, "x_v": 8, "x_u": 8, "recal": [8, 11], "edg": [8, 11], "probabilti": 8, "stab": 8, "build_stab_model_gurobipi": [8, 11], "60": [8, 11, 12, 13, 14], "48": 8, "precrush": [8, 9, 11], "0x3240ea4a": 8, "6e": [8, 12, 13, 14], "219": 8, "1400000": 8, "205650e": 8, "14000": 8, "191400000000e": 8, "callback": [8, 9, 11, 13], "299": 8, "sec": [8, 9, 11, 13], "citi": [8, 11], "shortest": [8, 11], "visit": [8, 11], "hamiltonian": [8, 11], "path": [8, 11, 12, 13, 14], "karp": [8, 11], "deliveri": [8, 11], "truck": [8, 11], "d_e": 8, "x_e": 8, "delta": 8, "subsetneq": 8, "neq": 8, "emptyset": 8, "extrem": 8, "setminu": 8, "inequ": 8, "initi": [8, 11, 12, 13, 14], "d_": 8, "sqrt": 8, "1000x1000": 8, "box": 8, "513": 8, "762": 8, "358": 8, "325": 8, "374": 8, "932": 8, "731": 8, "391": 8, "634": 8, "726": 8, "765": 8, "754": 8, "409": 8, "719": 8, "446": 8, "400": 8, "780": 8, "756": 8, "744": 8, "656": 8, "383": 8, "334": 8, "549": 8, "925": 8, "702": 8, "422": 8, "728": 8, "663": 8, "526": 8, "708": 8, "377": 8, "462": 8, "1072": 8, "802": 8, "501": 8, "853": 8, "654": 8, "603": 8, "433": 8, "381": 8, "255": 8, "287": 8, "493": 8, "900": 8, "354": 8, "323": 8, "367": 8, "841": 8, "727": 8, "444": 8, "668": 8, "690": 8, "687": 8, "175": 8, "725": 8, "398": 8, "666": 8, "827": 8, "736": 8, "371": [8, 13], "317": 8, "570": 8, "1090": 8, "712": 8, "648": 8, "655": 8, "650": 8, "356": 8, "469": 8, "1146": 8, 
"779": 8, "476": 8, "752": 8, "681": 8, "565": [8, 12, 14], "394": 8, "286": 8, "274": 8, "lazyconstraint": [8, 9, 11], "0x719675e5": 8, "921000e": 8, "921000000000e": 8, "106": 8, "power": [8, 10, 12, 13, 14], "turn": [8, 12, 13, 14], "off": 8, "meet": 8, "electr": [8, 12, 13, 14], "lowest": 8, "ramp": 8, "prevent": 8, "output": [8, 13], "level": [8, 11], "too": 8, "quickli": 8, "minimum": 8, "switch": 8, "system": [8, 10, 12, 13, 14], "plan": 8, "doe": [8, 11, 12, 13, 14], "trajectori": 8, "piecewis": 8, "curv": 8, "transmiss": 8, "secur": 8, "realist": [8, 12, 13, 14], "unitcommit": 8, "jl": 8, "t": 8, "d_t": 8, "mw": [8, 12, 13, 14], "max_g": 8, "min_g": 8, "l_g": 8, "regardless": 8, "var": [8, 10, 12, 13, 14], "gt": 8, "p_": 8, "_g": 8, "gk": 8, "cannot": [8, 14], "symmetr": 8, "fourth": 8, "period": 8, "fifth": 8, "sixth": 8, "quantiti": 8, "unitcommitmentgener": 8, "n_unit": 8, "n_period": 8, "valid": 8, "rather": 8, "startup": 8, "peak": 8, "4c": 8, "8c": 8, "cost_jitt": 8, "demand_jitt": 8, "fix_unit": 8, "uc": [8, 12, 13, 14], "450": [8, 11, 12, 13, 14], "10_000": 8, "1_000": 8, "f": [8, 11], "271": 8, "207": 8, "218": 8, "477": 8, "379": 8, "319": 8, "120": 8, "3042": 8, "5247": 8, "4319": 8, "2912": 8, "6118": 8, "53": 8, "199": 8, "514": 8, "592": 8, "607": 8, "905": 8, "1166": 8, "1212": 8, "1127": 8, "953": 8, "796": 8, "783": 8, "866": 8, "768": 8, "899": 8, "946": 8, "1087": 8, "1048": 8, "992": 8, "750": 8, "691": 8, "606": 8, "658": 8, "809": 8, "2458": 8, "6200": 8, "4585": 8, "2666": 8, "4783": 8, "196": 8, "416": 8, "626": 8, "981": 8, "1095": 8, "1102": 8, "1088": 8, "863": 8, "848": 8, "761": 8, "828": 8, "775": 8, "834": 8, "959": 8, "865": 8, "1193": 8, "985": 8, "893": 8, "962": 8, "781": 8, "723": 8, "639": 8, "602": 8, "787": 8, "578": 8, "360": 8, "2128": 8, "0x4dc1c661": 8, "240": 8, "244": 8, "131": 8, "229": 8, "842": 8, "116": 8, "440662": 8, "46430": 8, "429461": 8, "97680": 8, "374043": 8, "64040": 8, "361348e": 8, "142": 8, "336134": 8, "820": 8, "640": 8, "368600": 8, "14450": 8, "364721": 8, "76610": 8, "cutoff": [8, 9], "766": 8, "gomori": [8, 11, 12, 14], "cliqu": 8, "222": [8, 11], "mir": [8, 11, 12, 14], "flow": [8, 12, 14], "rlt": 8, "lift": 8, "234": 8, "364722": 8, "374044": 8, "647217661000e": 8, "connect": 8, "bioinformat": 8, "w_g": 8, "minweightvertexcovergener": 8, "behav": 8, "vertexcov": 8, "build_vertexcover_model": [], "0x2d2d1390": 8, "301": 8, "995750e": 8, "010000000000e": 8, "individu": [9, 12, 13, 14], "integr": 9, "aforement": 9, "cohes": 9, "whole": 9, "conclud": 9, "runnabl": 9, "compos": 9, "target": 9, "part": [9, 11], "architectur": 9, "enabl": 9, "equival": 9, "tradit": 9, "sequenti": 9, "could": [9, 12, 13, 14], "achiev": [9, 12, 13, 14], "dictionari": [9, 11], "train_data": [9, 11, 12, 13, 14], "test_data": [9, 11, 12, 13, 14], "action": [9, 10, 12, 13, 14], "all_data": 9, "split": 9, "0x6ddcd141": 9, "inf": [9, 11, 12, 14], "3600000e": 9, "700000e": [9, 11], "7610000e": 9, "761000000e": 9, "0x74ca3d0a": 9, "2796": 9, "761000e": 9, "2761": 9, "796000000000e": 9, "extens": 10, "state": [10, 12, 13, 14], "art": [10, 12, 13, 14], "cplex": [10, 11, 12, 13, 14], "xpress": [10, 11, 12, 13, 14], "unlik": [10, 11], "prove": 10, "full": 10, "happen": 10, "famili": 10, "redund": 10, "pyomo": [10, 11, 12, 13], "jump": [10, 11, 12, 14], "overview": 10, "cover": [10, 12, 14], "vertex": 10, "joint": 10, "expert": 10, "exampl": [10, 11, 12, 13, 14], "helper": 10, "alinson": 10, "xavier": 10, "argonn": 10, "laboratori": 10, "feng": 10, 
"qiu": 10, "xiaoyi": 10, "gu": 10, "georgia": 10, "institut": 10, "technologi": 10, "berkai": 10, "becu": 10, "santanu": 10, "dei": 10, "upon": 10, "direct": 10, "ldrd": 10, "fund": 10, "director": 10, "offic": 10, "scienc": 10, "depart": 10, "energi": 10, "grid": [10, 12, 13, 14], "kindli": 10, "packag": [10, 11, 12, 13, 14], "zenodo": 10, "2023": 10, "5281": 10, "4287567": 10, "shabbir": 10, "ahm": 10, "2020": 10, "1287": 10, "ijoc": 10, "0976": 10, "tighten": 11, "region": 11, "elimin": 11, "thu": 11, "formul": [11, 12, 13, 14], "omit": 11, "success": 11, "tutori": [11, 12, 13, 14], "subtour": 11, "correctli": 11, "instal": 11, "compat": [11, 12, 13, 14], "julia": [11, 12, 13, 14], "sourc": [11, 12, 13, 14], "code": [11, 12, 13, 14], "build_tsp_model_pyomo": 11, "build_tsp_model_jump": 11, "gurobi_persist": [11, 14], "pr": 11, "persist": [11, 14], "welcom": [11, 12, 13, 14], "scip": 11, "glpk": 11, "cbc": 11, "newer": [11, 12, 13, 14], "further": 11, "becom": 11, "log": [11, 12, 13, 14], "basicconfig": 11, "getlogg": 11, "setlevel": 11, "critic": [], "memorizingcutscompon": [], "expertcutscompon": [], "expertlazycompon": [], "memorizinglazycompon": 11, "500": [11, 12, 13, 14], "info": 11, "225": [], "1225": 11, "2450": 11, "0x04d7bec1": 11, "0600000e": 11, "5880000e": 11, "588000000e": 11, "ahead": 11, "6091": 11, "0x09bd34d6": 11, "29853": 11, "139000e": 11, "6139": 11, "6390": 11, "6165": 11, "50000": 11, "6198": 11, "6219": 11, "half": 11, "219000000000e": 11, "143": 8, "aot": [], "0x77a94572": 11, "29695": 11, "588000e": 11, "5588": 11, "27241": 11, "5898": 11, "6066": 11, "6128": 11, "6368": 11, "6154": 11, "75000": 11, "6204": 11, "224": 11, "togeth": [12, 13, 14], "broader": [12, 14], "earli": [12, 13, 14], "stage": [12, 13, 14], "bug": [12, 13, 14], "submit": [12, 13, 14], "report": [12, 13, 14], "our": [11, 12, 13, 14], "github": [12, 13, 14], "repositori": [12, 13, 14], "comment": [12, 13, 14], "pull": [12, 13, 14], "languag": [12, 13, 14], "offici": [12, 13, 14], "websit": [12, 13, 14], "pip": [12, 14], "commerci": [12, 13, 14], "milp": [12, 13, 14], "demo": [12, 14], "possibli": [12, 13, 14], "incompat": [12, 13, 14], "releas": [12, 13, 14], "recommend": [11, 12, 13, 14], "project": [12, 13, 14], "simplif": [12, 13, 14], "daili": [12, 13, 14], "compani": [12, 13, 14], "onlin": [12, 13, 14], "hour": [12, 13, 14], "dai": [12, 13, 14], "own": [12, 13, 14], "g_1": [12, 13, 14], "g_n": [12, 13, 14], "offlin": [12, 13, 14], "g_i": [12, 13, 14], "megawatt": [12, 13, 14], "noth": [12, 13, 14], "quad": [12, 13, 14], "hold": [11, 12, 13, 14], "dataclass": [11, 12, 14], "pmin": [12, 13, 14], "pmax": [12, 13, 14], "cfix": [12, 13, 14], "cvar": [12, 13, 14], "structur": [12, 13, 14], "gp": [11, 12], "grb": [11, 12], "quicksum": [11, 12], "def": [11, 12, 14], "isinst": [11, 12, 14], "len": [11, 12, 14], "_x": 12, "addvar": [11, 12], "vtype": [11, 12], "_y": 12, "setobject": [11, 12], "addconstr": [11, 12], "At": [12, 13, 14], "700": [12, 13, 14], "600": [12, 13, 14], "objval": 12, "0x58dfdd53": 12, "1400": [12, 13, 14], "035000e": [12, 13, 14], "1035": [12, 13, 14], "1105": [12, 13, 14], "71429": [12, 13, 14], "1320": [12, 13, 14], "320000000000e": [12, 13, 14], "thin": [12, 13, 14], "agnost": [11, 12, 13, 14], "control": [12, 13, 14], "queri": [11, 12, 13, 14], "consist": 12, "slower": [12, 13, 14], "upfront": [12, 13, 14], "histor": [12, 13, 14], "random_uc_data": [12, 13, 14], "100_000": [12, 13, 14], "400_000": [12, 14], "rv": [12, 14], "simplic": [12, 13, 14], "now": [12, 13, 14], 
"impract": [12, 13, 14], "export": [12, 13, 14], "With": [12, 13, 14], "agre": [12, 13, 14], "unanim": [12, 13, 14], "solver_ml": [12, 13, 14], "1001": [12, 13, 14], "2500": [12, 13, 14], "0xa8b70287": 12, "6166537e": [12, 14], "648803e": [12, 14], "2906219e": [12, 14], "290621916e": [12, 14], "0xcf27855a": 12, "29153e": [12, 14], "290622e": [12, 14], "512": [12, 14], "2906e": [12, 14], "2915e": [12, 14], "2907e": [12, 14], "291528276179e": [12, 14], "290733258025e": [12, 14], "0096": [12, 14], "482": 12, "examin": [12, 13, 14], "line": [12, 13, 14], "repeat": [12, 13, 14], "solver_baselin": [12, 13, 14], "0x4cbbf7c7": 12, "757128e": [12, 14], "7571e": [12, 14], "298273e": [12, 14], "2983e": [12, 14], "293980e": [12, 14], "2940e": [12, 14], "2908e": [12, 14], "291465e": [12, 14], "1031": 12, "29147e": [12, 14], "29398e": [12, 14], "29827e": [12, 14], "75713e": [12, 14], "291465302389e": [12, 14], "290781665333e": [12, 14], "0082": [12, 14], "miss": [12, 13, 14], "had": [12, 13, 14], "significantli": [12, 13, 14], "inferior": [12, 13, 14], "intern": [12, 13, 14], "almost": [12, 13, 14], "term": [12, 13, 14], "0x19042f12": 12, "5917580e": [12, 14], "627453e": [12, 14], "2535968e": [12, 14], "253596777e": [12, 14], "0xf97cde91": 12, "25814e": [12, 14], "25512e": [12, 14], "25483e": [12, 14], "25459e": [12, 14], "253597e": [12, 14], "2536e": [12, 14], "2546e": [12, 14], "2537e": [12, 14], "2538e": [12, 14], "strongcg": [12, 14], "575": [12, 14], "254590409970e": [12, 14], "253768093811e": [12, 14], "0100": [12, 14], "8254590409": [12, 14], "969726": 12, "935662": [12, 14], "0949262811": [12, 14], "1604270": [12, 14], "0218116897": [12, 14], "launch": 13, "repl": 13, "enter": 13, "pkg": 13, "pycal": 13, "suppressor": 13, "cleaner": 13, "replac": [13, 14], "struct": 13, "float64": 13, "jld2": 13, "isa": 13, "read_jld2": 13, "length": 13, "eq_max_pow": [13, 14], "eq_min_pow": [13, 14], "eq_demand": [13, 14], "jumpmodel": 13, "objective_valu": 13, "1rc0": 13, "amd": 13, "ryzen": 13, "7950x": 13, "avx512": 13, "0x55e33a07": 13, "0e": 13, "500_000": 13, "125": 13, "00002": 13, "write_jld2": 13, "451": 13, "suppress_out": 13, "knn": 13, "pyimport": 13, "0xd2378195": 13, "02165e": 13, "021568e": 13, "510": 13, "0216e": 13, "0217e": 13, "021651058978e": 13, "021567971257e": 13, "0081": 13, "169": 13, "0xb45c0594": 13, "071463e": 13, "0715e": 13, "025162e": 13, "0252e": 13, "023090e": 13, "022335e": 13, "022281e": 13, "021753e": 13, "021752e": 13, "0218e": 13, "021651e": 13, "02175e": 13, "02228e": 13, "07146e": 13, "021573363741e": 13, "0076": 13, "204": 13, "0x974a7fba": 13, "86729e": 13, "86675e": 13, "86654e": 13, "8661e": 13, "865344e": 13, "8653e": 13, "866096485614e": 13, "865343669936e": 13, "182": 13, "866096485613789e9": 13, "environ": 14, "pe": 14, "pyomomodel": 14, "concretemodel": 14, "domain": 14, "nonnegativer": 14, "expr": 14, "constraintlist": 14, "qcpdual": 14, "0x15c7a953": 14, "cplex_persist": 14, "xpress_persist": 14, "0x5e67c6e": 14, "0x4a7cfe2b": 14, "0x8a0f9587": 14, "1025": 14, "0x2dfe4e1c": 14, "0x0f0924a1": 14, "96973": 14, "1369560": 14, "835229226": 14, "602828": 14, "5321028307": 14, "ndar": [], "rai": [], "travelingsalesmandgener": 11, "actual": 11, "annot": 11, "tuplelist": 11, "nx": 11, "build_tsp_model_gurobipy_simplifi": 11, "eq_degre": 11, "x_val": 11, "cbgetsolut": 11, "selected_edg": 11, "add_edges_from": 11, "connected_compon": 11, "cut_edg": 11, "append": 11, "regular": 11, "uniqu": 11, "responsbl": 11, "serial": 11, "tupl": 11, "0x9904dd15": [], "8600000e": [], 
"6185000e": [], "618500000e": [], "112": [], "12482": [], "0xd7027ec2": [], "237500e": [], "6237": [], "6577": [], "6314": [], "6250": [], "6287": [], "6311": [], "232": [], "314000000000e": [], "145": [], "wherea": 11, "necessari": 11, "verifi": 11, "major": 11, "increas": 11, "so": 11, "0xdcf5ae58": [], "29294": [], "618500e": [], "5618": [], "26112": [], "6110": [], "6583": [], "6243": [], "6278": [], "6581": [], "6519": [], "6283": [], "6313": [], "306": [], "223": [], "There": 11, "12411": [], "0x48832a9e": [], "6939": [], "6147": [], "6194": [], "147": [], "172": [], "focus": 11, "cbgetnoderel": 11, "build_stab_model_pyomo": 11, "build_stab_model_jump": 11, "144": [], "141": 11, "170": 11, "build_binpack_model_gurobipi": [3, 8], "build_multiknapsack_model_gurobipi": [3, 6, 8], "build_pmedian_model_gurobipi": [3, 8], "build_uc_model_gurobipi": [3, 8], "490": 8, "190": 8, "build_setpack_model_gurobipi": 8, "238": 8, "677": 8, "build_vertexcover_model_gurobipi": 8, "326": 8}, "objects": {"miplearn.classifiers": [[0, 0, 0, "-", "minprob"], [0, 0, 0, "-", "singleclass"]], "miplearn.classifiers.minprob": [[0, 1, 1, "", "MinProbabilityClassifier"]], "miplearn.classifiers.minprob.MinProbabilityClassifier": [[0, 2, 1, "", "fit"], [0, 2, 1, "", "predict"], [0, 2, 1, "", "set_fit_request"], [0, 2, 1, "", "set_predict_request"]], "miplearn.classifiers.singleclass": [[0, 1, 1, "", "SingleClassFix"]], "miplearn.classifiers.singleclass.SingleClassFix": [[0, 2, 1, "", "fit"], [0, 2, 1, "", "predict"], [0, 2, 1, "", "set_fit_request"], [0, 2, 1, "", "set_predict_request"]], "miplearn.collectors": [[0, 0, 0, "-", "basic"]], "miplearn.collectors.basic": [[0, 1, 1, "", "BasicCollector"]], "miplearn.collectors.basic.BasicCollector": [[0, 2, 1, "", "collect"]], "miplearn.components.primal": [[1, 0, 0, "-", "actions"], [1, 0, 0, "-", "expert"], [1, 0, 0, "-", "indep"], [1, 0, 0, "-", "joint"], [1, 0, 0, "-", "mem"]], "miplearn.components.primal.actions": [[1, 1, 1, "", "EnforceProximity"], [1, 1, 1, "", "FixVariables"], [1, 1, 1, "", "PrimalComponentAction"], [1, 1, 1, "", "SetWarmStart"]], "miplearn.components.primal.actions.EnforceProximity": [[1, 2, 1, "", "perform"]], "miplearn.components.primal.actions.FixVariables": [[1, 2, 1, "", "perform"]], "miplearn.components.primal.actions.PrimalComponentAction": [[1, 2, 1, "", "perform"]], "miplearn.components.primal.actions.SetWarmStart": [[1, 2, 1, "", "perform"]], "miplearn.components.primal.expert": [[1, 1, 1, "", "ExpertPrimalComponent"]], "miplearn.components.primal.expert.ExpertPrimalComponent": [[1, 2, 1, "", "before_mip"], [1, 2, 1, "", "fit"]], "miplearn.components.primal.indep": [[1, 1, 1, "", "IndependentVarsPrimalComponent"]], "miplearn.components.primal.indep.IndependentVarsPrimalComponent": [[1, 2, 1, "", "before_mip"], [1, 2, 1, "", "fit"]], "miplearn.components.primal.joint": [[1, 1, 1, "", "JointVarsPrimalComponent"]], "miplearn.components.primal.joint.JointVarsPrimalComponent": [[1, 2, 1, "", "before_mip"], [1, 2, 1, "", "fit"]], "miplearn.components.primal.mem": [[1, 1, 1, "", "MemorizingPrimalComponent"], [1, 1, 1, "", "MergeTopSolutions"], [1, 1, 1, "", "SelectTopSolutions"], [1, 1, 1, "", "SolutionConstructor"]], "miplearn.components.primal.mem.MemorizingPrimalComponent": [[1, 2, 1, "", "before_mip"], [1, 2, 1, "", "fit"]], "miplearn.components.primal.mem.MergeTopSolutions": [[1, 2, 1, "", "construct"]], "miplearn.components.primal.mem.SelectTopSolutions": [[1, 2, 1, "", "construct"]], 
"miplearn.components.primal.mem.SolutionConstructor": [[1, 2, 1, "", "construct"]], "miplearn.extractors": [[0, 0, 0, "-", "AlvLouWeh2017"], [0, 0, 0, "-", "fields"]], "miplearn.extractors.AlvLouWeh2017": [[0, 1, 1, "", "AlvLouWeh2017Extractor"]], "miplearn.extractors.AlvLouWeh2017.AlvLouWeh2017Extractor": [[0, 2, 1, "", "get_constr_features"], [0, 2, 1, "", "get_instance_features"], [0, 2, 1, "", "get_var_features"]], "miplearn.extractors.fields": [[0, 1, 1, "", "H5FieldsExtractor"]], "miplearn.extractors.fields.H5FieldsExtractor": [[0, 2, 1, "", "get_constr_features"], [0, 2, 1, "", "get_instance_features"], [0, 2, 1, "", "get_var_features"]], "miplearn": [[2, 0, 0, "-", "h5"], [2, 0, 0, "-", "io"]], "miplearn.h5": [[2, 1, 1, "", "H5File"]], "miplearn.h5.H5File": [[2, 2, 1, "", "close"], [2, 2, 1, "", "get_array"], [2, 2, 1, "", "get_bytes"], [2, 2, 1, "", "get_scalar"], [2, 2, 1, "", "get_sparse"], [2, 2, 1, "", "put_array"], [2, 2, 1, "", "put_bytes"], [2, 2, 1, "", "put_scalar"], [2, 2, 1, "", "put_sparse"]], "miplearn.io": [[2, 3, 1, "", "gzip"], [2, 3, 1, "", "read_pkl_gz"], [2, 3, 1, "", "write_pkl_gz"]], "miplearn.problems": [[3, 0, 0, "-", "binpack"], [3, 0, 0, "-", "multiknapsack"], [3, 0, 0, "-", "pmedian"], [3, 0, 0, "-", "setcover"], [3, 0, 0, "-", "setpack"], [3, 0, 0, "-", "stab"], [3, 0, 0, "-", "tsp"], [3, 0, 0, "-", "uc"], [3, 0, 0, "-", "vertexcover"]], "miplearn.problems.binpack": [[3, 1, 1, "", "BinPackData"], [3, 1, 1, "", "BinPackGenerator"], [3, 3, 1, "", "build_binpack_model_gurobipy"]], "miplearn.problems.binpack.BinPackGenerator": [[3, 2, 1, "", "generate"]], "miplearn.problems.multiknapsack": [[3, 1, 1, "", "MultiKnapsackData"], [3, 1, 1, "", "MultiKnapsackGenerator"], [3, 3, 1, "", "build_multiknapsack_model_gurobipy"]], "miplearn.problems.pmedian": [[3, 1, 1, "", "PMedianData"], [3, 1, 1, "", "PMedianGenerator"], [3, 3, 1, "", "build_pmedian_model_gurobipy"]], "miplearn.problems.setcover": [[3, 1, 1, "", "SetCoverData"]], "miplearn.problems.setpack": [[3, 1, 1, "", "SetPackData"]], "miplearn.problems.stab": [[3, 1, 1, "", "MaxWeightStableSetData"], [3, 1, 1, "", "MaxWeightStableSetGenerator"]], "miplearn.problems.tsp": [[3, 1, 1, "", "TravelingSalesmanData"], [3, 1, 1, "", "TravelingSalesmanGenerator"]], "miplearn.problems.uc": [[3, 1, 1, "", "UnitCommitmentData"], [3, 3, 1, "", "build_uc_model_gurobipy"]], "miplearn.problems.vertexcover": [[3, 1, 1, "", "MinWeightVertexCoverData"]], "miplearn.solvers": [[4, 0, 0, "-", "abstract"], [4, 0, 0, "-", "gurobi"], [4, 0, 0, "-", "learning"]], "miplearn.solvers.abstract": [[4, 1, 1, "", "AbstractModel"]], "miplearn.solvers.abstract.AbstractModel": [[4, 4, 1, "", "WHERE_CUTS"], [4, 4, 1, "", "WHERE_DEFAULT"], [4, 4, 1, "", "WHERE_LAZY"], [4, 2, 1, "", "add_constrs"], [4, 2, 1, "", "extract_after_load"], [4, 2, 1, "", "extract_after_lp"], [4, 2, 1, "", "extract_after_mip"], [4, 2, 1, "", "fix_variables"], [4, 2, 1, "", "lazy_enforce"], [4, 2, 1, "", "optimize"], [4, 2, 1, "", "relax"], [4, 2, 1, "", "set_cuts"], [4, 2, 1, "", "set_warm_starts"], [4, 2, 1, "", "write"]], "miplearn.solvers.gurobi": [[4, 1, 1, "", "GurobiModel"]], "miplearn.solvers.gurobi.GurobiModel": [[4, 2, 1, "", "add_constr"], [4, 2, 1, "", "add_constrs"], [4, 2, 1, "", "extract_after_load"], [4, 2, 1, "", "extract_after_lp"], [4, 2, 1, "", "extract_after_mip"], [4, 2, 1, "", "fix_variables"], [4, 2, 1, "", "optimize"], [4, 2, 1, "", "relax"], [4, 2, 1, "", "set_time_limit"], [4, 2, 1, "", "set_warm_starts"], [4, 2, 1, "", "write"]], 
"miplearn.solvers.learning": [[4, 1, 1, "", "LearningSolver"]], "miplearn.solvers.learning.LearningSolver": [[4, 2, 1, "", "fit"], [4, 2, 1, "", "optimize"]]}, "objtypes": {"0": "py:module", "1": "py:class", "2": "py:method", "3": "py:function", "4": "py:attribute"}, "objnames": {"0": ["py", "module", "Python module"], "1": ["py", "class", "Python class"], "2": ["py", "method", "Python method"], "3": ["py", "function", "Python function"], "4": ["py", "attribute", "Python attribute"]}, "titleterms": {"collector": [0, 5], "extractor": [0, 6], "miplearn": [0, 1, 2, 3, 4, 10], "classifi": 0, "minprob": 0, "singleclass": 0, "basic": [0, 5], "field": [0, 5], "alvlouweh2017": 0, "compon": [1, 7], "primal": [1, 7], "action": [1, 7], "expert": [1, 7], "indep": 1, "joint": [1, 7], "mem": 1, "helper": 2, "io": 2, "h5": 2, "benchmark": [3, 8], "problem": [3, 8, 11, 12, 13, 14], "binpack": 3, "multiknapsack": 3, "pmedian": 3, "setcov": 3, "setpack": 3, "stab": 3, "tsp": 3, "uc": 3, "vertexcov": 3, "solver": [4, 9], "abstract": 4, "gurobi": 4, "learn": [4, 9, 11], "train": [5, 9, 11, 12, 13, 14], "data": [5, 11, 12, 13, 14], "overview": [5, 6, 8], "hdf5": 5, "format": 5, "exampl": [5, 6, 7, 8, 9], "featur": 6, "h5fieldsextractor": 6, "alvlouweh2017extractor": 6, "memor": 7, "independ": 7, "var": 7, "bin": 8, "pack": 8, "formul": 8, "random": 8, "instanc": [8, 9, 11, 12, 13, 14], "gener": [8, 11, 12, 13, 14], "multi": 8, "dimension": 8, "knapsack": 8, "capacit": 8, "p": 8, "median": 8, "set": 8, "cover": 8, "stabl": 8, "travel": [8, 11], "salesman": [8, 11], "unit": 8, "commit": 8, "vertex": 8, "configur": 9, "solv": [9, 11, 12, 13, 14], "new": [9, 11], "complet": 9, "content": 10, "tutori": 10, "user": [10, 11], "guid": 10, "python": 10, "api": 10, "refer": 10, "author": 10, "acknowledg": 10, "cite": 10, "cut": 11, "lazi": 11, "constraint": 11, "The": [], "get": [12, 13, 14], "start": [12, 13, 14], "gurobipi": 12, "introduct": [12, 13, 14], "instal": [12, 13, 14], "model": [11, 12, 13, 14], "simpl": [12, 13, 14], "optim": [12, 13, 14], "test": [12, 13, 14], "access": [12, 13, 14], "solut": [12, 13, 14], "jump": 13, "pyomo": 14}, "envversion": {"sphinx.domains.c": 3, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 9, "sphinx.domains.index": 1, "sphinx.domains.javascript": 3, "sphinx.domains.math": 2, "sphinx.domains.python": 4, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "nbsphinx": 4, "sphinx": 60}, "alltitles": {"Helpers": [[2, "helpers"]], "miplearn.io": [[2, "module-miplearn.io"]], "miplearn.h5": [[2, "module-miplearn.h5"]], "Example": [[7, "Example"], [5, "Example"], [5, "id1"], [6, "Example"], [6, "id1"], [8, "Example"], [8, "id3"], [8, "id6"], [8, "id9"], [8, "id12"], [8, "id15"], [8, "id18"], [8, "id21"], [8, "id24"]], "Primal Components": [[7, "Primal-Components"]], "Primal component actions": [[7, "Primal-component-actions"]], "Memorizing primal component": [[7, "Memorizing-primal-component"]], "Examples": [[7, "Examples"], [7, "id1"], [7, "id2"]], "Independent vars primal component": [[7, "Independent-vars-primal-component"]], "Joint vars primal component": [[7, "Joint-vars-primal-component"]], "Expert primal component": [[7, "Expert-primal-component"]], "MIPLearn": [[10, "miplearn"]], "Contents": [[10, "contents"]], "Tutorials": [[10, null]], "User Guide": [[10, null]], "Python API Reference": [[10, null]], "Authors": [[10, "authors"]], "Acknowledgments": [[10, "acknowledgments"]], "Citing MIPLearn": [[10, "citing-miplearn"]], "Generating training 
data": [[12, "Generating-training-data"], [13, "Generating-training-data"], [14, "Generating-training-data"], [11, "Generating-training-data"]], "Getting started (Gurobipy)": [[12, "Getting-started-(Gurobipy)"]], "Introduction": [[12, "Introduction"], [13, "Introduction"], [14, "Introduction"]], "Installation": [[12, "Installation"], [13, "Installation"], [14, "Installation"]], "Modeling a simple optimization problem": [[12, "Modeling-a-simple-optimization-problem"], [13, "Modeling-a-simple-optimization-problem"], [14, "Modeling-a-simple-optimization-problem"]], "Training and solving test instances": [[12, "Training-and-solving-test-instances"], [13, "Training-and-solving-test-instances"], [14, "Training-and-solving-test-instances"]], "Accessing the solution": [[12, "Accessing-the-solution"], [13, "Accessing-the-solution"], [14, "Accessing-the-solution"]], "Getting started (JuMP)": [[13, "Getting-started-(JuMP)"]], "Getting started (Pyomo)": [[14, "Getting-started-(Pyomo)"]], "Collectors & Extractors": [[0, "collectors-extractors"]], "miplearn.classifiers.minprob": [[0, "module-miplearn.classifiers.minprob"]], "miplearn.classifiers.singleclass": [[0, "module-miplearn.classifiers.singleclass"]], "miplearn.collectors.basic": [[0, "module-miplearn.collectors.basic"]], "miplearn.extractors.fields": [[0, "module-miplearn.extractors.fields"]], "miplearn.extractors.AlvLouWeh2017": [[0, "module-miplearn.extractors.AlvLouWeh2017"]], "Components": [[1, "components"]], "miplearn.components.primal.actions": [[1, "module-miplearn.components.primal.actions"]], "miplearn.components.primal.expert": [[1, "module-miplearn.components.primal.expert"]], "miplearn.components.primal.indep": [[1, "module-miplearn.components.primal.indep"]], "miplearn.components.primal.joint": [[1, "module-miplearn.components.primal.joint"]], "miplearn.components.primal.mem": [[1, "module-miplearn.components.primal.mem"]], "Benchmark Problems": [[3, "benchmark-problems"], [8, "Benchmark-Problems"]], "miplearn.problems.binpack": [[3, "module-miplearn.problems.binpack"]], "miplearn.problems.multiknapsack": [[3, "module-miplearn.problems.multiknapsack"]], "miplearn.problems.pmedian": [[3, "module-miplearn.problems.pmedian"]], "miplearn.problems.setcover": [[3, "module-miplearn.problems.setcover"]], "miplearn.problems.setpack": [[3, "module-miplearn.problems.setpack"]], "miplearn.problems.stab": [[3, "module-miplearn.problems.stab"]], "miplearn.problems.tsp": [[3, "module-miplearn.problems.tsp"]], "miplearn.problems.uc": [[3, "module-miplearn.problems.uc"]], "miplearn.problems.vertexcover": [[3, "module-miplearn.problems.vertexcover"]], "Solvers": [[4, "solvers"]], "miplearn.solvers.abstract": [[4, "module-miplearn.solvers.abstract"]], "miplearn.solvers.gurobi": [[4, "module-miplearn.solvers.gurobi"]], "miplearn.solvers.learning": [[4, "module-miplearn.solvers.learning"]], "Training Data Collectors": [[5, "Training-Data-Collectors"]], "Overview": [[5, "Overview"], [6, "Overview"], [8, "Overview"]], "HDF5 Format": [[5, "HDF5-Format"]], "Basic collector": [[5, "Basic-collector"]], "Data fields": [[5, "Data-fields"]], "Feature Extractors": [[6, "Feature-Extractors"]], "H5FieldsExtractor": [[6, "H5FieldsExtractor"]], "AlvLouWeh2017Extractor": [[6, "AlvLouWeh2017Extractor"]], "Bin Packing": [[8, "Bin-Packing"]], "Formulation": [[8, "Formulation"], [8, "id1"], [8, "id4"], [8, "id7"], [8, "id10"], [8, "id13"], [8, "id16"], [8, "id19"], [8, "id22"]], "Random instance generator": [[8, "Random-instance-generator"], [8, "id2"], [8, "id5"], [8, 
"id8"], [8, "id11"], [8, "id14"], [8, "id17"], [8, "id20"], [8, "id23"]], "Multi-Dimensional Knapsack": [[8, "Multi-Dimensional-Knapsack"]], "Capacitated P-Median": [[8, "Capacitated-P-Median"]], "Set cover": [[8, "Set-cover"]], "Set Packing": [[8, "Set-Packing"]], "Stable Set": [[8, "Stable-Set"]], "Traveling Salesman": [[8, "Traveling-Salesman"]], "Unit Commitment": [[8, "Unit-Commitment"]], "Vertex Cover": [[8, "Vertex-Cover"]], "Learning Solver": [[9, "Learning-Solver"]], "Configuring the solver": [[9, "Configuring-the-solver"]], "Training and solving new instances": [[9, "Training-and-solving-new-instances"], [11, "Training-and-solving-new-instances"]], "Complete example": [[9, "Complete-example"]], "User cuts and lazy constraints": [[11, "User-cuts-and-lazy-constraints"]], "Modeling the traveling salesman problem": [[11, "Modeling-the-traveling-salesman-problem"]], "Learning user cuts": [[11, "Learning-user-cuts"]]}, "indexentries": {"alvlouweh2017extractor (class in miplearn.extractors.alvlouweh2017)": [[0, "miplearn.extractors.AlvLouWeh2017.AlvLouWeh2017Extractor"]], "basiccollector (class in miplearn.collectors.basic)": [[0, "miplearn.collectors.basic.BasicCollector"]], "h5fieldsextractor (class in miplearn.extractors.fields)": [[0, "miplearn.extractors.fields.H5FieldsExtractor"]], "minprobabilityclassifier (class in miplearn.classifiers.minprob)": [[0, "miplearn.classifiers.minprob.MinProbabilityClassifier"]], "singleclassfix (class in miplearn.classifiers.singleclass)": [[0, "miplearn.classifiers.singleclass.SingleClassFix"]], "collect() (miplearn.collectors.basic.basiccollector method)": [[0, "miplearn.collectors.basic.BasicCollector.collect"]], "fit() (miplearn.classifiers.minprob.minprobabilityclassifier method)": [[0, "miplearn.classifiers.minprob.MinProbabilityClassifier.fit"]], "fit() (miplearn.classifiers.singleclass.singleclassfix method)": [[0, "miplearn.classifiers.singleclass.SingleClassFix.fit"]], "get_constr_features() (miplearn.extractors.alvlouweh2017.alvlouweh2017extractor method)": [[0, "miplearn.extractors.AlvLouWeh2017.AlvLouWeh2017Extractor.get_constr_features"]], "get_constr_features() (miplearn.extractors.fields.h5fieldsextractor method)": [[0, "miplearn.extractors.fields.H5FieldsExtractor.get_constr_features"]], "get_instance_features() (miplearn.extractors.alvlouweh2017.alvlouweh2017extractor method)": [[0, "miplearn.extractors.AlvLouWeh2017.AlvLouWeh2017Extractor.get_instance_features"]], "get_instance_features() (miplearn.extractors.fields.h5fieldsextractor method)": [[0, "miplearn.extractors.fields.H5FieldsExtractor.get_instance_features"]], "get_var_features() (miplearn.extractors.alvlouweh2017.alvlouweh2017extractor method)": [[0, "miplearn.extractors.AlvLouWeh2017.AlvLouWeh2017Extractor.get_var_features"]], "get_var_features() (miplearn.extractors.fields.h5fieldsextractor method)": [[0, "miplearn.extractors.fields.H5FieldsExtractor.get_var_features"]], "miplearn.classifiers.minprob": [[0, "module-miplearn.classifiers.minprob"]], "miplearn.classifiers.singleclass": [[0, "module-miplearn.classifiers.singleclass"]], "miplearn.collectors.basic": [[0, "module-miplearn.collectors.basic"]], "miplearn.extractors.alvlouweh2017": [[0, "module-miplearn.extractors.AlvLouWeh2017"]], "miplearn.extractors.fields": [[0, "module-miplearn.extractors.fields"]], "module": [[0, "module-miplearn.classifiers.minprob"], [0, "module-miplearn.classifiers.singleclass"], [0, "module-miplearn.collectors.basic"], [0, "module-miplearn.extractors.AlvLouWeh2017"], [0, 
"module-miplearn.extractors.fields"], [1, "module-miplearn.components.primal.actions"], [1, "module-miplearn.components.primal.expert"], [1, "module-miplearn.components.primal.indep"], [1, "module-miplearn.components.primal.joint"], [1, "module-miplearn.components.primal.mem"], [3, "module-miplearn.problems.binpack"], [3, "module-miplearn.problems.multiknapsack"], [3, "module-miplearn.problems.pmedian"], [3, "module-miplearn.problems.setcover"], [3, "module-miplearn.problems.setpack"], [3, "module-miplearn.problems.stab"], [3, "module-miplearn.problems.tsp"], [3, "module-miplearn.problems.uc"], [3, "module-miplearn.problems.vertexcover"], [4, "module-miplearn.solvers.abstract"], [4, "module-miplearn.solvers.gurobi"], [4, "module-miplearn.solvers.learning"]], "predict() (miplearn.classifiers.minprob.minprobabilityclassifier method)": [[0, "miplearn.classifiers.minprob.MinProbabilityClassifier.predict"]], "predict() (miplearn.classifiers.singleclass.singleclassfix method)": [[0, "miplearn.classifiers.singleclass.SingleClassFix.predict"]], "set_fit_request() (miplearn.classifiers.minprob.minprobabilityclassifier method)": [[0, "miplearn.classifiers.minprob.MinProbabilityClassifier.set_fit_request"]], "set_fit_request() (miplearn.classifiers.singleclass.singleclassfix method)": [[0, "miplearn.classifiers.singleclass.SingleClassFix.set_fit_request"]], "set_predict_request() (miplearn.classifiers.minprob.minprobabilityclassifier method)": [[0, "miplearn.classifiers.minprob.MinProbabilityClassifier.set_predict_request"]], "set_predict_request() (miplearn.classifiers.singleclass.singleclassfix method)": [[0, "miplearn.classifiers.singleclass.SingleClassFix.set_predict_request"]], "enforceproximity (class in miplearn.components.primal.actions)": [[1, "miplearn.components.primal.actions.EnforceProximity"]], "expertprimalcomponent (class in miplearn.components.primal.expert)": [[1, "miplearn.components.primal.expert.ExpertPrimalComponent"]], "fixvariables (class in miplearn.components.primal.actions)": [[1, "miplearn.components.primal.actions.FixVariables"]], "independentvarsprimalcomponent (class in miplearn.components.primal.indep)": [[1, "miplearn.components.primal.indep.IndependentVarsPrimalComponent"]], "jointvarsprimalcomponent (class in miplearn.components.primal.joint)": [[1, "miplearn.components.primal.joint.JointVarsPrimalComponent"]], "memorizingprimalcomponent (class in miplearn.components.primal.mem)": [[1, "miplearn.components.primal.mem.MemorizingPrimalComponent"]], "mergetopsolutions (class in miplearn.components.primal.mem)": [[1, "miplearn.components.primal.mem.MergeTopSolutions"]], "primalcomponentaction (class in miplearn.components.primal.actions)": [[1, "miplearn.components.primal.actions.PrimalComponentAction"]], "selecttopsolutions (class in miplearn.components.primal.mem)": [[1, "miplearn.components.primal.mem.SelectTopSolutions"]], "setwarmstart (class in miplearn.components.primal.actions)": [[1, "miplearn.components.primal.actions.SetWarmStart"]], "solutionconstructor (class in miplearn.components.primal.mem)": [[1, "miplearn.components.primal.mem.SolutionConstructor"]], "before_mip() (miplearn.components.primal.expert.expertprimalcomponent method)": [[1, "miplearn.components.primal.expert.ExpertPrimalComponent.before_mip"]], "before_mip() (miplearn.components.primal.indep.independentvarsprimalcomponent method)": [[1, "miplearn.components.primal.indep.IndependentVarsPrimalComponent.before_mip"]], "before_mip() (miplearn.components.primal.joint.jointvarsprimalcomponent 
method)": [[1, "miplearn.components.primal.joint.JointVarsPrimalComponent.before_mip"]], "before_mip() (miplearn.components.primal.mem.memorizingprimalcomponent method)": [[1, "miplearn.components.primal.mem.MemorizingPrimalComponent.before_mip"]], "construct() (miplearn.components.primal.mem.mergetopsolutions method)": [[1, "miplearn.components.primal.mem.MergeTopSolutions.construct"]], "construct() (miplearn.components.primal.mem.selecttopsolutions method)": [[1, "miplearn.components.primal.mem.SelectTopSolutions.construct"]], "construct() (miplearn.components.primal.mem.solutionconstructor method)": [[1, "miplearn.components.primal.mem.SolutionConstructor.construct"]], "fit() (miplearn.components.primal.expert.expertprimalcomponent method)": [[1, "miplearn.components.primal.expert.ExpertPrimalComponent.fit"]], "fit() (miplearn.components.primal.indep.independentvarsprimalcomponent method)": [[1, "miplearn.components.primal.indep.IndependentVarsPrimalComponent.fit"]], "fit() (miplearn.components.primal.joint.jointvarsprimalcomponent method)": [[1, "miplearn.components.primal.joint.JointVarsPrimalComponent.fit"]], "fit() (miplearn.components.primal.mem.memorizingprimalcomponent method)": [[1, "miplearn.components.primal.mem.MemorizingPrimalComponent.fit"]], "miplearn.components.primal.actions": [[1, "module-miplearn.components.primal.actions"]], "miplearn.components.primal.expert": [[1, "module-miplearn.components.primal.expert"]], "miplearn.components.primal.indep": [[1, "module-miplearn.components.primal.indep"]], "miplearn.components.primal.joint": [[1, "module-miplearn.components.primal.joint"]], "miplearn.components.primal.mem": [[1, "module-miplearn.components.primal.mem"]], "perform() (miplearn.components.primal.actions.enforceproximity method)": [[1, "miplearn.components.primal.actions.EnforceProximity.perform"]], "perform() (miplearn.components.primal.actions.fixvariables method)": [[1, "miplearn.components.primal.actions.FixVariables.perform"]], "perform() (miplearn.components.primal.actions.primalcomponentaction method)": [[1, "miplearn.components.primal.actions.PrimalComponentAction.perform"]], "perform() (miplearn.components.primal.actions.setwarmstart method)": [[1, "miplearn.components.primal.actions.SetWarmStart.perform"]], "binpackdata (class in miplearn.problems.binpack)": [[3, "miplearn.problems.binpack.BinPackData"]], "binpackgenerator (class in miplearn.problems.binpack)": [[3, "miplearn.problems.binpack.BinPackGenerator"]], "maxweightstablesetdata (class in miplearn.problems.stab)": [[3, "miplearn.problems.stab.MaxWeightStableSetData"]], "maxweightstablesetgenerator (class in miplearn.problems.stab)": [[3, "miplearn.problems.stab.MaxWeightStableSetGenerator"]], "minweightvertexcoverdata (class in miplearn.problems.vertexcover)": [[3, "miplearn.problems.vertexcover.MinWeightVertexCoverData"]], "multiknapsackdata (class in miplearn.problems.multiknapsack)": [[3, "miplearn.problems.multiknapsack.MultiKnapsackData"]], "multiknapsackgenerator (class in miplearn.problems.multiknapsack)": [[3, "miplearn.problems.multiknapsack.MultiKnapsackGenerator"]], "pmediandata (class in miplearn.problems.pmedian)": [[3, "miplearn.problems.pmedian.PMedianData"]], "pmediangenerator (class in miplearn.problems.pmedian)": [[3, "miplearn.problems.pmedian.PMedianGenerator"]], "setcoverdata (class in miplearn.problems.setcover)": [[3, "miplearn.problems.setcover.SetCoverData"]], "setpackdata (class in miplearn.problems.setpack)": [[3, "miplearn.problems.setpack.SetPackData"]], 
"travelingsalesmandata (class in miplearn.problems.tsp)": [[3, "miplearn.problems.tsp.TravelingSalesmanData"]], "travelingsalesmangenerator (class in miplearn.problems.tsp)": [[3, "miplearn.problems.tsp.TravelingSalesmanGenerator"]], "unitcommitmentdata (class in miplearn.problems.uc)": [[3, "miplearn.problems.uc.UnitCommitmentData"]], "build_binpack_model_gurobipy() (in module miplearn.problems.binpack)": [[3, "miplearn.problems.binpack.build_binpack_model_gurobipy"]], "build_multiknapsack_model_gurobipy() (in module miplearn.problems.multiknapsack)": [[3, "miplearn.problems.multiknapsack.build_multiknapsack_model_gurobipy"]], "build_pmedian_model_gurobipy() (in module miplearn.problems.pmedian)": [[3, "miplearn.problems.pmedian.build_pmedian_model_gurobipy"]], "build_uc_model_gurobipy() (in module miplearn.problems.uc)": [[3, "miplearn.problems.uc.build_uc_model_gurobipy"]], "generate() (miplearn.problems.binpack.binpackgenerator method)": [[3, "miplearn.problems.binpack.BinPackGenerator.generate"]], "miplearn.problems.binpack": [[3, "module-miplearn.problems.binpack"]], "miplearn.problems.multiknapsack": [[3, "module-miplearn.problems.multiknapsack"]], "miplearn.problems.pmedian": [[3, "module-miplearn.problems.pmedian"]], "miplearn.problems.setcover": [[3, "module-miplearn.problems.setcover"]], "miplearn.problems.setpack": [[3, "module-miplearn.problems.setpack"]], "miplearn.problems.stab": [[3, "module-miplearn.problems.stab"]], "miplearn.problems.tsp": [[3, "module-miplearn.problems.tsp"]], "miplearn.problems.uc": [[3, "module-miplearn.problems.uc"]], "miplearn.problems.vertexcover": [[3, "module-miplearn.problems.vertexcover"]], "abstractmodel (class in miplearn.solvers.abstract)": [[4, "miplearn.solvers.abstract.AbstractModel"]], "gurobimodel (class in miplearn.solvers.gurobi)": [[4, "miplearn.solvers.gurobi.GurobiModel"]], "learningsolver (class in miplearn.solvers.learning)": [[4, "miplearn.solvers.learning.LearningSolver"]], "where_cuts (miplearn.solvers.abstract.abstractmodel attribute)": [[4, "miplearn.solvers.abstract.AbstractModel.WHERE_CUTS"]], "where_default (miplearn.solvers.abstract.abstractmodel attribute)": [[4, "miplearn.solvers.abstract.AbstractModel.WHERE_DEFAULT"]], "where_lazy (miplearn.solvers.abstract.abstractmodel attribute)": [[4, "miplearn.solvers.abstract.AbstractModel.WHERE_LAZY"]], "add_constr() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.add_constr"]], "add_constrs() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.add_constrs"]], "add_constrs() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.add_constrs"]], "extract_after_load() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.extract_after_load"]], "extract_after_load() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.extract_after_load"]], "extract_after_lp() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.extract_after_lp"]], "extract_after_lp() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.extract_after_lp"]], "extract_after_mip() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.extract_after_mip"]], "extract_after_mip() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.extract_after_mip"]], "fit() 
(miplearn.solvers.learning.learningsolver method)": [[4, "miplearn.solvers.learning.LearningSolver.fit"]], "fix_variables() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.fix_variables"]], "fix_variables() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.fix_variables"]], "lazy_enforce() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.lazy_enforce"]], "miplearn.solvers.abstract": [[4, "module-miplearn.solvers.abstract"]], "miplearn.solvers.gurobi": [[4, "module-miplearn.solvers.gurobi"]], "miplearn.solvers.learning": [[4, "module-miplearn.solvers.learning"]], "optimize() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.optimize"]], "optimize() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.optimize"]], "optimize() (miplearn.solvers.learning.learningsolver method)": [[4, "miplearn.solvers.learning.LearningSolver.optimize"]], "relax() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.relax"]], "relax() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.relax"]], "set_cuts() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.set_cuts"]], "set_time_limit() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.set_time_limit"]], "set_warm_starts() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.set_warm_starts"]], "set_warm_starts() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.set_warm_starts"]], "write() (miplearn.solvers.abstract.abstractmodel method)": [[4, "miplearn.solvers.abstract.AbstractModel.write"]], "write() (miplearn.solvers.gurobi.gurobimodel method)": [[4, "miplearn.solvers.gurobi.GurobiModel.write"]]}}) \ No newline at end of file diff --git a/0.4/tutorials/cuts-gurobipy.ipynb b/0.4/tutorials/cuts-gurobipy.ipynb new file mode 100644 index 00000000..ffdc13db --- /dev/null +++ b/0.4/tutorials/cuts-gurobipy.ipynb @@ -0,0 +1,541 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "b4bd8bd6-3ce9-4932-852f-f98a44120a3e", + "metadata": {}, + "source": [ + "# User cuts and lazy constraints\n", + "\n", + "User cuts and lazy constraints are two advanced mixed-integer programming techniques that can accelerate solver performance. User cuts are additional constraints, derived from the constraints already in the model, that can tighten the feasible region and eliminate fractional solutions, thus reducing the size of the branch-and-bound tree. Lazy constraints, on the other hand, are constraints that are potentially part of the problem formulation but are omitted from the initial model to reduce its size; these constraints are added to the formulation only once the solver finds a solution that violates them. While both techniques have been successful, significant computational effort may still be required to generate strong user cuts and to identify violated lazy constraints, which can reduce their effectiveness.\n", + "\n", + "MIPLearn is able to predict which user cuts and which lazy constraints to enforce at the beginning of the optimization process, using machine learning. 
In this tutorial, we will use the framework to predict subtour elimination constraints for the **traveling salesman problem** using Gurobipy. We assume that MIPLearn has already been correctly installed.\n", + "\n", + "
\n", + "\n", + "Solver Compatibility\n", + "\n", + "User cuts and lazy constraints are also supported in the Python/Pyomo and Julia/JuMP versions of the package. See the source code of build_tsp_model_pyomo and build_tsp_model_jump for more details. Note, however, the following limitations:\n", + "\n", + "- Python/Pyomo: Only `gurobi_persistent` is currently supported. PRs implementing callbacks for other persistent solvers are welcome.\n", + "- Julia/JuMP: Only solvers supporting solver-independent callbacks are supported. As of JuMP 1.19, this includes Gurobi, CPLEX, XPRESS, SCIP and GLPK. Note that HiGHS and Cbc are not supported. As newer versions of JuMP implement further callback support, MIPLearn should become automatically compatible with these solvers.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "72229e1f-cbd8-43f0-82ee-17d6ec9c3b7d", + "metadata": {}, + "source": [ + "## Modeling the traveling salesman problem\n", + "\n", + "Given a list of cities and the distances between them, the **traveling salesman problem (TSP)** asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes.\n", + "\n", + "To describe an instance of TSP, we need to specify the number of cities $n$, and an $n \\times n$ matrix of distances. The class `TravelingSalesmanData`, in the `miplearn.problems.tsp` package, can hold this data:" + ] + }, + { + "cell_type": "markdown", + "id": "4598a1bc-55b6-48cc-a050-2262786c203a", + "metadata": {}, + "source": [ + "```python\n", + "@dataclass\r\n", + "class TravelingSalesmanData:\r\n", + " n_cities: int\r\n", + " distances: np.ndarray\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "3a43cc12-1207-4247-bdb2-69a6a2910738", + "metadata": {}, + "source": [ + "MIPLearn also provides `TravelingSalesmandGenerator`, a random generator for TSP instances, and `build_tsp_model_gurobipy`, a function which converts `TravelingSalesmanData` into an actual gurobipy optimization model, and which uses lazy constraints to enforce subtour elimination.\n", + "\n", + "The example below is a simplified and annotated version of `build_tsp_model_gurobipy`, illustrating the usage of callbacks with MIPLearn. Compared the the previous tutorial examples, note that, in addition to defining the variables, objective function and constraints of our problem, we also define two callback functions `lazy_separate` and `lazy_enforce`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "e4712a85-0327-439c-8889-933e1ff714e7", + "metadata": {}, + "outputs": [], + "source": [ + "import gurobipy as gp\n", + "from gurobipy import quicksum, GRB, tuplelist\n", + "from miplearn.solvers.gurobi import GurobiModel\n", + "import networkx as nx\n", + "import numpy as np\n", + "from miplearn.problems.tsp import (\n", + " TravelingSalesmanData,\n", + " TravelingSalesmanGenerator,\n", + ")\n", + "from scipy.stats import uniform, randint\n", + "from miplearn.io import write_pkl_gz, read_pkl_gz\n", + "from miplearn.collectors.basic import BasicCollector\n", + "from miplearn.solvers.learning import LearningSolver\n", + "from miplearn.components.lazy.mem import MemorizingLazyComponent\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "from sklearn.neighbors import KNeighborsClassifier\n", + "\n", + "# Set up random seed to make example more reproducible\n", + "np.random.seed(42)\n", + "\n", + "# Set up Python logging\n", + "import logging\n", + "\n", + "logging.basicConfig(level=logging.WARNING)\n", + "\n", + "\n", + "def build_tsp_model_gurobipy_simplified(data):\n", + " # Read data from file if a filename is provided\n", + " if isinstance(data, str):\n", + " data = read_pkl_gz(data)\n", + "\n", + " # Create empty gurobipy model\n", + " model = gp.Model()\n", + "\n", + " # Create set of edges between every pair of cities, for convenience\n", + " edges = tuplelist(\n", + " (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)\n", + " )\n", + "\n", + " # Add binary variable x[e] for each edge e\n", + " x = model.addVars(edges, vtype=GRB.BINARY, name=\"x\")\n", + "\n", + " # Add objective function\n", + " model.setObjective(quicksum(x[(i, j)] * data.distances[i, j] for (i, j) in edges))\n", + "\n", + " # Add constraint: must choose two edges adjacent to each city\n", + " model.addConstrs(\n", + " (\n", + " quicksum(x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)\n", + " == 2\n", + " for i in range(data.n_cities)\n", + " ),\n", + " name=\"eq_degree\",\n", + " )\n", + "\n", + " def lazy_separate(m: GurobiModel):\n", + " \"\"\"\n", + " Callback function that finds subtours in the current solution.\n", + " \"\"\"\n", + " # Query current value of the x variables\n", + " x_val = m.inner.cbGetSolution(x)\n", + "\n", + " # Initialize empty set of violations\n", + " violations = []\n", + "\n", + " # Build set of edges we have currently selected\n", + " selected_edges = [e for e in edges if x_val[e] > 0.5]\n", + "\n", + " # Build a graph containing the selected edges, using networkx\n", + " graph = nx.Graph()\n", + " graph.add_edges_from(selected_edges)\n", + "\n", + " # For each component of the graph\n", + " for component in list(nx.connected_components(graph)):\n", + "\n", + " # If the component is not the entire graph, we found a\n", + " # subtour. 
Add the edge cut to the list of violations.\n", + " if len(component) < data.n_cities:\n", + " cut_edges = [\n", + " [e[0], e[1]]\n", + " for e in edges\n", + " if (e[0] in component and e[1] not in component)\n", + " or (e[0] not in component and e[1] in component)\n", + " ]\n", + " violations.append(cut_edges)\n", + "\n", + " # Return the list of violations\n", + " return violations\n", + "\n", + " def lazy_enforce(m: GurobiModel, violations) -> None:\n", + " \"\"\"\n", + " Callback function that, given a list of subtours, adds lazy\n", + " constraints to remove them from the feasible region.\n", + " \"\"\"\n", + " print(f\"Enforcing {len(violations)} subtour elimination constraints\")\n", + " for violation in violations:\n", + " m.add_constr(quicksum(x[e[0], e[1]] for e in violation) >= 2)\n", + "\n", + " return GurobiModel(\n", + " model,\n", + " lazy_separate=lazy_separate,\n", + " lazy_enforce=lazy_enforce,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "id": "58875042-d6ac-4f93-b3cc-9a5822b11dad", + "metadata": {}, + "source": [ + "The `lazy_separate` function starts by querying the current fractional solution value through `m.inner.cbGetSolution` (recall that `m.inner` is a regular gurobipy model), then finds the set of violated lazy constraints. Unlike a regular lazy constraint solver callback, note that `lazy_separate` does not add the violated constraints to the model; it simply returns a list of objects that uniquely identifies the set of lazy constraints that should be generated. Enforcing the constraints is the responsbility of the second callback function, `lazy_enforce`. This function takes as input the model and the list of violations found by `lazy_separate`, converts them into actual constraints, and adds them to the model through `m.add_constr`.\n", + "\n", + "During training data generation, MIPLearn calls `lazy_separate` and `lazy_enforce` in sequence, inside a regular solver callback. However, once the machine learning models are trained, MIPLearn calls `lazy_enforce` directly, before the optimization process starts, with a list of **predicted** violations, as we will see in the example below." + ] + }, + { + "cell_type": "markdown", + "id": "5839728e-406c-4be2-ba81-83f2b873d4b2", + "metadata": {}, + "source": [ + "
\n", + "\n", + "Constraint Representation\n", + "\n", + "How should user cuts and lazy constraints be represented is a decision that the user can make; MIPLearn is representation agnostic. The objects returned by `lazy_separate`, however, are serialized as JSON and stored in the HDF5 training data files. Therefore, it is recommended to use only simple objects, such as lists, tuples and dictionaries.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "847ae32e-fad7-406a-8797-0d79065a07fd", + "metadata": {}, + "source": [ + "## Generating training data\n", + "\n", + "To test the callback defined above, we generate a small set of TSP instances, using the provided random instance generator. As in the previous tutorial, we generate some test instances and some training instances, then solve them using `BasicCollector`. Input problem data is stored in `tsp/train/00000.pkl.gz, ...`, whereas solver training data (including list of required lazy constraints) is stored in `tsp/train/00000.h5, ...`." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "eb63154a-1fa6-4eac-aa46-6838b9c201f6", + "metadata": {}, + "outputs": [], + "source": [ + "# Configure generator to produce instances with 50 cities located\n", + "# in the 1000 x 1000 square, and with slightly perturbed distances.\n", + "gen = TravelingSalesmanGenerator(\n", + " x=uniform(loc=0.0, scale=1000.0),\n", + " y=uniform(loc=0.0, scale=1000.0),\n", + " n=randint(low=50, high=51),\n", + " gamma=uniform(loc=1.0, scale=0.25),\n", + " fix_cities=True,\n", + " round=True,\n", + ")\n", + "\n", + "# Generate 500 instances and store input data file to .pkl.gz files\n", + "data = gen.generate(500)\n", + "train_data = write_pkl_gz(data[0:450], \"tsp/train\")\n", + "test_data = write_pkl_gz(data[450:500], \"tsp/test\")\n", + "\n", + "# Solve the training instances in parallel, collecting the required lazy\n", + "# constraints, in addition to other information, such as optimal solution.\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_tsp_model_gurobipy_simplified, n_jobs=10)" + ] + }, + { + "cell_type": "markdown", + "id": "6903c26c-dbe0-4a2e-bced-fdbf93513dde", + "metadata": {}, + "source": [ + "## Training and solving new instances" + ] + }, + { + "cell_type": "markdown", + "id": "57cd724a-2d27-4698-a1e6-9ab8345ef31f", + "metadata": {}, + "source": [ + "After producing the training dataset, we can train the machine learning models to predict which lazy constraints are necessary. In this tutorial, we use the following ML strategy: given a new instance, find the 50 most similar ones in the training dataset and verify how often each lazy constraint was required. If a lazy constraint was required for the majority of the 50 most-similar instances, enforce it ahead-of-time for the current instance. To measure instance similarity, use the objective function only. This ML strategy can be implemented using `MemorizingLazyComponent` with `H5FieldsExtractor` and `KNeighborsClassifier`, as shown below." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "43779e3d-4174-4189-bc75-9f564910e212", + "metadata": {}, + "outputs": [], + "source": [ + "solver = LearningSolver(\n", + " components=[\n", + " MemorizingLazyComponent(\n", + " extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n", + " clf=KNeighborsClassifier(n_neighbors=100),\n", + " ),\n", + " ],\n", + ")\n", + "solver.fit(train_data)" + ] + }, + { + "cell_type": "markdown", + "id": "12480712-9d3d-4cbc-a6d7-d6c1e2f950f4", + "metadata": {}, + "source": [ + "Next, we solve one of the test instances using the trained solver. In the run below, we can see that MIPLearn adds many lazy constraints ahead-of-time, before the optimization starts. During the optimization process itself, some additional lazy constraints are required, but very few." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "23f904ad-f1a8-4b5a-81ae-c0b9e813a4b2", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Set parameter Threads to value 1\n", + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n", + "\n", + "Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n", + "Model fingerprint: 0x04d7bec1\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [1e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Presolve time: 0.00s\n", + "Presolved: 50 rows, 1225 columns, 2450 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. Time\n", + " 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n", + " 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 66 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 5.588000000e+03\n", + "\n", + "User-callback calls 107, time in user-callback 0.00 sec\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:miplearn.components.cuts.mem:Predicting violated lazy constraints...\n", + "INFO:miplearn.components.lazy.mem:Enforcing 19 constraints ahead-of-time...\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Enforcing 19 subtour elimination constraints\n", + "Set parameter PreCrush to value 1\n", + "Set parameter LazyConstraints to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n", + "\n", + "Optimize a model with 69 rows, 1225 columns and 6091 nonzeros\n", + "Model fingerprint: 0x09bd34d6\n", + "Variable types: 0 continuous, 1225 integer (1225 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [1e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Found heuristic solution: objective 29853.000000\n", + "Presolve time: 0.00s\n", + "Presolved: 69 rows, 1225 columns, 6091 nonzeros\n", + "Variable types: 0 continuous, 1225 integer (1225 binary)\n", + "\n", + "Root relaxation: objective 6.139000e+03, 93 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 6139.00000 0 6 29853.0000 6139.00000 79.4% - 0s\n", + "H 0 0 6390.0000000 6139.00000 3.93% - 0s\n", + " 0 0 6165.50000 0 10 6390.00000 6165.50000 3.51% - 0s\n", + "Enforcing 3 subtour elimination constraints\n", + " 0 0 6165.50000 0 6 6390.00000 6165.50000 3.51% - 0s\n", + " 0 0 6198.50000 0 16 6390.00000 6198.50000 3.00% - 0s\n", + "* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 11\n", + " MIR: 1\n", + " Zero half: 4\n", + " Lazy constraints: 3\n", + "\n", + "Explored 1 nodes (222 simplex iterations) in 0.03 seconds (0.02 work units)\n", + "Thread count was 1 (of 20 available processors)\n", + "\n", + "Solution count 3: 6219 6390 29853 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", 
+ "Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 141, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "# Increase log verbosity, so that we can see what is MIPLearn doing\n", + "logging.getLogger(\"miplearn\").setLevel(logging.INFO)\n", + "\n", + "# Solve a new test instance\n", + "solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);" + ] + }, + { + "cell_type": "markdown", + "id": "79cc3e61-ee2b-4f18-82cb-373d55d67de6", + "metadata": {}, + "source": [ + "Finally, we solve the same instance, but using a regular solver, without ML prediction. We can see that a much larger number of lazy constraints are added during the optimization process itself. Additionally, the solver requires a larger number of iterations to find the optimal solution. There is not a significant difference in running time because of the small size of these instances." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "a015c51c-091a-43b6-b761-9f3577fc083e", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n", + "\n", + "Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n", + "Model fingerprint: 0x04d7bec1\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [1e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Presolve time: 0.00s\n", + "Presolved: 50 rows, 1225 columns, 2450 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n", + " 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 66 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 5.588000000e+03\n", + "\n", + "User-callback calls 107, time in user-callback 0.00 sec\n", + "Set parameter PreCrush to value 1\n", + "Set parameter LazyConstraints to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n", + "\n", + "Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n", + "Model fingerprint: 0x77a94572\n", + "Variable types: 0 continuous, 1225 integer (1225 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+00]\n", + " Objective range [1e+01, 1e+03]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [2e+00, 2e+00]\n", + "Found heuristic solution: objective 29695.000000\n", + "Presolve time: 0.00s\n", + "Presolved: 50 rows, 1225 columns, 2450 nonzeros\n", + "Variable types: 0 continuous, 1225 integer (1225 binary)\n", + "\n", + "Root relaxation: objective 5.588000e+03, 68 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 5588.00000 0 12 29695.0000 5588.00000 81.2% - 0s\n", + "Enforcing 9 subtour elimination constraints\n", + "Enforcing 11 subtour elimination constraints\n", + "H 0 0 27241.000000 5588.00000 79.5% - 0s\n", + " 0 0 5898.00000 0 8 27241.0000 5898.00000 78.3% - 0s\n", + "Enforcing 4 subtour elimination constraints\n", + "Enforcing 3 subtour elimination constraints\n", + " 0 0 6066.00000 0 - 27241.0000 6066.00000 77.7% - 0s\n", + "Enforcing 2 subtour elimination constraints\n", + " 0 0 6128.00000 0 - 27241.0000 6128.00000 77.5% - 0s\n", + " 0 0 6139.00000 0 6 27241.0000 6139.00000 77.5% - 0s\n", + "H 0 0 6368.0000000 6139.00000 3.60% - 0s\n", + " 0 0 6154.75000 0 15 6368.00000 6154.75000 3.35% - 0s\n", + "Enforcing 2 subtour elimination constraints\n", + " 0 0 6154.75000 0 6 6368.00000 6154.75000 3.35% - 0s\n", + " 0 0 6165.75000 0 11 6368.00000 6165.75000 3.18% - 0s\n", + "Enforcing 3 subtour elimination constraints\n", + " 0 0 6204.00000 0 6 6368.00000 6204.00000 2.58% - 0s\n", + "* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 5\n", + " MIR: 1\n", + " Zero half: 4\n", + " Lazy constraints: 4\n", + "\n", + "Explored 1 nodes (224 simplex iterations) in 0.10 seconds (0.03 work units)\n", + "Thread count was 1 (of 20 available processors)\n", + "\n", + "Solution count 4: 6219 6368 27241 29695 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 170, time in user-callback 0.01 sec\n" + ] + } + ], + "source": [ + "solver = LearningSolver(components=[]) # empty set of ML components\n", + "solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);" + ] + }, + { + "cell_type": "markdown", + "id": "432c99b2-67fe-409b-8224-ccef91de96d1", + "metadata": {}, + "source": [ + "## Learning user cuts\n", + "\n", + "The example above focused on lazy constraints. 
To enforce user cuts instead, the procedure is very similar, with the following changes:\n", + "\n", + "- Instead of `lazy_separate` and `lazy_enforce`, use `cuts_separate` and `cuts_enforce`\n", + "- Instead of `m.inner.cbGetSolution`, use `m.inner.cbGetNodeRel`\n", + "\n", + "For a complete example, see `build_stab_model_gurobipy`, `build_stab_model_pyomo` and `build_stab_model_jump`, which solves the maximum-weight stable set problem using user cut callbacks." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e6cb694d-8c43-410f-9a13-01bf9e0763b7", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/tutorials/cuts-gurobipy/index.html b/0.4/tutorials/cuts-gurobipy/index.html new file mode 100644 index 00000000..a3393772 --- /dev/null +++ b/0.4/tutorials/cuts-gurobipy/index.html @@ -0,0 +1,714 @@ + + + + + + + + 4. User cuts and lazy constraints — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ +
+
+ +
+ + + + + + + + + + + + + + +
+ + + +
+
+
+
+ +
+ +
+

4. User cuts and lazy constraints

+

User cuts and lazy constraints are two advanced mixed-integer programming techniques that can accelerate solver performance. User cuts are additional constraints, derived from the constraints already in the model, that can tighten the feasible region and eliminate fractional solutions, thus reducing the size of the branch-and-bound tree. Lazy constraints, on the other hand, are constraints that are potentially part of the problem formulation but are omitted from the initial model to reduce its size; these constraints are added to the formulation only once the solver finds a solution that violates them. While both techniques have been successful, significant computational effort may still be required to generate strong user cuts and to identify violated lazy constraints, which can reduce their effectiveness.

+

MIPLearn is able to predict which user cuts and which lazy constraints to enforce at the beginning of the optimization process, using machine learning. In this tutorial, we will use the framework to predict subtour elimination constraints for the traveling salesman problem using Gurobipy. We assume that MIPLearn has already been correctly installed.

+
+

Solver Compatibility

+

User cuts and lazy constraints are also supported in the Python/Pyomo and Julia/JuMP versions of the package. See the source code of build_tsp_model_pyomo and build_tsp_model_jump for more details. Note, however, the following limitations:

+
    +
  • Python/Pyomo: Only gurobi_persistent is currently supported. PRs implementing callbacks for other persistent solvers are welcome.

  • +
  • Julia/JuMP: Only solvers supporting solver-independent callbacks are supported. As of JuMP 1.19, this includes Gurobi, CPLEX, XPRESS, SCIP and GLPK. Note that HiGHS and Cbc are not supported. As newer versions of JuMP implement further callback support, MIPLearn should become automatically compatible with these solvers.

  • +
+
+
+

4.1. Modeling the traveling salesman problem

+

Given a list of cities and the distances between them, the traveling salesman problem (TSP) asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp’s 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes.

+

To describe an instance of TSP, we need to specify the number of cities \(n\), and an \(n \times n\) matrix of distances. The class TravelingSalesmanData, in the miplearn.problems.tsp package, can hold this data:

+
@dataclass
+class TravelingSalesmanData:
+    n_cities: int
+    distances: np.ndarray
+
+
+

MIPLearn also provides TravelingSalesmanGenerator, a random generator for TSP instances, and build_tsp_model_gurobipy, a function which converts TravelingSalesmanData into an actual gurobipy optimization model, and which uses lazy constraints to enforce subtour elimination.

+

The example below is a simplified and annotated version of build_tsp_model_gurobipy, illustrating the usage of callbacks with MIPLearn. Compared to the previous tutorial examples, note that, in addition to defining the variables, objective function and constraints of our problem, we also define two callback functions, lazy_separate and lazy_enforce.

+
+
[1]:
+
+
+
import gurobipy as gp
+from gurobipy import quicksum, GRB, tuplelist
+from miplearn.solvers.gurobi import GurobiModel
+import networkx as nx
+import numpy as np
+from miplearn.problems.tsp import (
+    TravelingSalesmanData,
+    TravelingSalesmanGenerator,
+)
+from scipy.stats import uniform, randint
+from miplearn.io import write_pkl_gz, read_pkl_gz
+from miplearn.collectors.basic import BasicCollector
+from miplearn.solvers.learning import LearningSolver
+from miplearn.components.lazy.mem import MemorizingLazyComponent
+from miplearn.extractors.fields import H5FieldsExtractor
+from sklearn.neighbors import KNeighborsClassifier
+
+# Set up random seed to make example more reproducible
+np.random.seed(42)
+
+# Set up Python logging
+import logging
+
+logging.basicConfig(level=logging.WARNING)
+
+
+def build_tsp_model_gurobipy_simplified(data):
+    # Read data from file if a filename is provided
+    if isinstance(data, str):
+        data = read_pkl_gz(data)
+
+    # Create empty gurobipy model
+    model = gp.Model()
+
+    # Create set of edges between every pair of cities, for convenience
+    edges = tuplelist(
+        (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)
+    )
+
+    # Add binary variable x[e] for each edge e
+    x = model.addVars(edges, vtype=GRB.BINARY, name="x")
+
+    # Add objective function
+    model.setObjective(quicksum(x[(i, j)] * data.distances[i, j] for (i, j) in edges))
+
+    # Add constraint: must choose two edges adjacent to each city
+    model.addConstrs(
+        (
+            quicksum(x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)
+            == 2
+            for i in range(data.n_cities)
+        ),
+        name="eq_degree",
+    )
+
+    def lazy_separate(m: GurobiModel):
+        """
+        Callback function that finds subtours in the current solution.
+        """
+        # Query current value of the x variables
+        x_val = m.inner.cbGetSolution(x)
+
+        # Initialize empty set of violations
+        violations = []
+
+        # Build set of edges we have currently selected
+        selected_edges = [e for e in edges if x_val[e] > 0.5]
+
+        # Build a graph containing the selected edges, using networkx
+        graph = nx.Graph()
+        graph.add_edges_from(selected_edges)
+
+        # For each component of the graph
+        for component in list(nx.connected_components(graph)):
+
+            # If the component is not the entire graph, we found a
+            # subtour. Add the edge cut to the list of violations.
+            if len(component) < data.n_cities:
+                cut_edges = [
+                    [e[0], e[1]]
+                    for e in edges
+                    if (e[0] in component and e[1] not in component)
+                    or (e[0] not in component and e[1] in component)
+                ]
+                violations.append(cut_edges)
+
+        # Return the list of violations
+        return violations
+
+    def lazy_enforce(m: GurobiModel, violations) -> None:
+        """
+        Callback function that, given a list of subtours, adds lazy
+        constraints to remove them from the feasible region.
+        """
+        print(f"Enforcing {len(violations)} subtour elimination constraints")
+        for violation in violations:
+            m.add_constr(quicksum(x[e[0], e[1]] for e in violation) >= 2)
+
+    return GurobiModel(
+        model,
+        lazy_separate=lazy_separate,
+        lazy_enforce=lazy_enforce,
+    )
+
+
+
+

The lazy_separate function starts by querying the values of the current candidate solution through m.inner.cbGetSolution (recall that m.inner is a regular gurobipy model), then finds the set of violated lazy constraints. Unlike a regular lazy constraint solver callback, note that lazy_separate does not add the violated constraints to the model; it simply returns a list of objects that uniquely identifies the set of lazy constraints that should be generated. Enforcing the constraints is the responsibility of the second callback function, lazy_enforce. This function takes as input the model and the list of violations found by lazy_separate, converts them into actual constraints, and adds them to the model through m.add_constr.

+

During training data generation, MIPLearn calls lazy_separate and lazy_enforce in sequence, inside a regular solver callback. However, once the machine learning models are trained, MIPLearn calls lazy_enforce directly, before the optimization process starts, with a list of predicted violations, as we will see in the example below.

+
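To make this call sequence concrete, the toy illustration below uses hypothetical stand-in functions (this is not MIPLearn's internal code) to show the order in which the two callbacks are invoked in each mode:

# Hypothetical stand-ins for the callbacks defined above, used only to
# illustrate the calling order described in the text.
def lazy_separate_stub(model):
    # Pretend we found two violated subtours.
    return [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]

def lazy_enforce_stub(model, violations):
    print(f"Enforcing {len(violations)} constraints")

model = None  # placeholder for a GurobiModel

# Training data collection: both callbacks run inside a solver callback,
# and the violations found are also recorded in the training data files.
violations = lazy_separate_stub(model)
lazy_enforce_stub(model, violations)

# Trained solver: only lazy_enforce runs, before the optimization starts,
# fed with the violations predicted by the machine learning component.
predicted_violations = [[[0, 1], [2, 3]]]
lazy_enforce_stub(model, predicted_violations)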
+

Constraint Representation

+

How user cuts and lazy constraints should be represented is entirely up to the user; MIPLearn is representation-agnostic. The objects returned by lazy_separate, however, are serialized as JSON and stored in the HDF5 training data files. Therefore, it is recommended to use only simple objects, such as lists, tuples and dictionaries; see the small example after this note.

+
+
+
+
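For instance (a small hypothetical illustration, not part of the MIPLearn API), the subtour violations returned by lazy_separate in this tutorial are plain lists of [i, j] edge pairs, which survive a JSON round-trip without any custom serialization code:

import json

# One violation = one subtour elimination cut, encoded as a plain list of
# [i, j] edge pairs; simple nested lists of integers are JSON-friendly.
violation = [[0, 3], [0, 7], [2, 5]]
assert json.loads(json.dumps(violation)) == violation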

4.2. Generating training data

+

To test the callbacks defined above, we generate a small set of TSP instances, using the provided random instance generator. As in the previous tutorial, we generate some test instances and some training instances, then solve them using BasicCollector. Input problem data is stored in tsp/train/00000.pkl.gz, ..., whereas solver training data (including the list of required lazy constraints) is stored in tsp/train/00000.h5, ....

+
+
[2]:
+
+
+
# Configure generator to produce instances with 50 cities located
+# in the 1000 x 1000 square, and with slightly perturbed distances.
+gen = TravelingSalesmanGenerator(
+    x=uniform(loc=0.0, scale=1000.0),
+    y=uniform(loc=0.0, scale=1000.0),
+    n=randint(low=50, high=51),
+    gamma=uniform(loc=1.0, scale=0.25),
+    fix_cities=True,
+    round=True,
+)
+
+# Generate 500 instances and store input data file to .pkl.gz files
+data = gen.generate(500)
+train_data = write_pkl_gz(data[0:450], "tsp/train")
+test_data = write_pkl_gz(data[450:500], "tsp/test")
+
+# Solve the training instances in parallel, collecting the required lazy
+# constraints, in addition to other information, such as optimal solution.
+bc = BasicCollector()
+bc.collect(train_data, build_tsp_model_gurobipy_simplified, n_jobs=10)
+
+
+
+
+
+

4.3. Training and solving new instances

+

After producing the training dataset, we can train the machine learning models to predict which lazy constraints are necessary. In this tutorial, we use the following ML strategy: given a new instance, find the 50 most similar ones in the training dataset and verify how often each lazy constraint was required. If a lazy constraint was required for the majority of the 50 most-similar instances, enforce it ahead-of-time for the current instance. To measure instance similarity, use the objective function only. This ML strategy can be implemented using MemorizingLazyComponent with H5FieldsExtractor and KNeighborsClassifier, as shown below.

+
+
[3]:
+
+
+
solver = LearningSolver(
+    components=[
+        MemorizingLazyComponent(
+            extractor=H5FieldsExtractor(instance_fields=["static_var_obj_coeffs"]),
+            clf=KNeighborsClassifier(n_neighbors=100),
+        ),
+    ],
+)
+solver.fit(train_data)
+
+
+
+

Next, we solve one of the test instances using the trained solver. In the run below, we can see that MIPLearn adds many lazy constraints ahead-of-time, before the optimization starts. During the optimization process itself, some additional lazy constraints are required, but very few.

+
+
[4]:
+
+
+
# Increase log verbosity, so that we can see what is MIPLearn doing
+logging.getLogger("miplearn").setLevel(logging.INFO)
+
+# Solve a new test instance
+solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);
+
+
+
+
+
+
+
+
+Set parameter Threads to value 1
+Restricted license - for non-production use only - expires 2024-10-28
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 1 threads
+
+Optimize a model with 50 rows, 1225 columns and 2450 nonzeros
+Model fingerprint: 0x04d7bec1
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [1e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [2e+00, 2e+00]
+Presolve time: 0.00s
+Presolved: 50 rows, 1225 columns, 2450 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    4.0600000e+02   9.700000e+01   0.000000e+00      0s
+      66    5.5880000e+03   0.000000e+00   0.000000e+00      0s
+
+Solved in 66 iterations and 0.01 seconds (0.00 work units)
+Optimal objective  5.588000000e+03
+
+User-callback calls 107, time in user-callback 0.00 sec
+
+
+
+
+
+
+
+INFO:miplearn.components.cuts.mem:Predicting violated lazy constraints...
+INFO:miplearn.components.lazy.mem:Enforcing 19 constraints ahead-of-time...
+
+
+
+
+
+
+
+Enforcing 19 subtour elimination constraints
+Set parameter PreCrush to value 1
+Set parameter LazyConstraints to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 1 threads
+
+Optimize a model with 69 rows, 1225 columns and 6091 nonzeros
+Model fingerprint: 0x09bd34d6
+Variable types: 0 continuous, 1225 integer (1225 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [1e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [2e+00, 2e+00]
+Found heuristic solution: objective 29853.000000
+Presolve time: 0.00s
+Presolved: 69 rows, 1225 columns, 6091 nonzeros
+Variable types: 0 continuous, 1225 integer (1225 binary)
+
+Root relaxation: objective 6.139000e+03, 93 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 6139.00000    0    6 29853.0000 6139.00000  79.4%     -    0s
+H    0     0                    6390.0000000 6139.00000  3.93%     -    0s
+     0     0 6165.50000    0   10 6390.00000 6165.50000  3.51%     -    0s
+Enforcing 3 subtour elimination constraints
+     0     0 6165.50000    0    6 6390.00000 6165.50000  3.51%     -    0s
+     0     0 6198.50000    0   16 6390.00000 6198.50000  3.00%     -    0s
+*    0     0               0    6219.0000000 6219.00000  0.00%     -    0s
+
+Cutting planes:
+  Gomory: 11
+  MIR: 1
+  Zero half: 4
+  Lazy constraints: 3
+
+Explored 1 nodes (222 simplex iterations) in 0.03 seconds (0.02 work units)
+Thread count was 1 (of 20 available processors)
+
+Solution count 3: 6219 6390 29853
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%
+
+User-callback calls 141, time in user-callback 0.00 sec
+
+
+

Finally, we solve the same instance, but using a regular solver, without ML prediction. We can see that a much larger number of lazy constraints are added during the optimization process itself. Additionally, the solver requires a larger number of iterations to find the optimal solution. There is not a significant difference in running time because of the small size of these instances.

+
+
[5]:
+
+
+
solver = LearningSolver(components=[])  # empty set of ML components
+solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);
+
+
+
+
+
+
+
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 1 threads
+
+Optimize a model with 50 rows, 1225 columns and 2450 nonzeros
+Model fingerprint: 0x04d7bec1
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [1e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [2e+00, 2e+00]
+Presolve time: 0.00s
+Presolved: 50 rows, 1225 columns, 2450 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    4.0600000e+02   9.700000e+01   0.000000e+00      0s
+      66    5.5880000e+03   0.000000e+00   0.000000e+00      0s
+
+Solved in 66 iterations and 0.01 seconds (0.00 work units)
+Optimal objective  5.588000000e+03
+
+User-callback calls 107, time in user-callback 0.00 sec
+Set parameter PreCrush to value 1
+Set parameter LazyConstraints to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 1 threads
+
+Optimize a model with 50 rows, 1225 columns and 2450 nonzeros
+Model fingerprint: 0x77a94572
+Variable types: 0 continuous, 1225 integer (1225 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+00]
+  Objective range  [1e+01, 1e+03]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [2e+00, 2e+00]
+Found heuristic solution: objective 29695.000000
+Presolve time: 0.00s
+Presolved: 50 rows, 1225 columns, 2450 nonzeros
+Variable types: 0 continuous, 1225 integer (1225 binary)
+
+Root relaxation: objective 5.588000e+03, 68 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 5588.00000    0   12 29695.0000 5588.00000  81.2%     -    0s
+Enforcing 9 subtour elimination constraints
+Enforcing 11 subtour elimination constraints
+H    0     0                    27241.000000 5588.00000  79.5%     -    0s
+     0     0 5898.00000    0    8 27241.0000 5898.00000  78.3%     -    0s
+Enforcing 4 subtour elimination constraints
+Enforcing 3 subtour elimination constraints
+     0     0 6066.00000    0    - 27241.0000 6066.00000  77.7%     -    0s
+Enforcing 2 subtour elimination constraints
+     0     0 6128.00000    0    - 27241.0000 6128.00000  77.5%     -    0s
+     0     0 6139.00000    0    6 27241.0000 6139.00000  77.5%     -    0s
+H    0     0                    6368.0000000 6139.00000  3.60%     -    0s
+     0     0 6154.75000    0   15 6368.00000 6154.75000  3.35%     -    0s
+Enforcing 2 subtour elimination constraints
+     0     0 6154.75000    0    6 6368.00000 6154.75000  3.35%     -    0s
+     0     0 6165.75000    0   11 6368.00000 6165.75000  3.18%     -    0s
+Enforcing 3 subtour elimination constraints
+     0     0 6204.00000    0    6 6368.00000 6204.00000  2.58%     -    0s
+*    0     0               0    6219.0000000 6219.00000  0.00%     -    0s
+
+Cutting planes:
+  Gomory: 5
+  MIR: 1
+  Zero half: 4
+  Lazy constraints: 4
+
+Explored 1 nodes (224 simplex iterations) in 0.10 seconds (0.03 work units)
+Thread count was 1 (of 20 available processors)
+
+Solution count 4: 6219 6368 27241 29695
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%
+
+User-callback calls 170, time in user-callback 0.01 sec
+
+
+
+
+

4.4. Learning user cuts

+

The example above focused on lazy constraints. To enforce user cuts instead, the procedure is very similar, with the following changes:

+
    +
  • Instead of lazy_separate and lazy_enforce, use cuts_separate and cuts_enforce

  • +
  • Instead of m.inner.cbGetSolution, use m.inner.cbGetNodeRel

  • +
+

For a complete example, see build_stab_model_gurobipy, build_stab_model_pyomo and build_stab_model_jump, which solve the maximum-weight stable set problem using user cut callbacks. A minimal sketch of the adapted callbacks is shown below.

+
+
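As a minimal sketch of these two changes (assuming GurobiModel accepts cuts_separate and cuts_enforce keyword arguments analogous to the lazy ones used earlier; this is not the actual implementation of build_stab_model_gurobipy), the hypothetical model below solves a maximum-weight stable set problem with the usual edge inequalities and uses user cuts only to add violated triangle (clique) inequalities found in the fractional node relaxation:

import itertools

import gurobipy as gp
import networkx as nx
from gurobipy import GRB, quicksum
from miplearn.solvers.gurobi import GurobiModel


def build_stab_model_sketch(graph: nx.Graph, weights: dict) -> GurobiModel:
    model = gp.Model()

    # Edge inequalities already give a correct (but weak) stable set model.
    x = model.addVars(list(graph.nodes), vtype=GRB.BINARY, name="x")
    model.setObjective(quicksum(weights[v] * x[v] for v in graph.nodes), GRB.MAXIMIZE)
    model.addConstrs((x[i] + x[j] <= 1 for (i, j) in graph.edges), name="eq_edge")

    def cuts_separate(m: GurobiModel):
        # Query the fractional node relaxation (instead of cbGetSolution).
        x_val = m.inner.cbGetNodeRel(x)
        violations = []
        # Simple separation heuristic: enumerate triangles and collect those
        # whose clique inequality is violated by the fractional solution.
        for (i, j, k) in itertools.combinations(graph.nodes, 3):
            if graph.has_edge(i, j) and graph.has_edge(j, k) and graph.has_edge(i, k):
                if x_val[i] + x_val[j] + x_val[k] > 1 + 1e-6:
                    violations.append([i, j, k])
        return violations

    def cuts_enforce(m: GurobiModel, violations) -> None:
        # Convert each violated triangle back into a clique inequality.
        for clique in violations:
            m.add_constr(quicksum(x[v] for v in clique) <= 1)

    return GurobiModel(model, cuts_separate=cuts_separate, cuts_enforce=cuts_enforce)

Note that, unlike subtour elimination for TSP, these clique cuts are optional strengthenings: the edge inequalities alone already define the problem correctly, which is exactly the situation where user cuts (rather than lazy constraints) are appropriate.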
[ ]:
+
+
+

+
+
+
+
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/tutorials/getting-started-gurobipy.ipynb b/0.4/tutorials/getting-started-gurobipy.ipynb new file mode 100644 index 00000000..110e3f43 --- /dev/null +++ b/0.4/tutorials/getting-started-gurobipy.ipynb @@ -0,0 +1,837 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "6b8983b1", + "metadata": { + "tags": [] + }, + "source": [ + "# Getting started (Gurobipy)\n", + "\n", + "## Introduction\n", + "\n", + "**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n", + "\n", + "1. Install the Python/Gurobipy version of MIPLearn\n", + "2. Model a simple optimization problem using Gurobipy\n", + "3. Generate training data and train the ML models\n", + "4. Use the ML models together Gurobi to solve new instances\n", + "\n", + "
\n", + "Note\n", + " \n", + "The Python/Gurobipy version of MIPLearn is only compatible with the Gurobi Optimizer. For broader solver compatibility, see the Python/Pyomo and Julia/JuMP versions of the package.\n", + "
\n", + "\n", + "
\n", + "Warning\n", + " \n", + "MIPLearn is still in early development stage. If run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n", + " \n", + "
\n" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "02f0a927", + "metadata": {}, + "source": [ + "## Installation\n", + "\n", + "MIPLearn is available in two versions:\n", + "\n", + "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", + "- Julia version, compatible with the JuMP modeling language.\n", + "\n", + "In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n", + "\n", + "```\n", + "$ pip install MIPLearn==0.3\n", + "```\n", + "\n", + "In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems.\n", + "\n", + "```\n", + "$ pip install 'gurobipy>=10,<10.1'\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "a14e4550", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Note\n", + " \n", + "In the code above, we install specific version of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Python projects.\n", + " \n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "16b86823", + "metadata": {}, + "source": [ + "## Modeling a simple optimization problem\n", + "\n", + "To illustrate how can MIPLearn be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n", + "\n", + "Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power should each generator produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ to $p^\\text{max}_i$ megawatts of power, and it costs the company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand $d$ (in megawatts).\n", + "\n", + "This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power does $g_i$ produce. The problem is then given by:" + ] + }, + { + "cell_type": "markdown", + "id": "f12c3702", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align}\n", + "\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n", + "\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n", + "& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n", + "& \\sum_{i=1}^n y_i = d \\\\\n", + "& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n", + "& y_i \\geq 0 & i=1,\\ldots,n\n", + "\\end{align}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "be3989ed", + "metadata": {}, + "source": [ + "
\n", + "\n", + "Note\n", + "\n", + "We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "a5fd33f6", + "metadata": {}, + "source": [ + "Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Python and Pyomo. We start by defining a data class `UnitCommitmentData`, which holds all the input data." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "22a67170-10b4-43d3-8708-014d91141e73", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:18:25.442346786Z", + "start_time": "2023-06-06T20:18:25.329017476Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "from dataclasses import dataclass\n", + "from typing import List\n", + "\n", + "import numpy as np\n", + "\n", + "\n", + "@dataclass\n", + "class UnitCommitmentData:\n", + " demand: float\n", + " pmin: List[float]\n", + " pmax: List[float]\n", + " cfix: List[float]\n", + " cvar: List[float]" + ] + }, + { + "cell_type": "markdown", + "id": "29f55efa-0751-465a-9b0a-a821d46a3d40", + "metadata": {}, + "source": [ + "Next, we write a `build_uc_model` function, which converts the input data into a concrete Pyomo model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a compressed pickle file containing this data." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "2f67032f-0d74-4317-b45c-19da0ec859e9", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:48:05.953902842Z", + "start_time": "2023-06-06T20:48:05.909747925Z" + } + }, + "outputs": [], + "source": [ + "import gurobipy as gp\n", + "from gurobipy import GRB, quicksum\n", + "from typing import Union\n", + "from miplearn.io import read_pkl_gz\n", + "from miplearn.solvers.gurobi import GurobiModel\n", + "\n", + "\n", + "def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:\n", + " if isinstance(data, str):\n", + " data = read_pkl_gz(data)\n", + "\n", + " model = gp.Model()\n", + " n = len(data.pmin)\n", + " x = model._x = model.addVars(n, vtype=GRB.BINARY, name=\"x\")\n", + " y = model._y = model.addVars(n, name=\"y\")\n", + " model.setObjective(\n", + " quicksum(data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n))\n", + " )\n", + " model.addConstrs(y[i] <= data.pmax[i] * x[i] for i in range(n))\n", + " model.addConstrs(y[i] >= data.pmin[i] * x[i] for i in range(n))\n", + " model.addConstr(quicksum(y[i] for i in range(n)) == data.demand)\n", + " return GurobiModel(model)" + ] + }, + { + "cell_type": "markdown", + "id": "c22714a3", + "metadata": {}, + "source": [ + "At this point, we can already use Pyomo and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. 
To illustrate this, let us solve a small instance with three generators:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "2a896f47", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:14.266758244Z", + "start_time": "2023-06-06T20:49:14.223514806Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", + "Model fingerprint: 0x58dfdd53\n", + "Variable types: 3 continuous, 3 integer (3 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 7e+01]\n", + " Objective range [2e+00, 7e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+02, 1e+02]\n", + "Presolve removed 2 rows and 1 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 5 rows, 5 columns, 13 nonzeros\n", + "Variable types: 0 continuous, 5 integer (3 binary)\n", + "Found heuristic solution: objective 1400.0000000\n", + "\n", + "Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n", + " 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n", + "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 2: 1320 1400 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", + "obj = 1320.0\n", + "x = [-0.0, 1.0, 1.0]\n", + "y = [0.0, 60.0, 40.0]\n" + ] + } + ], + "source": [ + "model = build_uc_model(\n", + " UnitCommitmentData(\n", + " demand=100.0,\n", + " pmin=[10, 20, 30],\n", + " pmax=[50, 60, 70],\n", + " cfix=[700, 600, 500],\n", + " cvar=[1.5, 2.0, 2.5],\n", + " )\n", + ")\n", + "\n", + "model.optimize()\n", + "print(\"obj =\", model.inner.objVal)\n", + "print(\"x =\", [model.inner._x[i].x for i in range(3)])\n", + "print(\"y =\", [model.inner._y[i].x for i in range(3)])" + ] + }, + { + "cell_type": "markdown", + "id": "41b03bbc", + "metadata": {}, + "source": [ + "Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieve by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power." + ] + }, + { + "cell_type": "markdown", + "id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Note\n", + "\n", + "- In the example above, `GurobiModel` is just a thin wrapper around a standard Gurobi model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original Gurobi model can be accessed through `model.inner`, as illustrated above.\n", + "- To ensure training data consistency, MIPLearn requires all decision variables to have names.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "cf60c1dd", + "metadata": {}, + "source": [ + "## Generating training data\n", + "\n", + "Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as it is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n", + "\n", + "In the following, we will use MIPLearn to train machine learning models that is able to predict the optimal solution for instances that follow a given probability distribution, then it will provide this predicted solution to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "5eb09fab", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:22.758192368Z", + "start_time": "2023-06-06T20:49:22.724784572Z" + } + }, + "outputs": [], + "source": [ + "from scipy.stats import uniform\n", + "from typing import List\n", + "import random\n", + "\n", + "\n", + "def random_uc_data(samples: int, n: int, seed: int = 42) -> List[UnitCommitmentData]:\n", + " random.seed(seed)\n", + " np.random.seed(seed)\n", + " pmin = uniform(loc=100_000.0, scale=400_000.0).rvs(n)\n", + " pmax = pmin * uniform(loc=2.0, scale=2.5).rvs(n)\n", + " cfix = pmin * uniform(loc=100.0, scale=25.0).rvs(n)\n", + " cvar = uniform(loc=1.25, scale=0.25).rvs(n)\n", + " return [\n", + " UnitCommitmentData(\n", + " demand=pmax.sum() * uniform(loc=0.5, scale=0.25).rvs(),\n", + " pmin=pmin,\n", + " pmax=pmax,\n", + " cfix=cfix,\n", + " cvar=cvar,\n", + " )\n", + " for _ in range(samples)\n", + " ]" + ] + }, + { + "cell_type": "markdown", + "id": "3a03a7ac", + "metadata": {}, + "source": [ + "In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n", + "\n", + "Now we generate 500 instances of this problem, each one with 50 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete Pyomo models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00000.pkl.gz`, `uc/train/00001.pkl.gz`, etc., which contain the input data in compressed (gzipped) pickle format." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "6156752c", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:24.811192929Z", + "start_time": "2023-06-06T20:49:24.575639142Z" + } + }, + "outputs": [], + "source": [ + "from miplearn.io import write_pkl_gz\n", + "\n", + "data = random_uc_data(samples=500, n=500)\n", + "train_data = write_pkl_gz(data[0:450], \"uc/train\")\n", + "test_data = write_pkl_gz(data[450:500], \"uc/test\")" + ] + }, + { + "cell_type": "markdown", + "id": "b17af877", + "metadata": {}, + "source": [ + "Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files `uc/train/00000.h5`, `uc/train/00001.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00000.mps.gz`, `uc/train/00001.mps.gz`, etc." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "7623f002", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:34.936729253Z", + "start_time": "2023-06-06T20:49:25.936126612Z" + } + }, + "outputs": [], + "source": [ + "from miplearn.collectors.basic import BasicCollector\n", + "\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_uc_model, n_jobs=4)" + ] + }, + { + "cell_type": "markdown", + "id": "c42b1be1-9723-4827-82d8-974afa51ef9f", + "metadata": {}, + "source": [ + "## Training and solving test instances" + ] + }, + { + "cell_type": "markdown", + "id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c", + "metadata": {}, + "source": [ + "With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using $k$-nearest neighbors. More specifically, the strategy is to:\n", + "\n", + "1. Memorize the optimal solutions of all training instances;\n", + "2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n", + "3. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.\n", + "4. Provide this partial solution to the solver as a warm start.\n", + "\n", + "This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:38.997939600Z", + "start_time": "2023-06-06T20:49:38.968261432Z" + } + }, + "outputs": [], + "source": [ + "from sklearn.neighbors import KNeighborsClassifier\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "from miplearn.components.primal.mem import (\n", + " MemorizingPrimalComponent,\n", + " MergeTopSolutions,\n", + ")\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "\n", + "comp = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=25),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_constr_rhs\"],\n", + " ),\n", + " constructor=MergeTopSolutions(25, [0.0, 1.0]),\n", + " action=SetWarmStart(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94", + "metadata": {}, + "source": [ + "Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:42.072345411Z", + "start_time": "2023-06-06T20:49:41.294040974Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xa8b70287\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.01s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", + " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.290621916e+09\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xcf27855a\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "\n", + "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n", + "Loaded user MIP start with objective 8.29153e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 2 8.2915e+09 8.2907e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 1\n", + " Flow cover: 2\n", + "\n", + "Explored 1 nodes (565 simplex iterations) in 0.03 seconds (0.01 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 8.29153e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%\n" + ] + }, + { + "data": { + "text/plain": [ + "{'WS: Count': 1, 'WS: Number of variables set': 482.0}" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from miplearn.solvers.learning import LearningSolver\n", + "\n", + "solver_ml = LearningSolver(components=[comp])\n", + "solver_ml.fit(train_data)\n", + "solver_ml.optimize(test_data[0], build_uc_model)" + ] + }, + { + "cell_type": "markdown", + "id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0", + "metadata": {}, + "source": [ + "By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but a solver which does not apply any ML strategies. Note that our previously-defined component is not provided." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "2ff391ed-e855-4228-aa09-a7641d8c2893", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:49:44.012782276Z", + "start_time": "2023-06-06T20:49:43.813974362Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xa8b70287\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. Time\n", + " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", + " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.290621916e+09\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x4cbbf7c7\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Found heuristic solution: objective 9.757128e+09\n", + "\n", + "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2906e+09 0 1 9.7571e+09 8.2906e+09 15.0% - 0s\n", + "H 0 0 8.298273e+09 8.2906e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2983e+09 8.2907e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", + "H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", + "H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 2\n", + " MIR: 1\n", + "\n", + "Explored 1 nodes (1031 simplex iterations) in 0.15 seconds (0.03 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%\n" + ] + }, + { + "data": { + 
"text/plain": [ + "{}" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "solver_baseline = LearningSolver(components=[])\n", + "solver_baseline.fit(train_data)\n", + "solver_baseline.optimize(test_data[0], build_uc_model)" + ] + }, + { + "cell_type": "markdown", + "id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266", + "metadata": {}, + "source": [ + "In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems." + ] + }, + { + "cell_type": "markdown", + "id": "eec97f06", + "metadata": { + "tags": [] + }, + "source": [ + "## Accessing the solution\n", + "\n", + "In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "67a6cd18", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:50:12.869892930Z", + "start_time": "2023-06-06T20:50:12.509410473Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x19042f12\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n", + " 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.253596777e+09\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xf97cde91\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "\n", + "User MIP start produced solution with objective 8.25814e+09 (0.00s)\n", + "User MIP start produced solution with objective 8.25512e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", + "Loaded user MIP start with objective 8.25459e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Cover: 1\n", + " MIR: 2\n", + " StrongCG: 1\n", + " Flow cover: 1\n", + "\n", + "Explored 1 nodes (575 simplex iterations) in 0.05 seconds (0.01 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 4: 8.25459e+09 8.25483e+09 8.25512e+09 8.25814e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n", + "obj = 8254590409.969726\n", + "x = [1.0, 1.0, 0.0]\n", + "y = [935662.0949262811, 1604270.0218116897, 0.0]\n" + ] + } + ], + "source": [ + "data = random_uc_data(samples=1, n=500)[0]\n", + "model = build_uc_model(data)\n", + "solver_ml.optimize(model)\n", + "print(\"obj =\", model.inner.objVal)\n", + "print(\"x =\", [model.inner._x[i].x for i in range(3)])\n", + "print(\"y =\", [model.inner._y[i].x for i in range(3)])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5593d23a-83bd-4e16-8253-6300f5e3f63b", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + 
"name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/tutorials/getting-started-gurobipy/index.html b/0.4/tutorials/getting-started-gurobipy/index.html new file mode 100644 index 00000000..dc6702cf --- /dev/null +++ b/0.4/tutorials/getting-started-gurobipy/index.html @@ -0,0 +1,888 @@ + + + + + + + + 2. Getting started (Gurobipy) — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

2. Getting started (Gurobipy)

+
+

2.1. Introduction

+

MIPLearn is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:

+
    +
  1. Install the Python/Gurobipy version of MIPLearn

  2. +
  3. Model a simple optimization problem using Gurobipy

  4. +
  5. Generate training data and train the ML models

  6. +
  7. Use the ML models, together with Gurobi, to solve new instances

  8. +
+
+

Note

+

The Python/Gurobipy version of MIPLearn is only compatible with the Gurobi Optimizer. For broader solver compatibility, see the Python/Pyomo and Julia/JuMP versions of the package.

+
+
+

Warning

+

MIPLearn is still in an early development stage. If you run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!

+
+
+
+

2.2. Installation

+

MIPLearn is available in two versions:

+
    +
  • Python version, compatible with the Pyomo and Gurobipy modeling languages,

  • +
  • Julia version, compatible with the JuMP modeling language.

  • +
+

In this tutorial, we will demonstrate how to install and use the Python/Gurobipy version of the package. The first step is to install Python 3.8+ on your computer. See the official Python website for more instructions. After Python is installed, we proceed to install MIPLearn using pip:

+
$ pip install MIPLearn==0.3
+
+
+

In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also installs a demo license for Gurobi, which should be able to solve the small optimization problems in this tutorial. A full license is required for solving larger-scale problems.

+
$ pip install 'gurobipy>=10,<10.1'
+
+
+
+

Note

+

In the code above, we install specific versions of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Python projects.

+
+
+
+

2.3. Modeling a simple optimization problem

+

To illustrate how MIPLearn can be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the unit commitment problem, a practical optimization problem solved daily by electric grid operators around the world.

+

Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power each generator should produce. More specifically, assume that the company owns \(n\) generators, denoted by \(g_1, \ldots, g_n\). Each generator can either be online or offline. An online generator \(g_i\) can produce between \(p^\text{min}_i\) and \(p^\text{max}_i\) megawatts of power, and it costs the company \(c^\text{fix}_i + c^\text{var}_i y_i\), where \(y_i\) is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand \(d\) (in megawatts).

+

This simple problem can be modeled as a mixed-integer linear optimization problem as follows. For each generator \(g_i\), let \(x_i \in \{0,1\}\) be a decision variable indicating whether \(g_i\) is online, and let \(y_i \geq 0\) be a decision variable indicating how much power \(g_i\) produces. The problem is then given by:

+
+\[\begin{split}\begin{align} +\text{minimize } \quad & \sum_{i=1}^n \left( c^\text{fix}_i x_i + c^\text{var}_i y_i \right) \\ +\text{subject to } \quad & y_i \leq p^\text{max}_i x_i & i=1,\ldots,n \\ +& y_i \geq p^\text{min}_i x_i & i=1,\ldots,n \\ +& \sum_{i=1}^n y_i = d \\ +& x_i \in \{0,1\} & i=1,\ldots,n \\ +& y_i \geq 0 & i=1,\ldots,n +\end{align}\end{split}\]
+
+

Note

+

We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.

+
+

Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Python and Gurobipy. We start by defining a data class UnitCommitmentData, which holds all the input data.

+
+
[1]:
+
+
+
from dataclasses import dataclass
+from typing import List
+
+import numpy as np
+
+
+@dataclass
+class UnitCommitmentData:
+    demand: float
+    pmin: List[float]
+    pmax: List[float]
+    cfix: List[float]
+    cvar: List[float]
+
+
+
+

Next, we write a build_uc_model function, which converts the input data into a concrete Gurobipy model. The function accepts UnitCommitmentData, the data structure we previously defined, or the path to a compressed pickle file containing this data.

+
+
[2]:
+
+
+
import gurobipy as gp
+from gurobipy import GRB, quicksum
+from typing import Union
+from miplearn.io import read_pkl_gz
+from miplearn.solvers.gurobi import GurobiModel
+
+
+def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:
+    if isinstance(data, str):
+        data = read_pkl_gz(data)
+
+    model = gp.Model()
+    n = len(data.pmin)
+    x = model._x = model.addVars(n, vtype=GRB.BINARY, name="x")
+    y = model._y = model.addVars(n, name="y")
+    model.setObjective(
+        quicksum(data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n))
+    )
+    model.addConstrs(y[i] <= data.pmax[i] * x[i] for i in range(n))
+    model.addConstrs(y[i] >= data.pmin[i] * x[i] for i in range(n))
+    model.addConstr(quicksum(y[i] for i in range(n)) == data.demand)
+    return GurobiModel(model)
+
+
+
+

At this point, we can already use Gurobi to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators:

+
+
[3]:
+
+
+
model = build_uc_model(
+    UnitCommitmentData(
+        demand=100.0,
+        pmin=[10, 20, 30],
+        pmax=[50, 60, 70],
+        cfix=[700, 600, 500],
+        cvar=[1.5, 2.0, 2.5],
+    )
+)
+
+model.optimize()
+print("obj =", model.inner.objVal)
+print("x =", [model.inner._x[i].x for i in range(3)])
+print("y =", [model.inner._y[i].x for i in range(3)])
+
+
+
+
+
+
+
+
+Restricted license - for non-production use only - expires 2024-10-28
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 7 rows, 6 columns and 15 nonzeros
+Model fingerprint: 0x58dfdd53
+Variable types: 3 continuous, 3 integer (3 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 7e+01]
+  Objective range  [2e+00, 7e+02]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+02, 1e+02]
+Presolve removed 2 rows and 1 columns
+Presolve time: 0.00s
+Presolved: 5 rows, 5 columns, 13 nonzeros
+Variable types: 0 continuous, 5 integer (3 binary)
+Found heuristic solution: objective 1400.0000000
+
+Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 1035.00000    0    1 1400.00000 1035.00000  26.1%     -    0s
+     0     0 1105.71429    0    1 1400.00000 1105.71429  21.0%     -    0s
+*    0     0               0    1320.0000000 1320.00000  0.00%     -    0s
+
+Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 2: 1320 1400
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%
+obj = 1320.0
+x = [-0.0, 1.0, 1.0]
+y = [0.0, 60.0, 40.0]
+
+
+

Running the code above, we found that the optimal solution for our small problem instance costs $1320. It is achieved by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power.
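As a quick sanity check on this figure: generators 2 and 3 have fixed costs \(600 + 500 = 1100\), their variable costs are \(2.0 \times 60 + 2.5 \times 40 = 220\), and the total is indeed \(1100 + 220 = 1320\).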

+
+

Note

+
    +
  • In the example above, GurobiModel is just a thin wrapper around a standard Gurobi model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as optimize. For more control, and to query the solution, the original Gurobi model can be accessed through model.inner, as illustrated above.

  • +
  • To ensure training data consistency, MIPLearn requires all decision variables to have names.

  • +
+
+
+
+

2.4. Generating training data

+

Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as is often the case in practice, it could make sense to spend some time upfront generating a trained solver, which can optimize new instances (similar to the ones it was trained on) faster.

+

In the following, we will use MIPLearn to train machine learning models that are able to predict the optimal solution for instances drawn from a given probability distribution, and then provide this predicted solution to Gurobi as a warm start. Before we can train the models, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:

+
+
[4]:
+
+
+
from scipy.stats import uniform
+from typing import List
+import random
+
+
+def random_uc_data(samples: int, n: int, seed: int = 42) -> List[UnitCommitmentData]:
+    random.seed(seed)
+    np.random.seed(seed)
+    pmin = uniform(loc=100_000.0, scale=400_000.0).rvs(n)
+    pmax = pmin * uniform(loc=2.0, scale=2.5).rvs(n)
+    cfix = pmin * uniform(loc=100.0, scale=25.0).rvs(n)
+    cvar = uniform(loc=1.25, scale=0.25).rvs(n)
+    return [
+        UnitCommitmentData(
+            demand=pmax.sum() * uniform(loc=0.5, scale=0.25).rvs(),
+            pmin=pmin,
+            pmax=pmax,
+            cfix=cfix,
+            cvar=cvar,
+        )
+        for _ in range(samples)
+    ]
+
+
+
+

In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits, or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.
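We can verify this behavior directly. The small check below is added for illustration and is not part of the original tutorial code: generating two tiny instances shows that their demands differ while their production limits are identical.

examples = random_uc_data(samples=2, n=3)
print(examples[0].demand, examples[1].demand)  # demands differ between instances
print(examples[0].pmin, examples[1].pmin)      # production limits are shared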

+

Now we generate 500 instances of this problem, each one with 500 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete optimization models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files uc/train/00000.pkl.gz, uc/train/00001.pkl.gz, etc., which contain the input data in compressed (gzipped) pickle format.

+
+
[5]:
+
+
+
from miplearn.io import write_pkl_gz
+
+data = random_uc_data(samples=500, n=500)
+train_data = write_pkl_gz(data[0:450], "uc/train")
+test_data = write_pkl_gz(data[450:500], "uc/test")
+
+
+
+
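If desired, we can read one of these files back with read_pkl_gz, the same helper used inside build_uc_model, to double-check its contents. This is a small sketch for illustration; it assumes, as the later calls suggest, that write_pkl_gz returns the list of generated file names.

from miplearn.io import read_pkl_gz

instance = read_pkl_gz(train_data[0])  # load uc/train/00000.pkl.gz back into memory
print(instance.demand)                 # randomized demand of this instance
print(len(instance.pmin))              # 500 generators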

Finally, we use BasicCollector to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files uc/train/00000.h5, uc/train/00001.h5, etc. The optimization models are also exported to compressed MPS files uc/train/00000.mps.gz, uc/train/00001.mps.gz, etc.

+
+
[6]:
+
+
+
from miplearn.collectors.basic import BasicCollector
+
+bc = BasicCollector()
+bc.collect(train_data, build_uc_model, n_jobs=4)
+
+
+
+
+
+
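Before training, it can be instructive to peek inside one of the HDF5 files produced by the collector. The sketch below is not part of the original tutorial; it simply lists whatever datasets the collector stored, using the h5py package (assumed to be available, since MIPLearn itself stores training data in HDF5).

import h5py

with h5py.File("uc/train/00000.h5", "r") as h5:
    for key in sorted(h5.keys()):
        item = h5[key]
        shape = item.shape if hasattr(item, "shape") else "(group)"
        print(key, shape)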

2.5. Training and solving test instances

+

With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using \(k\)-nearest neighbors. More specifically, the strategy is to:

+
    +
  1. Memorize the optimal solutions of all training instances;

  2. +
  3. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;

  4. +
  5. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.

  6. +
  7. Provide this partial solution to the solver as a warm start.

  8. +
+

This simple strategy can be implemented as shown below, using MemorizingPrimalComponent. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide.

+
+
[7]:
+
+
+
from sklearn.neighbors import KNeighborsClassifier
+from miplearn.components.primal.actions import SetWarmStart
+from miplearn.components.primal.mem import (
+    MemorizingPrimalComponent,
+    MergeTopSolutions,
+)
+from miplearn.extractors.fields import H5FieldsExtractor
+
+comp = MemorizingPrimalComponent(
+    clf=KNeighborsClassifier(n_neighbors=25),
+    extractor=H5FieldsExtractor(
+        instance_fields=["static_constr_rhs"],
+    ),
+    constructor=MergeTopSolutions(25, [0.0, 1.0]),
+    action=SetWarmStart(),
+)
+
+
+
+
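To make step 3 of the strategy above more concrete, the following NumPy sketch (an illustration only, not the actual implementation of MergeTopSolutions) shows how a partial solution can be built by fixing only the binary variables on which all nearest neighbors agree:

import numpy as np

# Optimal values of four binary variables in three nearest training instances.
neighbor_solutions = np.array([
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 1, 1],
])

# Fix a variable only where all neighbors agree; leave the rest free (NaN).
unanimous = (neighbor_solutions == neighbor_solutions[0]).all(axis=0)
partial = np.where(unanimous, neighbor_solutions[0].astype(float), np.nan)
print(partial)  # [ 1.  0. nan  1.]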

Having defined the ML strategy, we next construct LearningSolver, train the ML component and optimize one of the test instances.

+
+
[8]:
+
+
+
from miplearn.solvers.learning import LearningSolver
+
+solver_ml = LearningSolver(components=[comp])
+solver_ml.fit(train_data)
+solver_ml.optimize(test_data[0], build_uc_model)
+
+
+
+
+
+
+
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0xa8b70287
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+Presolve removed 1000 rows and 500 columns
+Presolve time: 0.01s
+Presolved: 1 rows, 500 columns, 500 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    6.6166537e+09   5.648803e+04   0.000000e+00      0s
+       1    8.2906219e+09   0.000000e+00   0.000000e+00      0s
+
+Solved in 1 iterations and 0.01 seconds (0.00 work units)
+Optimal objective  8.290621916e+09
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0xcf27855a
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+
+User MIP start produced solution with objective 8.29153e+09 (0.01s)
+User MIP start produced solution with objective 8.29153e+09 (0.01s)
+Loaded user MIP start with objective 8.29153e+09
+
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+
+Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 8.2906e+09    0    1 8.2915e+09 8.2906e+09  0.01%     -    0s
+     0     0 8.2907e+09    0    3 8.2915e+09 8.2907e+09  0.01%     -    0s
+     0     0 8.2907e+09    0    1 8.2915e+09 8.2907e+09  0.01%     -    0s
+     0     0 8.2907e+09    0    2 8.2915e+09 8.2907e+09  0.01%     -    0s
+
+Cutting planes:
+  Gomory: 1
+  Flow cover: 2
+
+Explored 1 nodes (565 simplex iterations) in 0.03 seconds (0.01 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 1: 8.29153e+09
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%
+
+
+
+
[8]:
+
+
+
+
+{'WS: Count': 1, 'WS: Number of variables set': 482.0}
+
+
+

By examining the solve log above, specifically the line Loaded user MIP start with objective..., we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but with a solver that does not apply any ML strategies. Note that our previously-defined component is not provided.

+
+
[9]:
+
+
+
solver_baseline = LearningSolver(components=[])
+solver_baseline.fit(train_data)
+solver_baseline.optimize(test_data[0], build_uc_model)
+
+
+
+
+
+
+
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0xa8b70287
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+Presolve removed 1000 rows and 500 columns
+Presolve time: 0.00s
+Presolved: 1 rows, 500 columns, 500 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    6.6166537e+09   5.648803e+04   0.000000e+00      0s
+       1    8.2906219e+09   0.000000e+00   0.000000e+00      0s
+
+Solved in 1 iterations and 0.01 seconds (0.00 work units)
+Optimal objective  8.290621916e+09
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x4cbbf7c7
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+Found heuristic solution: objective 9.757128e+09
+
+Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 8.2906e+09    0    1 9.7571e+09 8.2906e+09  15.0%     -    0s
+H    0     0                    8.298273e+09 8.2906e+09  0.09%     -    0s
+     0     0 8.2907e+09    0    4 8.2983e+09 8.2907e+09  0.09%     -    0s
+     0     0 8.2907e+09    0    1 8.2983e+09 8.2907e+09  0.09%     -    0s
+     0     0 8.2907e+09    0    4 8.2983e+09 8.2907e+09  0.09%     -    0s
+H    0     0                    8.293980e+09 8.2907e+09  0.04%     -    0s
+     0     0 8.2907e+09    0    5 8.2940e+09 8.2907e+09  0.04%     -    0s
+     0     0 8.2907e+09    0    1 8.2940e+09 8.2907e+09  0.04%     -    0s
+     0     0 8.2907e+09    0    2 8.2940e+09 8.2907e+09  0.04%     -    0s
+     0     0 8.2908e+09    0    1 8.2940e+09 8.2908e+09  0.04%     -    0s
+     0     0 8.2908e+09    0    4 8.2940e+09 8.2908e+09  0.04%     -    0s
+     0     0 8.2908e+09    0    4 8.2940e+09 8.2908e+09  0.04%     -    0s
+H    0     0                    8.291465e+09 8.2908e+09  0.01%     -    0s
+
+Cutting planes:
+  Gomory: 2
+  MIR: 1
+
+Explored 1 nodes (1031 simplex iterations) in 0.15 seconds (0.03 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%
+
+
+
+
[9]:
+
+
+
+
+{}
+
+
+

In the log above, the MIP start line is missing, and Gurobi had to start from a significantly inferior initial solution. The solver was still able to find the optimal solution in the end, but it had to rely on its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in running time, but the difference can be significant for larger problems.

+
+
+

2.6. Accessing the solution

+

In the examples above, we provided data files to LearningSolver.fit and LearningSolver.optimize to train on and solve the instances stored on disk. In the following example, we show how to build and solve a Gurobipy model entirely in memory, using our trained solver.

+
+
[10]:
+
+
+
data = random_uc_data(samples=1, n=500)[0]
+model = build_uc_model(data)
+solver_ml.optimize(model)
+print("obj =", model.inner.objVal)
+print("x =", [model.inner._x[i].x for i in range(3)])
+print("y =", [model.inner._y[i].x for i in range(3)])
+
+
+
+
+
+
+
+
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x19042f12
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+Presolve removed 1000 rows and 500 columns
+Presolve time: 0.00s
+Presolved: 1 rows, 500 columns, 500 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    6.5917580e+09   5.627453e+04   0.000000e+00      0s
+       1    8.2535968e+09   0.000000e+00   0.000000e+00      0s
+
+Solved in 1 iterations and 0.01 seconds (0.00 work units)
+Optimal objective  8.253596777e+09
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0xf97cde91
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+
+User MIP start produced solution with objective 8.25814e+09 (0.00s)
+User MIP start produced solution with objective 8.25512e+09 (0.01s)
+User MIP start produced solution with objective 8.25483e+09 (0.01s)
+User MIP start produced solution with objective 8.25483e+09 (0.01s)
+User MIP start produced solution with objective 8.25483e+09 (0.01s)
+User MIP start produced solution with objective 8.25459e+09 (0.01s)
+User MIP start produced solution with objective 8.25459e+09 (0.01s)
+Loaded user MIP start with objective 8.25459e+09
+
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+
+Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 8.2536e+09    0    1 8.2546e+09 8.2536e+09  0.01%     -    0s
+     0     0 8.2537e+09    0    3 8.2546e+09 8.2537e+09  0.01%     -    0s
+     0     0 8.2537e+09    0    1 8.2546e+09 8.2537e+09  0.01%     -    0s
+     0     0 8.2537e+09    0    4 8.2546e+09 8.2537e+09  0.01%     -    0s
+     0     0 8.2537e+09    0    4 8.2546e+09 8.2537e+09  0.01%     -    0s
+     0     0 8.2538e+09    0    4 8.2546e+09 8.2538e+09  0.01%     -    0s
+     0     0 8.2538e+09    0    5 8.2546e+09 8.2538e+09  0.01%     -    0s
+     0     0 8.2538e+09    0    6 8.2546e+09 8.2538e+09  0.01%     -    0s
+
+Cutting planes:
+  Cover: 1
+  MIR: 2
+  StrongCG: 1
+  Flow cover: 1
+
+Explored 1 nodes (575 simplex iterations) in 0.05 seconds (0.01 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 4: 8.25459e+09 8.25483e+09 8.25512e+09 8.25814e+09
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%
+obj = 8254590409.969726
+x = [1.0, 1.0, 0.0]
+y = [935662.0949262811, 1604270.0218116897, 0.0]
+
+
+
+
[ ]:
+
+
+

+
+
+
+
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/tutorials/getting-started-jump.ipynb b/0.4/tutorials/getting-started-jump.ipynb new file mode 100644 index 00000000..8dbf587e --- /dev/null +++ b/0.4/tutorials/getting-started-jump.ipynb @@ -0,0 +1,680 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "6b8983b1", + "metadata": { + "tags": [] + }, + "source": [ + "# Getting started (JuMP)\n", + "\n", + "## Introduction\n", + "\n", + "**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n", + "\n", + "1. Install the Julia/JuMP version of MIPLearn\n", + "2. Model a simple optimization problem using JuMP\n", + "3. Generate training data and train the ML models\n", + "4. Use the ML models together Gurobi to solve new instances\n", + "\n", + "
\n", + "Warning\n", + " \n", + "MIPLearn is still in early development stage. If run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n", + " \n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "id": "02f0a927", + "metadata": {}, + "source": [ + "## Installation\n", + "\n", + "MIPLearn is available in two versions:\n", + "\n", + "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", + "- Julia version, compatible with the JuMP modeling language.\n", + "\n", + "In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Julia in your machine. See the [official Julia website for more instructions](https://julialang.org/downloads/). After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n", + "\n", + "```\n", + "pkg> add MIPLearn@0.3\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "e8274543", + "metadata": {}, + "source": [ + "In addition to MIPLearn itself, we will also install:\n", + "\n", + "- the JuMP modeling language\n", + "- Gurobi, a state-of-the-art commercial MILP solver\n", + "- Distributions, to generate random data\n", + "- PyCall, to access ML model from Scikit-Learn\n", + "- Suppressor, to make the output cleaner\n", + "\n", + "```\n", + "pkg> add JuMP@1, Gurobi@1, Distributions@0.25, PyCall@1, Suppressor@0.2\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "a14e4550", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Note\n", + "\n", + "- If you do not have a Gurobi license available, you can also follow the tutorial by installing an open-source solver, such as `HiGHS`, and replacing `Gurobi.Optimizer` by `HiGHS.Optimizer` in all the code examples.\n", + "- In the code above, we install specific version of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Julia projects.\n", + " \n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "16b86823", + "metadata": {}, + "source": [ + "## Modeling a simple optimization problem\n", + "\n", + "To illustrate how can MIPLearn be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n", + "\n", + "Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power should each generator produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ to $p^\\text{max}_i$ megawatts of power, and it costs the company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand $d$ (in megawatts).\n", + "\n", + "This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power does $g_i$ produce. The problem is then given by:" + ] + }, + { + "cell_type": "markdown", + "id": "f12c3702", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align}\n", + "\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n", + "\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n", + "& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n", + "& \\sum_{i=1}^n y_i = d \\\\\n", + "& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n", + "& y_i \\geq 0 & i=1,\\ldots,n\n", + "\\end{align}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "be3989ed", + "metadata": {}, + "source": [ + "
\n", + "\n", + "Note\n", + "\n", + "We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "a5fd33f6", + "metadata": {}, + "source": [ + "Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Julia and JuMP. We start by defining a data class `UnitCommitmentData`, which holds all the input data." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "c62ebff1-db40-45a1-9997-d121837f067b", + "metadata": {}, + "outputs": [], + "source": [ + "struct UnitCommitmentData\n", + " demand::Float64\n", + " pmin::Vector{Float64}\n", + " pmax::Vector{Float64}\n", + " cfix::Vector{Float64}\n", + " cvar::Vector{Float64}\n", + "end;" + ] + }, + { + "cell_type": "markdown", + "id": "29f55efa-0751-465a-9b0a-a821d46a3d40", + "metadata": {}, + "source": [ + "Next, we write a `build_uc_model` function, which converts the input data into a concrete JuMP model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a JLD2 file containing this data." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "79ef7775-18ca-4dfa-b438-49860f762ad0", + "metadata": {}, + "outputs": [], + "source": [ + "using MIPLearn\n", + "using JuMP\n", + "using Gurobi\n", + "\n", + "function build_uc_model(data)\n", + " if data isa String\n", + " data = read_jld2(data)\n", + " end\n", + " model = Model(Gurobi.Optimizer)\n", + " G = 1:length(data.pmin)\n", + " @variable(model, x[G], Bin)\n", + " @variable(model, y[G] >= 0)\n", + " @objective(model, Min, sum(data.cfix[g] * x[g] + data.cvar[g] * y[g] for g in G))\n", + " @constraint(model, eq_max_power[g in G], y[g] <= data.pmax[g] * x[g])\n", + " @constraint(model, eq_min_power[g in G], y[g] >= data.pmin[g] * x[g])\n", + " @constraint(model, eq_demand, sum(y[g] for g in G) == data.demand)\n", + " return JumpModel(model)\n", + "end;" + ] + }, + { + "cell_type": "markdown", + "id": "c22714a3", + "metadata": {}, + "source": [ + "At this point, we can already use Gurobi to find optimal solutions to any instance of this problem. 
To illustrate this, let us solve a small instance with three generators:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "dd828d68-fd43-4d2a-a058-3e2628d99d9e", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:01:10.993801745Z", + "start_time": "2023-06-06T20:01:10.887580927Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", + "\n", + "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n", + "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n", + "\n", + "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", + "Model fingerprint: 0x55e33a07\n", + "Variable types: 3 continuous, 3 integer (3 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 7e+01]\n", + " Objective range [2e+00, 7e+02]\n", + " Bounds range [0e+00, 0e+00]\n", + " RHS range [1e+02, 1e+02]\n", + "Presolve removed 2 rows and 1 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 5 rows, 5 columns, 13 nonzeros\n", + "Variable types: 0 continuous, 5 integer (3 binary)\n", + "Found heuristic solution: objective 1400.0000000\n", + "\n", + "Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n", + " 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n", + "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (5 simplex iterations) in 0.00 seconds (0.00 work units)\n", + "Thread count was 32 (of 32 available processors)\n", + "\n", + "Solution count 2: 1320 1400 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", + "\n", + "User-callback calls 371, time in user-callback 0.00 sec\n", + "objective_value(model.inner) = 1320.0\n", + "Vector(value.(model.inner[:x])) = [-0.0, 1.0, 1.0]\n", + "Vector(value.(model.inner[:y])) = [0.0, 60.0, 40.0]\n" + ] + } + ], + "source": [ + "model = build_uc_model(\n", + " UnitCommitmentData(\n", + " 100.0, # demand\n", + " [10, 20, 30], # pmin\n", + " [50, 60, 70], # pmax\n", + " [700, 600, 500], # cfix\n", + " [1.5, 2.0, 2.5], # cvar\n", + " )\n", + ")\n", + "model.optimize()\n", + "@show objective_value(model.inner)\n", + "@show Vector(value.(model.inner[:x]))\n", + "@show Vector(value.(model.inner[:y]));" + ] + }, + { + "cell_type": "markdown", + "id": "41b03bbc", + "metadata": {}, + "source": [ + "Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieve by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power." + ] + }, + { + "cell_type": "markdown", + "id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Notes\n", + " \n", + "- In the example above, `JumpModel` is just a thin wrapper around a standard JuMP model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original JuMP model can be accessed through `model.inner`, as illustrated above.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "cf60c1dd", + "metadata": {}, + "source": [ + "## Generating training data\n", + "\n", + "Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as it is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n", + "\n", + "In the following, we will use MIPLearn to train machine learning models that is able to predict the optimal solution for instances that follow a given probability distribution, then it will provide this predicted solution to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "1326efd7-3869-4137-ab6b-df9cb609a7e0", + "metadata": {}, + "outputs": [], + "source": [ + "using Distributions\n", + "using Random\n", + "\n", + "function random_uc_data(; samples::Int, n::Int, seed::Int=42)::Vector\n", + " Random.seed!(seed)\n", + " pmin = rand(Uniform(100_000, 500_000), n)\n", + " pmax = pmin .* rand(Uniform(2, 2.5), n)\n", + " cfix = pmin .* rand(Uniform(100, 125), n)\n", + " cvar = rand(Uniform(1.25, 1.50), n)\n", + " return [\n", + " UnitCommitmentData(\n", + " sum(pmax) * rand(Uniform(0.5, 0.75)),\n", + " pmin,\n", + " pmax,\n", + " cfix,\n", + " cvar,\n", + " )\n", + " for _ in 1:samples\n", + " ]\n", + "end;" + ] + }, + { + "cell_type": "markdown", + "id": "3a03a7ac", + "metadata": {}, + "source": [ + "In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n", + "\n", + "Now we generate 500 instances of this problem, each one with 50 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete Pyomo models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00001.jld2`, `uc/train/00002.jld2`, etc., which contain the input data in JLD2 format." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "6156752c", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:03:04.782830561Z", + "start_time": "2023-06-06T20:03:04.530421396Z" + } + }, + "outputs": [], + "source": [ + "data = random_uc_data(samples=500, n=500)\n", + "train_data = write_jld2(data[1:450], \"uc/train\")\n", + "test_data = write_jld2(data[451:500], \"uc/test\");" + ] + }, + { + "cell_type": "markdown", + "id": "b17af877", + "metadata": {}, + "source": [ + "Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. 
The data is stored in HDF5 files `uc/train/00001.h5`, `uc/train/00002.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00001.mps.gz`, `uc/train/00002.mps.gz`, etc." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "7623f002", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:03:35.571497019Z", + "start_time": "2023-06-06T20:03:25.804104036Z" + } + }, + "outputs": [], + "source": [ + "using Suppressor\n", + "@suppress_out begin\n", + " bc = BasicCollector()\n", + " bc.collect(train_data, build_uc_model)\n", + "end" + ] + }, + { + "cell_type": "markdown", + "id": "c42b1be1-9723-4827-82d8-974afa51ef9f", + "metadata": {}, + "source": [ + "## Training and solving test instances" + ] + }, + { + "cell_type": "markdown", + "id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c", + "metadata": {}, + "source": [ + "With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using $k$-nearest neighbors. More specifically, the strategy is to:\n", + "\n", + "1. Memorize the optimal solutions of all training instances;\n", + "2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n", + "3. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.\n", + "4. Provide this partial solution to the solver as a warm start.\n", + "\n", + "This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:20.497772794Z", + "start_time": "2023-06-06T20:05:20.484821405Z" + } + }, + "outputs": [], + "source": [ + "# Load kNN classifier from Scikit-Learn\n", + "using PyCall\n", + "KNeighborsClassifier = pyimport(\"sklearn.neighbors\").KNeighborsClassifier\n", + "\n", + "# Build the MIPLearn component\n", + "comp = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=25),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_constr_rhs\"],\n", + " ),\n", + " constructor=MergeTopSolutions(25, [0.0, 1.0]),\n", + " action=SetWarmStart(),\n", + ");" + ] + }, + { + "cell_type": "markdown", + "id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94", + "metadata": {}, + "source": [ + "Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:22.672002339Z", + "start_time": "2023-06-06T20:05:21.447466634Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", + "\n", + "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n", + "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xd2378195\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [0e+00, 0e+00]\n", + " RHS range [2e+08, 2e+08]\n", + "\n", + "User MIP start produced solution with objective 1.02165e+10 (0.00s)\n", + "Loaded user MIP start with objective 1.02165e+10\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1.0216e+10 0 1 1.0217e+10 1.0216e+10 0.01% - 0s\n", + "\n", + "Explored 1 nodes (510 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 32 (of 32 available processors)\n", + "\n", + "Solution count 1: 1.02165e+10 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.021651058978e+10, best bound 1.021567971257e+10, gap 0.0081%\n", + "\n", + "User-callback calls 169, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "solver_ml = LearningSolver(components=[comp])\n", + "solver_ml.fit(train_data)\n", + "solver_ml.optimize(test_data[1], build_uc_model);" + ] + }, + { + "cell_type": "markdown", + "id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0", + "metadata": {}, + "source": [ + "By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but a solver which does not apply any ML strategies. Note that our previously-defined component is not provided." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "2ff391ed-e855-4228-aa09-a7641d8c2893", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:46.969575966Z", + "start_time": "2023-06-06T20:05:46.420803286Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", + "\n", + "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n", + "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0xb45c0594\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [0e+00, 0e+00]\n", + " RHS range [2e+08, 2e+08]\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Found heuristic solution: objective 1.071463e+10\n", + "\n", + "Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1.0216e+10 0 1 1.0715e+10 1.0216e+10 4.66% - 0s\n", + "H 0 0 1.025162e+10 1.0216e+10 0.35% - 0s\n", + " 0 0 1.0216e+10 0 1 1.0252e+10 1.0216e+10 0.35% - 0s\n", + "H 0 0 1.023090e+10 1.0216e+10 0.15% - 0s\n", + "H 0 0 1.022335e+10 1.0216e+10 0.07% - 0s\n", + "H 0 0 1.022281e+10 1.0216e+10 0.07% - 0s\n", + "H 0 0 1.021753e+10 1.0216e+10 0.02% - 0s\n", + "H 0 0 1.021752e+10 1.0216e+10 0.02% - 0s\n", + " 0 0 1.0216e+10 0 3 1.0218e+10 1.0216e+10 0.02% - 0s\n", + " 0 0 1.0216e+10 0 1 1.0218e+10 1.0216e+10 0.02% - 0s\n", + "H 0 0 1.021651e+10 1.0216e+10 0.01% - 0s\n", + "\n", + "Explored 1 nodes (764 simplex iterations) in 0.03 seconds (0.02 work units)\n", + "Thread count was 32 (of 32 available processors)\n", + "\n", + "Solution count 7: 1.02165e+10 1.02175e+10 1.02228e+10 ... 1.07146e+10\n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.021651058978e+10, best bound 1.021573363741e+10, gap 0.0076%\n", + "\n", + "User-callback calls 204, time in user-callback 0.00 sec\n" + ] + } + ], + "source": [ + "solver_baseline = LearningSolver(components=[])\n", + "solver_baseline.fit(train_data)\n", + "solver_baseline.optimize(test_data[1], build_uc_model);" + ] + }, + { + "cell_type": "markdown", + "id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266", + "metadata": {}, + "source": [ + "In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems." + ] + }, + { + "cell_type": "markdown", + "id": "eec97f06", + "metadata": { + "tags": [] + }, + "source": [ + "## Accessing the solution\n", + "\n", + "In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "67a6cd18", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:06:26.913448568Z", + "start_time": "2023-06-06T20:06:26.169047914Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", + "\n", + "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n", + "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x974a7fba\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 1e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [0e+00, 0e+00]\n", + " RHS range [2e+08, 2e+08]\n", + "\n", + "User MIP start produced solution with objective 9.86729e+09 (0.00s)\n", + "User MIP start produced solution with objective 9.86675e+09 (0.00s)\n", + "User MIP start produced solution with objective 9.86654e+09 (0.01s)\n", + "User MIP start produced solution with objective 9.8661e+09 (0.01s)\n", + "Loaded user MIP start with objective 9.8661e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 9.865344e+09, 510 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 9.8653e+09 0 1 9.8661e+09 9.8653e+09 0.01% - 0s\n", + "\n", + "Explored 1 nodes (510 simplex iterations) in 0.02 seconds (0.01 work units)\n", + "Thread count was 32 (of 32 available processors)\n", + "\n", + "Solution count 4: 9.8661e+09 9.86654e+09 9.86675e+09 9.86729e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 9.866096485614e+09, best bound 9.865343669936e+09, gap 0.0076%\n", + "\n", + "User-callback calls 182, time in user-callback 0.00 sec\n", + "objective_value(model.inner) = 9.866096485613789e9\n" + ] + } + ], + "source": [ + "data = random_uc_data(samples=1, n=500)[1]\n", + "model = build_uc_model(data)\n", + "solver_ml.optimize(model)\n", + "@show objective_value(model.inner);" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Julia 1.9.0", + "language": "julia", + "name": "julia-1.9" + }, + "language_info": { + "file_extension": ".jl", + "mimetype": "application/julia", + "name": "julia", + "version": "1.9.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/tutorials/getting-started-jump/index.html b/0.4/tutorials/getting-started-jump/index.html new file mode 100644 index 00000000..877fa2b0 --- /dev/null +++ b/0.4/tutorials/getting-started-jump/index.html @@ -0,0 +1,755 @@ + + + + + + + + 3. Getting started (JuMP) — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ + +
+
+ +
+ +
+

3. Getting started (JuMP)

+
+

3.1. Introduction

+

MIPLearn is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:

+
    +
  1. Install the Julia/JuMP version of MIPLearn

  2. +
  3. Model a simple optimization problem using JuMP

  4. +
  5. Generate training data and train the ML models

  6. +
  7. Use the ML models together with Gurobi to solve new instances

  8. +
+
+

Warning

+

MIPLearn is still in an early development stage. If you run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!

+
+
+
+

3.2. Installation

+

MIPLearn is available in two versions:

+
    +
  • Python version, compatible with the Pyomo and Gurobipy modeling languages,

  • +
  • Julia version, compatible with the JuMP modeling language.

  • +
+

In this tutorial, we will demonstrate how to install and use the Julia/JuMP version of the package. The first step is to install Julia on your machine. See the official Julia website for more instructions. After Julia is installed, launch the Julia REPL, type ] to enter package mode, then install MIPLearn:

+
pkg> add MIPLearn@0.3
+
+
+

In addition to MIPLearn itself, we will also install:

+
    +
  • the JuMP modeling language

  • +
  • Gurobi, a state-of-the-art commercial MILP solver

  • +
  • Distributions, to generate random data

  • +
  • PyCall, to access ML models from Scikit-Learn

  • +
  • Suppressor, to make the output cleaner

  • +
+
pkg> add JuMP@1, Gurobi@1, Distributions@0.25, PyCall@1, Suppressor@0.2
+
+
+
+

Note

+
    +
  • If you do not have a Gurobi license available, you can also follow the tutorial by installing an open-source solver, such as HiGHS, and replacing Gurobi.Optimizer with HiGHS.Optimizer in all the code examples, as sketched below.

  • +
  • In the code above, we install specific versions of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Julia projects.

  • +
+
+
+
+
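For example, a minimal sketch of this swap, assuming the open-source HiGHS package has been added to your Julia environment (it is not used elsewhere in this tutorial):

using JuMP
using HiGHS
# Hypothetical swap: wherever this tutorial writes Model(Gurobi.Optimizer),
# construct the model with the open-source HiGHS solver instead.
model = Model(HiGHS.Optimizer)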

3.3. Modeling a simple optimization problem

+

To illustrate how MIPLearn can be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the unit commitment problem, a practical optimization problem solved daily by electric grid operators around the world.

+

Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power each generator should produce. More specifically, assume that the company owns \(n\) generators, denoted by \(g_1, \ldots, g_n\). Each generator can either be online or offline. An online generator \(g_i\) can produce between \(p^\text{min}_i\) and \(p^\text{max}_i\) megawatts of power, and it costs the company \(c^\text{fix}_i + c^\text{var}_i y_i\), where \(y_i\) is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand \(d\) (in megawatts).

+

This simple problem can be modeled as a mixed-integer linear optimization problem as follows. For each generator \(g_i\), let \(x_i \in \{0,1\}\) be a decision variable indicating whether \(g_i\) is online, and let \(y_i \geq 0\) be a decision variable indicating how much power \(g_i\) produces. The problem is then given by:

+
+\[\begin{split}\begin{align} +\text{minimize } \quad & \sum_{i=1}^n \left( c^\text{fix}_i x_i + c^\text{var}_i y_i \right) \\ +\text{subject to } \quad & y_i \leq p^\text{max}_i x_i & i=1,\ldots,n \\ +& y_i \geq p^\text{min}_i x_i & i=1,\ldots,n \\ +& \sum_{i=1}^n y_i = d \\ +& x_i \in \{0,1\} & i=1,\ldots,n \\ +& y_i \geq 0 & i=1,\ldots,n +\end{align}\end{split}\]
+
+

Note

+

We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.

+
+

Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Julia and JuMP. We start by defining a struct UnitCommitmentData, which holds all the input data.

+
+
[1]:
+
+
+
struct UnitCommitmentData
+    demand::Float64
+    pmin::Vector{Float64}
+    pmax::Vector{Float64}
+    cfix::Vector{Float64}
+    cvar::Vector{Float64}
+end;
+
+
+
+

Next, we write a build_uc_model function, which converts the input data into a concrete JuMP model. The function accepts UnitCommitmentData, the data structure we previously defined, or the path to a JLD2 file containing this data.

+
+
[2]:
+
+
+
using MIPLearn
+using JuMP
+using Gurobi
+
+function build_uc_model(data)
+    if data isa String
+        data = read_jld2(data)
+    end
+    model = Model(Gurobi.Optimizer)
+    G = 1:length(data.pmin)
+    @variable(model, x[G], Bin)
+    @variable(model, y[G] >= 0)
+    @objective(model, Min, sum(data.cfix[g] * x[g] + data.cvar[g] * y[g] for g in G))
+    @constraint(model, eq_max_power[g in G], y[g] <= data.pmax[g] * x[g])
+    @constraint(model, eq_min_power[g in G], y[g] >= data.pmin[g] * x[g])
+    @constraint(model, eq_demand, sum(y[g] for g in G) == data.demand)
+    return JumpModel(model)
+end;
+
+
+
+
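Either input form can be passed directly to build_uc_model. For example, assuming data holds a UnitCommitmentData value and that the JLD2 file below exists (such files are generated later in this tutorial):

model = build_uc_model(data)                   # from an in-memory struct
model = build_uc_model("uc/train/00001.jld2")  # from a JLD2 file on disk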

At this point, we can already use Gurobi to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators:

+
+
[3]:
+
+
+
model = build_uc_model(
+    UnitCommitmentData(
+        100.0,  # demand
+        [10, 20, 30],  # pmin
+        [50, 60, 70],  # pmax
+        [700, 600, 500],  # cfix
+        [1.5, 2.0, 2.5],  # cvar
+    )
+)
+model.optimize()
+@show objective_value(model.inner)
+@show Vector(value.(model.inner[:x]))
+@show Vector(value.(model.inner[:y]));
+
+
+
+
+
+
+
+
+Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
+
+CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
+Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
+
+Optimize a model with 7 rows, 6 columns and 15 nonzeros
+Model fingerprint: 0x55e33a07
+Variable types: 3 continuous, 3 integer (3 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 7e+01]
+  Objective range  [2e+00, 7e+02]
+  Bounds range     [0e+00, 0e+00]
+  RHS range        [1e+02, 1e+02]
+Presolve removed 2 rows and 1 columns
+Presolve time: 0.00s
+Presolved: 5 rows, 5 columns, 13 nonzeros
+Variable types: 0 continuous, 5 integer (3 binary)
+Found heuristic solution: objective 1400.0000000
+
+Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 1035.00000    0    1 1400.00000 1035.00000  26.1%     -    0s
+     0     0 1105.71429    0    1 1400.00000 1105.71429  21.0%     -    0s
+*    0     0               0    1320.0000000 1320.00000  0.00%     -    0s
+
+Explored 1 nodes (5 simplex iterations) in 0.00 seconds (0.00 work units)
+Thread count was 32 (of 32 available processors)
+
+Solution count 2: 1320 1400
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%
+
+User-callback calls 371, time in user-callback 0.00 sec
+objective_value(model.inner) = 1320.0
+Vector(value.(model.inner[:x])) = [-0.0, 1.0, 1.0]
+Vector(value.(model.inner[:y])) = [0.0, 60.0, 40.0]
+
+
+

Running the code above, we found that the optimal solution for our small problem instance costs $1320. It is achieved by keeping generators 2 and 3 online, producing 60 MW and 40 MW of power, respectively.

+
+
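As a quick sanity check, the reported cost can be recovered directly from the input data: generators 2 and 3 contribute \(c^\text{fix}_2 + c^\text{var}_2 \cdot 60 + c^\text{fix}_3 + c^\text{var}_3 \cdot 40 = 600 + 2.0 \cdot 60 + 500 + 2.5 \cdot 40 = 1320\), and their combined output of \(60 + 40\) megawatts matches the demand of 100 MW.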

Notes

+
    +
  • In the example above, JumpModel is just a thin wrapper around a standard JuMP model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as optimize. For more control, and to query the solution, the original JuMP model can be accessed through model.inner, as illustrated above.

  • +
+
+
+
+

3.4. Generating training data

+

Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as is often the case in practice, it could make sense to spend some time upfront generating a trained solver, which can optimize new instances (similar to the ones it was trained on) faster.

+

In the following, we will use MIPLearn to train machine learning models that are able to predict the optimal solution for instances that follow a given probability distribution, and then provide this predicted solution to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:

+
+
[4]:
+
+
+
using Distributions
+using Random
+
+function random_uc_data(; samples::Int, n::Int, seed::Int=42)::Vector
+    Random.seed!(seed)
+    pmin = rand(Uniform(100_000, 500_000), n)
+    pmax = pmin .* rand(Uniform(2, 2.5), n)
+    cfix = pmin .* rand(Uniform(100, 125), n)
+    cvar = rand(Uniform(1.25, 1.50), n)
+    return [
+        UnitCommitmentData(
+            sum(pmax) * rand(Uniform(0.5, 0.75)),
+            pmin,
+            pmax,
+            cfix,
+            cvar,
+        )
+        for _ in 1:samples
+    ]
+end;
+
+
+
+

In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.

+

Now we generate 500 instances of this problem, each one with 500 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete JuMP models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files uc/train/00001.jld2, uc/train/00002.jld2, etc., which contain the input data in JLD2 format.

+
+
[5]:
+
+
+
data = random_uc_data(samples=500, n=500)
+train_data = write_jld2(data[1:450], "uc/train")
+test_data = write_jld2(data[451:500], "uc/test");
+
+
+
+

Finally, we use BasicCollector to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files uc/train/00001.h5, uc/train/00002.h5, etc. The optimization models are also exported to compressed MPS files uc/train/00001.mps.gz, uc/train/00002.mps.gz, etc.

+
+
[6]:
+
+
+
using Suppressor
+@suppress_out begin
+    bc = BasicCollector()
+    bc.collect(train_data, build_uc_model)
+end
+
+
+
+
+
+
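The collected files can be inspected with any HDF5 reader. For example, here is a minimal sketch using the HDF5.jl package (an extra dependency, not required by MIPLearn) to list the datasets stored for the first training instance:

using HDF5
h5open("uc/train/00001.h5", "r") do h5
    @show keys(h5)  # names of the datasets collected for this instance
end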

3.5. Training and solving test instances

+

With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using \(k\)-nearest neighbors. More specifically, the strategy is to:

+
    +
  1. Memorize the optimal solutions of all training instances;

  2. +
  3. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;

  4. +
  5. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously (see the sketch after this list).

  6. +
  7. Provide this partial solution to the solver as a warm start.

  8. +
+
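To make the merging step concrete, here is a minimal illustration of the unanimous-agreement rule in plain Julia (independent of MIPLearn; the memorized solutions below are made up):

# Three memorized solutions for four binary variables (hypothetical values).
solutions = [
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [1, 0, 1, 1],
]
# Fix a variable only if every solution agrees on its value; otherwise leave it free.
merged = [all(s[i] == solutions[1][i] for s in solutions) ? solutions[1][i] : missing
          for i in 1:4]
# merged == [1, 0, missing, missing]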

This simple strategy can be implemented as shown below, using MemorizingPrimalComponent. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide.

+
+
[7]:
+
+
+
# Load kNN classifier from Scikit-Learn
+using PyCall
+KNeighborsClassifier = pyimport("sklearn.neighbors").KNeighborsClassifier
+
+# Build the MIPLearn component
+comp = MemorizingPrimalComponent(
+    clf=KNeighborsClassifier(n_neighbors=25),
+    extractor=H5FieldsExtractor(
+        instance_fields=["static_constr_rhs"],
+    ),
+    constructor=MergeTopSolutions(25, [0.0, 1.0]),
+    action=SetWarmStart(),
+);
+
+
+
+

Having defined the ML strategy, we next construct LearningSolver, train the ML component and optimize one of the test instances.

+
+
[8]:
+
+
+
solver_ml = LearningSolver(components=[comp])
+solver_ml.fit(train_data)
+solver_ml.optimize(test_data[1], build_uc_model);
+
+
+
+
+
+
+
+
+Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
+
+CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
+Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0xd2378195
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [0e+00, 0e+00]
+  RHS range        [2e+08, 2e+08]
+
+User MIP start produced solution with objective 1.02165e+10 (0.00s)
+Loaded user MIP start with objective 1.02165e+10
+
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+
+Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 1.0216e+10    0    1 1.0217e+10 1.0216e+10  0.01%     -    0s
+
+Explored 1 nodes (510 simplex iterations) in 0.01 seconds (0.00 work units)
+Thread count was 32 (of 32 available processors)
+
+Solution count 1: 1.02165e+10
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 1.021651058978e+10, best bound 1.021567971257e+10, gap 0.0081%
+
+User-callback calls 169, time in user-callback 0.00 sec
+
+
+

By examining the solve log above, specifically the line Loaded user MIP start with objective..., we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but with a solver which does not apply any ML strategies. Note that our previously-defined component is not provided.

+
+
[9]:
+
+
+
solver_baseline = LearningSolver(components=[])
+solver_baseline.fit(train_data)
+solver_baseline.optimize(test_data[1], build_uc_model);
+
+
+
+
+
+
+
+
+Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
+
+CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
+Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0xb45c0594
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [0e+00, 0e+00]
+  RHS range        [2e+08, 2e+08]
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+Found heuristic solution: objective 1.071463e+10
+
+Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 1.0216e+10    0    1 1.0715e+10 1.0216e+10  4.66%     -    0s
+H    0     0                    1.025162e+10 1.0216e+10  0.35%     -    0s
+     0     0 1.0216e+10    0    1 1.0252e+10 1.0216e+10  0.35%     -    0s
+H    0     0                    1.023090e+10 1.0216e+10  0.15%     -    0s
+H    0     0                    1.022335e+10 1.0216e+10  0.07%     -    0s
+H    0     0                    1.022281e+10 1.0216e+10  0.07%     -    0s
+H    0     0                    1.021753e+10 1.0216e+10  0.02%     -    0s
+H    0     0                    1.021752e+10 1.0216e+10  0.02%     -    0s
+     0     0 1.0216e+10    0    3 1.0218e+10 1.0216e+10  0.02%     -    0s
+     0     0 1.0216e+10    0    1 1.0218e+10 1.0216e+10  0.02%     -    0s
+H    0     0                    1.021651e+10 1.0216e+10  0.01%     -    0s
+
+Explored 1 nodes (764 simplex iterations) in 0.03 seconds (0.02 work units)
+Thread count was 32 (of 32 available processors)
+
+Solution count 7: 1.02165e+10 1.02175e+10 1.02228e+10 ... 1.07146e+10
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 1.021651058978e+10, best bound 1.021573363741e+10, gap 0.0076%
+
+User-callback calls 204, time in user-callback 0.00 sec
+
+
+

In the log above, the MIP start line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems.

+
+
+

3.6. Accessing the solution

+

In the example above, we used LearningSolver.optimize together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver.

+
+
[10]:
+
+
+
data = random_uc_data(samples=1, n=500)[1]
+model = build_uc_model(data)
+solver_ml.optimize(model)
+@show objective_value(model.inner);
+
+
+
+
+
+
+
+
+Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
+
+CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
+Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x974a7fba
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 1e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [0e+00, 0e+00]
+  RHS range        [2e+08, 2e+08]
+
+User MIP start produced solution with objective 9.86729e+09 (0.00s)
+User MIP start produced solution with objective 9.86675e+09 (0.00s)
+User MIP start produced solution with objective 9.86654e+09 (0.01s)
+User MIP start produced solution with objective 9.8661e+09 (0.01s)
+Loaded user MIP start with objective 9.8661e+09
+
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+
+Root relaxation: objective 9.865344e+09, 510 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 9.8653e+09    0    1 9.8661e+09 9.8653e+09  0.01%     -    0s
+
+Explored 1 nodes (510 simplex iterations) in 0.02 seconds (0.01 work units)
+Thread count was 32 (of 32 available processors)
+
+Solution count 4: 9.8661e+09 9.86654e+09 9.86675e+09 9.86729e+09
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 9.866096485614e+09, best bound 9.865343669936e+09, gap 0.0076%
+
+User-callback calls 182, time in user-callback 0.00 sec
+objective_value(model.inner) = 9.866096485613789e9
+
+
+
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/0.4/tutorials/getting-started-pyomo.ipynb b/0.4/tutorials/getting-started-pyomo.ipynb new file mode 100644 index 00000000..e109ddb5 --- /dev/null +++ b/0.4/tutorials/getting-started-pyomo.ipynb @@ -0,0 +1,858 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "6b8983b1", + "metadata": { + "tags": [] + }, + "source": [ + "# Getting started (Pyomo)\n", + "\n", + "## Introduction\n", + "\n", + "**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n", + "\n", + "1. Install the Python/Pyomo version of MIPLearn\n", + "2. Model a simple optimization problem using Pyomo\n", + "3. Generate training data and train the ML models\n", + "4. Use the ML models together Gurobi to solve new instances\n", + "\n", + "
\n", + "Note\n", + " \n", + "The Python/Pyomo version of MIPLearn is currently only compatible with Pyomo persistent solvers (Gurobi, CPLEX and XPRESS). For broader solver compatibility, see the Julia/JuMP version of the package.\n", + "
\n", + "\n", + "
\n", + "Warning\n", + " \n", + "MIPLearn is still in early development stage. If run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n", + " \n", + "
\n" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "02f0a927", + "metadata": {}, + "source": [ + "## Installation\n", + "\n", + "MIPLearn is available in two versions:\n", + "\n", + "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", + "- Julia version, compatible with the JuMP modeling language.\n", + "\n", + "In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n", + "\n", + "```\n", + "$ pip install MIPLearn==0.3\n", + "```\n", + "\n", + "In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems.\n", + "\n", + "```\n", + "$ pip install 'gurobipy>=10,<10.1'\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "a14e4550", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Note\n", + " \n", + "In the code above, we install specific version of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Python projects.\n", + " \n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "16b86823", + "metadata": {}, + "source": [ + "## Modeling a simple optimization problem\n", + "\n", + "To illustrate how can MIPLearn be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n", + "\n", + "Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power should each generator produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ to $p^\\text{max}_i$ megawatts of power, and it costs the company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand $d$ (in megawatts).\n", + "\n", + "This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power does $g_i$ produce. The problem is then given by:" + ] + }, + { + "cell_type": "markdown", + "id": "f12c3702", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{align}\n", + "\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n", + "\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n", + "& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n", + "& \\sum_{i=1}^n y_i = d \\\\\n", + "& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n", + "& y_i \\geq 0 & i=1,\\ldots,n\n", + "\\end{align}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "be3989ed", + "metadata": {}, + "source": [ + "
\n", + "\n", + "Note\n", + "\n", + "We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "a5fd33f6", + "metadata": {}, + "source": [ + "Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Python and Pyomo. We start by defining a data class `UnitCommitmentData`, which holds all the input data." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "22a67170-10b4-43d3-8708-014d91141e73", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:00:03.278853343Z", + "start_time": "2023-06-06T20:00:03.123324067Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "from dataclasses import dataclass\n", + "from typing import List\n", + "\n", + "import numpy as np\n", + "\n", + "\n", + "@dataclass\n", + "class UnitCommitmentData:\n", + " demand: float\n", + " pmin: List[float]\n", + " pmax: List[float]\n", + " cfix: List[float]\n", + " cvar: List[float]" + ] + }, + { + "cell_type": "markdown", + "id": "29f55efa-0751-465a-9b0a-a821d46a3d40", + "metadata": {}, + "source": [ + "Next, we write a `build_uc_model` function, which converts the input data into a concrete Pyomo model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a compressed pickle file containing this data." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "2f67032f-0d74-4317-b45c-19da0ec859e9", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:00:45.890126754Z", + "start_time": "2023-06-06T20:00:45.637044282Z" + } + }, + "outputs": [], + "source": [ + "import pyomo.environ as pe\n", + "from typing import Union\n", + "from miplearn.io import read_pkl_gz\n", + "from miplearn.solvers.pyomo import PyomoModel\n", + "\n", + "\n", + "def build_uc_model(data: Union[str, UnitCommitmentData]) -> PyomoModel:\n", + " if isinstance(data, str):\n", + " data = read_pkl_gz(data)\n", + "\n", + " model = pe.ConcreteModel()\n", + " n = len(data.pmin)\n", + " model.x = pe.Var(range(n), domain=pe.Binary)\n", + " model.y = pe.Var(range(n), domain=pe.NonNegativeReals)\n", + " model.obj = pe.Objective(\n", + " expr=sum(\n", + " data.cfix[i] * model.x[i] + data.cvar[i] * model.y[i] for i in range(n)\n", + " )\n", + " )\n", + " model.eq_max_power = pe.ConstraintList()\n", + " model.eq_min_power = pe.ConstraintList()\n", + " for i in range(n):\n", + " model.eq_max_power.add(model.y[i] <= data.pmax[i] * model.x[i])\n", + " model.eq_min_power.add(model.y[i] >= data.pmin[i] * model.x[i])\n", + " model.eq_demand = pe.Constraint(\n", + " expr=sum(model.y[i] for i in range(n)) == data.demand,\n", + " )\n", + " return PyomoModel(model, \"gurobi_persistent\")" + ] + }, + { + "cell_type": "markdown", + "id": "c22714a3", + "metadata": {}, + "source": [ + "At this point, we can already use Pyomo and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. 
To illustrate this, let us solve a small instance with three generators:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "2a896f47", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:01:10.993801745Z", + "start_time": "2023-06-06T20:01:10.887580927Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Restricted license - for non-production use only - expires 2024-10-28\n", + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", + "Model fingerprint: 0x15c7a953\n", + "Variable types: 3 continuous, 3 integer (3 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 7e+01]\n", + " Objective range [2e+00, 7e+02]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [1e+02, 1e+02]\n", + "Presolve removed 2 rows and 1 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 5 rows, 5 columns, 13 nonzeros\n", + "Variable types: 0 continuous, 5 integer (3 binary)\n", + "Found heuristic solution: objective 1400.0000000\n", + "\n", + "Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n", + " 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n", + "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", + "\n", + "Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 2: 1320 1400 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", + "WARNING: Cannot get reduced costs for MIP.\n", + "WARNING: Cannot get duals for MIP.\n", + "obj = 1320.0\n", + "x = [-0.0, 1.0, 1.0]\n", + "y = [0.0, 60.0, 40.0]\n" + ] + } + ], + "source": [ + "model = build_uc_model(\n", + " UnitCommitmentData(\n", + " demand=100.0,\n", + " pmin=[10, 20, 30],\n", + " pmax=[50, 60, 70],\n", + " cfix=[700, 600, 500],\n", + " cvar=[1.5, 2.0, 2.5],\n", + " )\n", + ")\n", + "\n", + "model.optimize()\n", + "print(\"obj =\", model.inner.obj())\n", + "print(\"x =\", [model.inner.x[i].value for i in range(3)])\n", + "print(\"y =\", [model.inner.y[i].value for i in range(3)])" + ] + }, + { + "cell_type": "markdown", + "id": "41b03bbc", + "metadata": {}, + "source": [ + "Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieve by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power." + ] + }, + { + "cell_type": "markdown", + "id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26", + "metadata": {}, + "source": [ + "
\n", + " \n", + "Notes\n", + " \n", + "- In the example above, `PyomoModel` is just a thin wrapper around a standard Pyomo model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original Pyomo model can be accessed through `model.inner`, as illustrated above. \n", + "- To use CPLEX or XPRESS, instead of Gurobi, replace `gurobi_persistent` by `cplex_persistent` or `xpress_persistent` in the `build_uc_model`. Note that only persistent Pyomo solvers are currently supported. Pull requests adding support for other types of solver are very welcome.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "cf60c1dd", + "metadata": {}, + "source": [ + "## Generating training data\n", + "\n", + "Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as it is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n", + "\n", + "In the following, we will use MIPLearn to train machine learning models that is able to predict the optimal solution for instances that follow a given probability distribution, then it will provide this predicted solution to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "5eb09fab", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:02:27.324208900Z", + "start_time": "2023-06-06T20:02:26.990044230Z" + } + }, + "outputs": [], + "source": [ + "from scipy.stats import uniform\n", + "from typing import List\n", + "import random\n", + "\n", + "\n", + "def random_uc_data(samples: int, n: int, seed: int = 42) -> List[UnitCommitmentData]:\n", + " random.seed(seed)\n", + " np.random.seed(seed)\n", + " pmin = uniform(loc=100_000.0, scale=400_000.0).rvs(n)\n", + " pmax = pmin * uniform(loc=2.0, scale=2.5).rvs(n)\n", + " cfix = pmin * uniform(loc=100.0, scale=25.0).rvs(n)\n", + " cvar = uniform(loc=1.25, scale=0.25).rvs(n)\n", + " return [\n", + " UnitCommitmentData(\n", + " demand=pmax.sum() * uniform(loc=0.5, scale=0.25).rvs(),\n", + " pmin=pmin,\n", + " pmax=pmax,\n", + " cfix=cfix,\n", + " cvar=cvar,\n", + " )\n", + " for _ in range(samples)\n", + " ]" + ] + }, + { + "cell_type": "markdown", + "id": "3a03a7ac", + "metadata": {}, + "source": [ + "In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n", + "\n", + "Now we generate 500 instances of this problem, each one with 50 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete Pyomo models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00000.pkl.gz`, `uc/train/00001.pkl.gz`, etc., which contain the input data in compressed (gzipped) pickle format." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "6156752c", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:03:04.782830561Z", + "start_time": "2023-06-06T20:03:04.530421396Z" + } + }, + "outputs": [], + "source": [ + "from miplearn.io import write_pkl_gz\n", + "\n", + "data = random_uc_data(samples=500, n=500)\n", + "train_data = write_pkl_gz(data[0:450], \"uc/train\")\n", + "test_data = write_pkl_gz(data[450:500], \"uc/test\")" + ] + }, + { + "cell_type": "markdown", + "id": "b17af877", + "metadata": {}, + "source": [ + "Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files `uc/train/00000.h5`, `uc/train/00001.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00000.mps.gz`, `uc/train/00001.mps.gz`, etc." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "7623f002", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:03:35.571497019Z", + "start_time": "2023-06-06T20:03:25.804104036Z" + } + }, + "outputs": [], + "source": [ + "from miplearn.collectors.basic import BasicCollector\n", + "\n", + "bc = BasicCollector()\n", + "bc.collect(train_data, build_uc_model, n_jobs=4)" + ] + }, + { + "cell_type": "markdown", + "id": "c42b1be1-9723-4827-82d8-974afa51ef9f", + "metadata": {}, + "source": [ + "## Training and solving test instances" + ] + }, + { + "cell_type": "markdown", + "id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c", + "metadata": {}, + "source": [ + "With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using $k$-nearest neighbors. More specifically, the strategy is to:\n", + "\n", + "1. Memorize the optimal solutions of all training instances;\n", + "2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n", + "3. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.\n", + "4. Provide this partial solution to the solver as a warm start.\n", + "\n", + "This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:20.497772794Z", + "start_time": "2023-06-06T20:05:20.484821405Z" + } + }, + "outputs": [], + "source": [ + "from sklearn.neighbors import KNeighborsClassifier\n", + "from miplearn.components.primal.actions import SetWarmStart\n", + "from miplearn.components.primal.mem import (\n", + " MemorizingPrimalComponent,\n", + " MergeTopSolutions,\n", + ")\n", + "from miplearn.extractors.fields import H5FieldsExtractor\n", + "\n", + "comp = MemorizingPrimalComponent(\n", + " clf=KNeighborsClassifier(n_neighbors=25),\n", + " extractor=H5FieldsExtractor(\n", + " instance_fields=[\"static_constr_rhs\"],\n", + " ),\n", + " constructor=MergeTopSolutions(25, [0.0, 1.0]),\n", + " action=SetWarmStart(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94", + "metadata": {}, + "source": [ + "Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:22.672002339Z", + "start_time": "2023-06-06T20:05:21.447466634Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x5e67c6ee\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", + " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.290621916e+09\n", + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x4a7cfe2b\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "\n", + "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n", + "Loaded user MIP start with objective 8.29153e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n", + " 0 0 8.2907e+09 0 2 8.2915e+09 8.2907e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 1\n", + " Flow cover: 2\n", + "\n", + "Explored 1 nodes (565 simplex iterations) in 0.04 seconds (0.01 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 1: 8.29153e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%\n", + "WARNING: Cannot get reduced costs for MIP.\n", + "WARNING: Cannot get duals for MIP.\n" + ] + }, + { + "data": { + "text/plain": [ + "{}" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from miplearn.solvers.learning import LearningSolver\n", + "\n", + "solver_ml = LearningSolver(components=[comp])\n", + "solver_ml.fit(train_data)\n", + "solver_ml.optimize(test_data[0], build_uc_model)" + ] + }, + { + "cell_type": "markdown", + "id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0", + "metadata": {}, + "source": [ + "By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but a solver which does not apply any ML strategies. Note that our previously-defined component is not provided." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "2ff391ed-e855-4228-aa09-a7641d8c2893", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:05:46.969575966Z", + "start_time": "2023-06-06T20:05:46.420803286Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x5e67c6ee\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. Time\n", + " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", + " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.290621916e+09\n", + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x8a0f9587\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Found heuristic solution: objective 9.757128e+09\n", + "\n", + "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2906e+09 0 1 9.7571e+09 8.2906e+09 15.0% - 0s\n", + "H 0 0 8.298273e+09 8.2906e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2983e+09 8.2907e+09 0.09% - 0s\n", + " 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", + "H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", + " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", + "H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Gomory: 2\n", + " MIR: 1\n", + "\n", + "Explored 1 nodes (1025 simplex iterations) in 0.12 seconds (0.03 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.291465302389e+09, 
best bound 8.290781665333e+09, gap 0.0082%\n", + "WARNING: Cannot get reduced costs for MIP.\n", + "WARNING: Cannot get duals for MIP.\n" + ] + }, + { + "data": { + "text/plain": [ + "{}" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "solver_baseline = LearningSolver(components=[])\n", + "solver_baseline.fit(train_data)\n", + "solver_baseline.optimize(test_data[0], build_uc_model)" + ] + }, + { + "cell_type": "markdown", + "id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266", + "metadata": {}, + "source": [ + "In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems." + ] + }, + { + "cell_type": "markdown", + "id": "eec97f06", + "metadata": { + "tags": [] + }, + "source": [ + "## Accessing the solution\n", + "\n", + "In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "67a6cd18", + "metadata": { + "ExecuteTime": { + "end_time": "2023-06-06T20:06:26.913448568Z", + "start_time": "2023-06-06T20:06:26.169047914Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x2dfe4e1c\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "Presolve removed 1000 rows and 500 columns\n", + "Presolve time: 0.00s\n", + "Presolved: 1 rows, 500 columns, 500 nonzeros\n", + "\n", + "Iteration Objective Primal Inf. Dual Inf. 
Time\n", + " 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n", + " 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n", + "\n", + "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", + "Optimal objective 8.253596777e+09\n", + "Set parameter QCPDual to value 1\n", + "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", + "\n", + "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", + "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", + "\n", + "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", + "Model fingerprint: 0x0f0924a1\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "Coefficient statistics:\n", + " Matrix range [1e+00, 2e+06]\n", + " Objective range [1e+00, 6e+07]\n", + " Bounds range [1e+00, 1e+00]\n", + " RHS range [3e+08, 3e+08]\n", + "\n", + "User MIP start produced solution with objective 8.25814e+09 (0.00s)\n", + "User MIP start produced solution with objective 8.25512e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", + "User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", + "Loaded user MIP start with objective 8.25459e+09\n", + "\n", + "Presolve time: 0.00s\n", + "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", + "Variable types: 500 continuous, 500 integer (500 binary)\n", + "\n", + "Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", + "\n", + " Nodes | Current Node | Objective Bounds | Work\n", + " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", + "\n", + " 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n", + " 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n", + "\n", + "Cutting planes:\n", + " Cover: 1\n", + " MIR: 2\n", + " StrongCG: 1\n", + " Flow cover: 1\n", + "\n", + "Explored 1 nodes (575 simplex iterations) in 0.09 seconds (0.01 work units)\n", + "Thread count was 20 (of 20 available processors)\n", + "\n", + "Solution count 4: 8.25459e+09 8.25483e+09 8.25512e+09 8.25814e+09 \n", + "\n", + "Optimal solution found (tolerance 1.00e-04)\n", + "Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n", + "WARNING: Cannot get reduced costs for MIP.\n", + "WARNING: Cannot get duals for MIP.\n", + "obj = 8254590409.96973\n", + " x = [1.0, 1.0, 0.0, 1.0, 1.0]\n", + " y = [935662.0949262811, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n" + ] + } + ], + "source": [ + "data = random_uc_data(samples=1, n=500)[0]\n", + "model = build_uc_model(data)\n", + "solver_ml.optimize(model)\n", + "print(\"obj =\", model.inner.obj())\n", + "print(\" x =\", [model.inner.x[i].value for i in range(5)])\n", + "print(\" y =\", [model.inner.y[i].value for i in range(5)])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5593d23a-83bd-4e16-8253-6300f5e3f63b", + "metadata": {}, + "outputs": [], + 
"source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/0.4/tutorials/getting-started-pyomo/index.html b/0.4/tutorials/getting-started-pyomo/index.html new file mode 100644 index 00000000..da776dfe --- /dev/null +++ b/0.4/tutorials/getting-started-pyomo/index.html @@ -0,0 +1,909 @@ + + + + + + + + 1. Getting started (Pyomo) — MIPLearn 0.4 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + + + + +
+ + +
+
+ +
+ +
+

1. Getting started (Pyomo)

+
+

1.1. Introduction

+

MIPLearn is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:

+
    +
  1. Install the Python/Pyomo version of MIPLearn

  2. +
  3. Model a simple optimization problem using Pyomo

  4. +
  5. Generate training data and train the ML models

  6. +
  7. Use the ML models together with Gurobi to solve new instances

  8. +
+
+

Note

+

The Python/Pyomo version of MIPLearn is currently only compatible with Pyomo persistent solvers (Gurobi, CPLEX and XPRESS). For broader solver compatibility, see the Julia/JuMP version of the package.

+
+
+

Warning

+

MIPLearn is still in an early stage of development. If you run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!

+
+
+
+

1.2. Installation

+

MIPLearn is available in two versions:

+
    +
  • Python version, compatible with the Pyomo and Gurobipy modeling languages,

  • +
  • Julia version, compatible with the JuMP modeling language.

  • +
+

In this tutorial, we will demonstrate how to install and use the Python/Pyomo version of the package. The first step is to install Python 3.8+ on your computer. See the official Python website for instructions. After Python is installed, we install MIPLearn using pip:

+
$ pip install MIPLearn==0.3
+
+
+

In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also installs a demo license for Gurobi, which should be able to solve the small optimization problems in this tutorial. A full license is required for solving larger-scale problems.

+
$ pip install 'gurobipy>=10,<10.1'
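To confirm that gurobipy was installed correctly, a quick optional check along the following lines can be run; the exact version tuple printed depends on your installation:

# Optional sanity check: confirm that gurobipy is importable and print its version.
import gurobipy
print(gurobipy.gurobi.version())  # e.g. (10, 0, 3)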
+
+
+
+

Note

+

In the code above, we install specific versions of all packages to ensure that this tutorial keeps working in the future, even if newer (and possibly incompatible) versions of the packages are released. This is generally a recommended practice for Python projects.

+
+
+
+

1.3. Modeling a simple optimization problem

+

To illustrate how MIPLearn can be used, we will model and solve a small optimization problem related to power systems. The problem we discuss below is a simplification of the unit commitment problem, a practical optimization problem solved daily by electric grid operators around the world.

+

Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power each generator should produce. More specifically, assume that the company owns \(n\) generators, denoted by \(g_1, \ldots, g_n\). Each generator can either be online or offline. An online generator \(g_i\) can produce between \(p^\text{min}_i\) and \(p^\text{max}_i\) megawatts of power, and it costs the company \(c^\text{fix}_i + c^\text{var}_i y_i\), where \(y_i\) is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power produced needs to be exactly equal to the total demand \(d\) (in megawatts).

+

This simple problem can be modeled as a mixed-integer linear optimization problem as follows. For each generator \(g_i\), let \(x_i \in \{0,1\}\) be a decision variable indicating whether \(g_i\) is online, and let \(y_i \geq 0\) be a decision variable indicating how much power \(g_i\) produces. The problem is then given by:

+
+\[\begin{split}\begin{align} +\text{minimize } \quad & \sum_{i=1}^n \left( c^\text{fix}_i x_i + c^\text{var}_i y_i \right) \\ +\text{subject to } \quad & y_i \leq p^\text{max}_i x_i & i=1,\ldots,n \\ +& y_i \geq p^\text{min}_i x_i & i=1,\ldots,n \\ +& \sum_{i=1}^n y_i = d \\ +& x_i \in \{0,1\} & i=1,\ldots,n \\ +& y_i \geq 0 & i=1,\ldots,n +\end{align}\end{split}\]
+
+

Note

+

We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.

+
+

Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Python and Pyomo. We start by defining a data class UnitCommitmentData, which holds all the input data.

+
+
[1]:
+
+
+
from dataclasses import dataclass
+from typing import List
+
+import numpy as np
+
+
+@dataclass
+class UnitCommitmentData:
+    demand: float
+    pmin: List[float]
+    pmax: List[float]
+    cfix: List[float]
+    cvar: List[float]
+
+
+
+

Next, we write a build_uc_model function, which converts the input data into a concrete Pyomo model. The function accepts UnitCommitmentData, the data structure we previously defined, or the path to a compressed pickle file containing this data.

+
+
[2]:
+
+
+
import pyomo.environ as pe
+from typing import Union
+from miplearn.io import read_pkl_gz
+from miplearn.solvers.pyomo import PyomoModel
+
+
+def build_uc_model(data: Union[str, UnitCommitmentData]) -> PyomoModel:
+    if isinstance(data, str):
+        data = read_pkl_gz(data)
+
+    model = pe.ConcreteModel()
+    n = len(data.pmin)
+    model.x = pe.Var(range(n), domain=pe.Binary)
+    model.y = pe.Var(range(n), domain=pe.NonNegativeReals)
+    model.obj = pe.Objective(
+        expr=sum(
+            data.cfix[i] * model.x[i] + data.cvar[i] * model.y[i] for i in range(n)
+        )
+    )
+    model.eq_max_power = pe.ConstraintList()
+    model.eq_min_power = pe.ConstraintList()
+    for i in range(n):
+        model.eq_max_power.add(model.y[i] <= data.pmax[i] * model.x[i])
+        model.eq_min_power.add(model.y[i] >= data.pmin[i] * model.x[i])
+    model.eq_demand = pe.Constraint(
+        expr=sum(model.y[i] for i in range(n)) == data.demand,
+    )
+    return PyomoModel(model, "gurobi_persistent")
+
+
+
+

At this point, we can already use Pyomo and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators:

+
+
[3]:
+
+
+
model = build_uc_model(
+    UnitCommitmentData(
+        demand=100.0,
+        pmin=[10, 20, 30],
+        pmax=[50, 60, 70],
+        cfix=[700, 600, 500],
+        cvar=[1.5, 2.0, 2.5],
+    )
+)
+
+model.optimize()
+print("obj =", model.inner.obj())
+print("x =", [model.inner.x[i].value for i in range(3)])
+print("y =", [model.inner.y[i].value for i in range(3)])
+
+
+
+
+
+
+
+
+Restricted license - for non-production use only - expires 2024-10-28
+Set parameter QCPDual to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 7 rows, 6 columns and 15 nonzeros
+Model fingerprint: 0x15c7a953
+Variable types: 3 continuous, 3 integer (3 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 7e+01]
+  Objective range  [2e+00, 7e+02]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [1e+02, 1e+02]
+Presolve removed 2 rows and 1 columns
+Presolve time: 0.00s
+Presolved: 5 rows, 5 columns, 13 nonzeros
+Variable types: 0 continuous, 5 integer (3 binary)
+Found heuristic solution: objective 1400.0000000
+
+Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 1035.00000    0    1 1400.00000 1035.00000  26.1%     -    0s
+     0     0 1105.71429    0    1 1400.00000 1105.71429  21.0%     -    0s
+*    0     0               0    1320.0000000 1320.00000  0.00%     -    0s
+
+Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 2: 1320 1400
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%
+WARNING: Cannot get reduced costs for MIP.
+WARNING: Cannot get duals for MIP.
+obj = 1320.0
+x = [-0.0, 1.0, 1.0]
+y = [0.0, 60.0, 40.0]
+
+
+

Running the code above, we found that the optimal solution for our small problem instance costs $1320. It is achieved by keeping generators 2 and 3 online, which produce 60 MW and 40 MW of power, respectively.
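As a quick sanity check, we can recompute this objective value by hand from the solution reported in the log above (x = [0, 1, 1], y = [0, 60, 40]):

# Recompute the objective value from the solution reported in the log above.
cfix = [700, 600, 500]
cvar = [1.5, 2.0, 2.5]
x = [0, 1, 1]
y = [0, 60, 40]
print(sum(cfix[i] * x[i] + cvar[i] * y[i] for i in range(3)))  # 1320.0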

+
+

Notes

+
    +
  • In the example above, PyomoModel is just a thin wrapper around a standard Pyomo model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as optimize. For more control, and to query the solution, the original Pyomo model can be accessed through model.inner, as illustrated above.

  • +
  • To use CPLEX or XPRESS instead of Gurobi, replace gurobi_persistent by cplex_persistent or xpress_persistent in build_uc_model. Note that only persistent Pyomo solvers are currently supported. Pull requests adding support for other types of solvers are very welcome.

  • +
+
+
+
+

1.4. Generating training data

+

Although Gurobi can solve the small example above in a fraction of a second, it becomes slower on larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as is often the case in practice, it can make sense to spend some time upfront generating a trained solver, which can optimize new instances (similar to the ones it was trained on) faster.

+

In the following, we will use MIPLearn to train machine learning models that predict the optimal solution for instances drawn from a given probability distribution; this predicted solution is then provided to Gurobi as a warm start. Before we can train the models, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances from historical data. In this tutorial, we construct them using a random instance generator:

+
+
[4]:
+
+
+
from scipy.stats import uniform
+from typing import List
+import random
+
+
+def random_uc_data(samples: int, n: int, seed: int = 42) -> List[UnitCommitmentData]:
+    random.seed(seed)
+    np.random.seed(seed)
+    pmin = uniform(loc=100_000.0, scale=400_000.0).rvs(n)
+    pmax = pmin * uniform(loc=2.0, scale=2.5).rvs(n)
+    cfix = pmin * uniform(loc=100.0, scale=25.0).rvs(n)
+    cvar = uniform(loc=1.25, scale=0.25).rvs(n)
+    return [
+        UnitCommitmentData(
+            demand=pmax.sum() * uniform(loc=0.5, scale=0.25).rvs(),
+            pmin=pmin,
+            pmax=pmax,
+            cfix=cfix,
+            cvar=cvar,
+        )
+        for _ in range(samples)
+    ]
+
+
+
+

In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.
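The following quick check (a small illustration, not part of the original tutorial) confirms that instances produced by random_uc_data share the same cost and capacity data and differ only in their demand:

# Illustration: generate a few tiny instances and compare their fields.
samples = random_uc_data(samples=3, n=5)
print([round(s.demand, 1) for s in samples])   # three different demands
print(samples[0].pmin is samples[1].pmin)      # True: the same generator data is reused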

+

Now we generate 500 instances of this problem, each one with 500 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold the entire training data, as well as the concrete Pyomo models, in memory. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files uc/train/00000.pkl.gz, uc/train/00001.pkl.gz, etc., which contain the input data in compressed (gzipped) pickle format.

+
+
[5]:
+
+
+
from miplearn.io import write_pkl_gz
+
+data = random_uc_data(samples=500, n=500)
+train_data = write_pkl_gz(data[0:450], "uc/train")
+test_data = write_pkl_gz(data[450:500], "uc/test")
+
+
+
+

Finally, we use BasicCollector to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files uc/train/00000.h5, uc/train/00001.h5, etc. The optimization models are also exported to compressed MPS files uc/train/00000.mps.gz, uc/train/00001.mps.gz, etc.

+
+
[6]:
+
+
+
from miplearn.collectors.basic import BasicCollector
+
+bc = BasicCollector()
+bc.collect(train_data, build_uc_model, n_jobs=4)
+
+
+
+
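After the collector finishes, the contents of the generated HDF5 files can be inspected directly with the h5py package. The snippet below is only a sketch; it assumes that fields such as static_constr_rhs (used later in this tutorial) are stored as top-level datasets, which is an internal detail of MIPLearn and may vary between versions:

# Peek inside one of the generated training files using h5py.
import h5py

with h5py.File("uc/train/00000.h5", "r") as h5:
    print(sorted(h5.keys())[:5])        # a few of the stored datasets
    print(h5["static_constr_rhs"][:5])  # e.g. first few constraint right-hand sides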
+
+

1.5. Training and solving test instances

+

With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using \(k\)-nearest neighbors. More specifically, the strategy is to:

+
    +
  1. Memorize the optimal solutions of all training instances;

  2. +
  3. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;

  4. +
  5. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.

  6. +
  7. Provide this partial solution to the solver as a warm start.

  8. +
+

This simple strategy can be implemented as shown below, using MemorizingPrimalComponent. For more advanced strategies, and for the use of more advanced classifiers, see the user guide.
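To build intuition for the merging step described above, here is a small standalone sketch of the "merge by unanimity" idea. It is purely illustrative and is not the actual implementation of MergeTopSolutions, which is configured in the next cell:

# Illustrative sketch only: combine several 0/1 solutions into a partial solution,
# fixing a variable only when all neighboring solutions agree on its value.
import numpy as np

def merge_unanimous(solutions):
    S = np.vstack(solutions)               # one row per neighboring solution
    frac_one = S.mean(axis=0)              # fraction of solutions setting each variable to 1
    partial = np.full(S.shape[1], np.nan)  # NaN = leave the variable free in the warm start
    partial[frac_one == 0.0] = 0.0
    partial[frac_one == 1.0] = 1.0
    return partial

print(merge_unanimous([[1, 0, 1], [1, 0, 0]]))  # [ 1.  0. nan]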

+
+
[7]:
+
+
+
from sklearn.neighbors import KNeighborsClassifier
+from miplearn.components.primal.actions import SetWarmStart
+from miplearn.components.primal.mem import (
+    MemorizingPrimalComponent,
+    MergeTopSolutions,
+)
+from miplearn.extractors.fields import H5FieldsExtractor
+
+comp = MemorizingPrimalComponent(
+    clf=KNeighborsClassifier(n_neighbors=25),
+    extractor=H5FieldsExtractor(
+        instance_fields=["static_constr_rhs"],
+    ),
+    constructor=MergeTopSolutions(25, [0.0, 1.0]),
+    action=SetWarmStart(),
+)
+
+
+
+

Having defined the ML strategy, we next construct LearningSolver, train the ML component and optimize one of the test instances.

+
+
[8]:
+
+
+
from miplearn.solvers.learning import LearningSolver
+
+solver_ml = LearningSolver(components=[comp])
+solver_ml.fit(train_data)
+solver_ml.optimize(test_data[0], build_uc_model)
+
+
+
+
+
+
+
+
+Set parameter QCPDual to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x5e67c6ee
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+Presolve removed 1000 rows and 500 columns
+Presolve time: 0.00s
+Presolved: 1 rows, 500 columns, 500 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    6.6166537e+09   5.648803e+04   0.000000e+00      0s
+       1    8.2906219e+09   0.000000e+00   0.000000e+00      0s
+
+Solved in 1 iterations and 0.01 seconds (0.00 work units)
+Optimal objective  8.290621916e+09
+Set parameter QCPDual to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x4a7cfe2b
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+
+User MIP start produced solution with objective 8.29153e+09 (0.01s)
+User MIP start produced solution with objective 8.29153e+09 (0.01s)
+Loaded user MIP start with objective 8.29153e+09
+
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+
+Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 8.2906e+09    0    1 8.2915e+09 8.2906e+09  0.01%     -    0s
+     0     0 8.2907e+09    0    3 8.2915e+09 8.2907e+09  0.01%     -    0s
+     0     0 8.2907e+09    0    1 8.2915e+09 8.2907e+09  0.01%     -    0s
+     0     0 8.2907e+09    0    2 8.2915e+09 8.2907e+09  0.01%     -    0s
+
+Cutting planes:
+  Gomory: 1
+  Flow cover: 2
+
+Explored 1 nodes (565 simplex iterations) in 0.04 seconds (0.01 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 1: 8.29153e+09
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%
+WARNING: Cannot get reduced costs for MIP.
+WARNING: Cannot get duals for MIP.
+
+
+
+
[8]:
+
+
+
+
+{}
+
+
+

By examining the solve log above, specifically the line Loaded user MIP start with objective..., we can see that MIPLearn was able to construct an initial solution that turned out to be very close to the optimal solution of the problem. Now let us repeat the code above, but with a solver that does not apply any ML strategies. Note that our previously-defined component is not provided.

+
+
[9]:
+
+
+
solver_baseline = LearningSolver(components=[])
+solver_baseline.fit(train_data)
+solver_baseline.optimize(test_data[0], build_uc_model)
+
+
+
+
+
+
+
+
+Set parameter QCPDual to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x5e67c6ee
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+Presolve removed 1000 rows and 500 columns
+Presolve time: 0.00s
+Presolved: 1 rows, 500 columns, 500 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    6.6166537e+09   5.648803e+04   0.000000e+00      0s
+       1    8.2906219e+09   0.000000e+00   0.000000e+00      0s
+
+Solved in 1 iterations and 0.01 seconds (0.00 work units)
+Optimal objective  8.290621916e+09
+Set parameter QCPDual to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x8a0f9587
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+Found heuristic solution: objective 9.757128e+09
+
+Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 8.2906e+09    0    1 9.7571e+09 8.2906e+09  15.0%     -    0s
+H    0     0                    8.298273e+09 8.2906e+09  0.09%     -    0s
+     0     0 8.2907e+09    0    4 8.2983e+09 8.2907e+09  0.09%     -    0s
+     0     0 8.2907e+09    0    1 8.2983e+09 8.2907e+09  0.09%     -    0s
+     0     0 8.2907e+09    0    4 8.2983e+09 8.2907e+09  0.09%     -    0s
+H    0     0                    8.293980e+09 8.2907e+09  0.04%     -    0s
+     0     0 8.2907e+09    0    5 8.2940e+09 8.2907e+09  0.04%     -    0s
+     0     0 8.2907e+09    0    1 8.2940e+09 8.2907e+09  0.04%     -    0s
+     0     0 8.2907e+09    0    2 8.2940e+09 8.2907e+09  0.04%     -    0s
+     0     0 8.2908e+09    0    1 8.2940e+09 8.2908e+09  0.04%     -    0s
+     0     0 8.2908e+09    0    4 8.2940e+09 8.2908e+09  0.04%     -    0s
+     0     0 8.2908e+09    0    4 8.2940e+09 8.2908e+09  0.04%     -    0s
+H    0     0                    8.291465e+09 8.2908e+09  0.01%     -    0s
+
+Cutting planes:
+  Gomory: 2
+  MIR: 1
+
+Explored 1 nodes (1025 simplex iterations) in 0.12 seconds (0.03 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%
+WARNING: Cannot get reduced costs for MIP.
+WARNING: Cannot get duals for MIP.
+
+
+
+
[9]:
+
+
+
+
+{}
+
+
+

In the log above, the MIP start line is missing, and Gurobi had to start from a significantly inferior initial solution. The solver was still able to find the optimal solution in the end, but it had to rely on its own internal heuristic procedures. In this example, because the optimization problems are very small, there was almost no difference in running time, but the difference can be significant for larger problems.
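To quantify this difference, one could time both solvers on the same test instance, along the following lines. This is only a sketch; for meaningful measurements, use larger instances and average over several runs:

# Rough timing comparison between the ML-enhanced and the baseline solver.
import time

for name, solver in [("with ML", solver_ml), ("baseline", solver_baseline)]:
    start = time.perf_counter()
    solver.optimize(test_data[1], build_uc_model)
    print(f"{name}: {time.perf_counter() - start:.2f} s")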

+
+
+

1.6. Accessing the solution

+

In the examples above, we used LearningSolver.optimize together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in memory, using our trained solver.

+
+
[10]:
+
+
+
data = random_uc_data(samples=1, n=500)[0]
+model = build_uc_model(data)
+solver_ml.optimize(model)
+print("obj =", model.inner.obj())
+print(" x =", [model.inner.x[i].value for i in range(5)])
+print(" y =", [model.inner.y[i].value for i in range(5)])
+
+
+
+
+
+
+
+
+Set parameter QCPDual to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x2dfe4e1c
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+Presolve removed 1000 rows and 500 columns
+Presolve time: 0.00s
+Presolved: 1 rows, 500 columns, 500 nonzeros
+
+Iteration    Objective       Primal Inf.    Dual Inf.      Time
+       0    6.5917580e+09   5.627453e+04   0.000000e+00      0s
+       1    8.2535968e+09   0.000000e+00   0.000000e+00      0s
+
+Solved in 1 iterations and 0.01 seconds (0.00 work units)
+Optimal objective  8.253596777e+09
+Set parameter QCPDual to value 1
+Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
+
+CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]
+Thread count: 10 physical cores, 20 logical processors, using up to 20 threads
+
+Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
+Model fingerprint: 0x0f0924a1
+Variable types: 500 continuous, 500 integer (500 binary)
+Coefficient statistics:
+  Matrix range     [1e+00, 2e+06]
+  Objective range  [1e+00, 6e+07]
+  Bounds range     [1e+00, 1e+00]
+  RHS range        [3e+08, 3e+08]
+
+User MIP start produced solution with objective 8.25814e+09 (0.00s)
+User MIP start produced solution with objective 8.25512e+09 (0.01s)
+User MIP start produced solution with objective 8.25483e+09 (0.01s)
+User MIP start produced solution with objective 8.25483e+09 (0.01s)
+User MIP start produced solution with objective 8.25483e+09 (0.01s)
+User MIP start produced solution with objective 8.25459e+09 (0.01s)
+User MIP start produced solution with objective 8.25459e+09 (0.01s)
+Loaded user MIP start with objective 8.25459e+09
+
+Presolve time: 0.00s
+Presolved: 1001 rows, 1000 columns, 2500 nonzeros
+Variable types: 500 continuous, 500 integer (500 binary)
+
+Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)
+
+    Nodes    |    Current Node    |     Objective Bounds      |     Work
+ Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
+
+     0     0 8.2536e+09    0    1 8.2546e+09 8.2536e+09  0.01%     -    0s
+     0     0 8.2537e+09    0    3 8.2546e+09 8.2537e+09  0.01%     -    0s
+     0     0 8.2537e+09    0    1 8.2546e+09 8.2537e+09  0.01%     -    0s
+     0     0 8.2537e+09    0    4 8.2546e+09 8.2537e+09  0.01%     -    0s
+     0     0 8.2537e+09    0    4 8.2546e+09 8.2537e+09  0.01%     -    0s
+     0     0 8.2538e+09    0    4 8.2546e+09 8.2538e+09  0.01%     -    0s
+     0     0 8.2538e+09    0    5 8.2546e+09 8.2538e+09  0.01%     -    0s
+     0     0 8.2538e+09    0    6 8.2546e+09 8.2538e+09  0.01%     -    0s
+
+Cutting planes:
+  Cover: 1
+  MIR: 2
+  StrongCG: 1
+  Flow cover: 1
+
+Explored 1 nodes (575 simplex iterations) in 0.09 seconds (0.01 work units)
+Thread count was 20 (of 20 available processors)
+
+Solution count 4: 8.25459e+09 8.25483e+09 8.25512e+09 8.25814e+09
+
+Optimal solution found (tolerance 1.00e-04)
+Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%
+WARNING: Cannot get reduced costs for MIP.
+WARNING: Cannot get duals for MIP.
+obj = 8254590409.96973
+ x = [1.0, 1.0, 0.0, 1.0, 1.0]
+ y = [935662.0949262811, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]
+
+
+
+
[ ]:
+
+
+

+
+
+
+
+
+ + +
+ + + + +
+
+
+
+

+ + © Copyright 2020-2023, UChicago Argonne, LLC.
+

+
+
+
+ + +
+
+ + + + + + \ No newline at end of file diff --git a/index.html b/index.html index 60f3e8ed..3270ba9b 100644 --- a/index.html +++ b/index.html @@ -1 +1 @@ - +