
This code is part of my Ph.D. research. The objective is to test the best chosen hybrid partitions with the silhouette coefficient. This is the original version of HPML, i.e., a version without any type of chaining (a separate version of HPML performs both internal and external chaining).

cissagatto/HPML.D.padrao


STANDARD HPML

This code is part of my PhD research at PPG-CC/DC/UFSCar, in collaboration with Katholieke Universiteit Leuven Campus Kulak Kortrijk in Belgium. It tests the best hybrid partition chosen (with any of the criterion methods) using CLUS or Random Forests.

FLOW DESIGN

How to cite

@misc{Gatto2023,
  author = {Gatto, E. C.},
  title = {Standard Version of Hybrid Partitions for Multi-Label Classification},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/cissagatto/Standard-HPML}}
}

Source Code

This source code consists of an R project to be used in the RStudio IDE and the following R scripts:

  1. libraries.R
  2. utils.R
  3. misc.R
  4. test-clus-maf1.R
  5. test-clus-mif1.R
  6. test-clus-silho.R
  7. test-rf-maf1.R (not implemented)
  8. test-rf-mif1.R (not implemented)
  9. test-rf-silho.R
  10. run-clus.R
  11. run-rf.R
  12. start.R
  13. jobs.R
  14. config-files.R

Preparing your experiment

STEP 1

A file called datasets-original.csv must be in the root project directory. This file is used to read information about the datasets used in the code. This .csv file includes 90 multi-label datasets. If you want to use another dataset, please add the following information about it to the file:

| Parameter | Status | Description |
|---|---|---|
| Id | mandatory | Integer number to identify the dataset |
| Name | mandatory | Dataset name (please follow the benchmark) |
| Domain | optional | Dataset domain |
| Instances | mandatory | Total number of dataset instances |
| Attributes | mandatory | Total number of dataset attributes |
| Labels | mandatory | Total number of labels in the label space |
| Inputs | mandatory | Total number of dataset input attributes |
| Cardinality | optional | ** |
| Density | optional | ** |
| Labelsets | optional | ** |
| Single | optional | ** |
| Max.freq | optional | ** |
| Mean.IR | optional | ** |
| Scumble | optional | ** |
| TCS | optional | ** |
| AttStart | mandatory | Column number where the attribute space begins (1) |
| AttEnd | mandatory | Column number where the attribute space ends |
| LabelStart | mandatory | Column number where the label space begins |
| LabelEnd | mandatory | Column number where the label space ends |
| Distinct | optional | ** (2) |
| xn | mandatory | Value for dimension X of the Kohonen map |
| yn | mandatory | Value for dimension Y of the Kohonen map |
| gridn | mandatory | X times Y; the Kohonen map must be square |
| max.neigbors | mandatory | The maximum number of neighbors, given by LABELS - 1 |

1 - Because it is the first column, the number is always 1.

2 - Click here for an explanation of each property.
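As an illustration, a complete row for a new dataset could look like the one below. All values are hypothetical (loosely modeled on a small benchmark) and must be replaced by the real properties of your dataset; note that gridn = xn times yn and max.neigbors = Labels - 1:

```
Id,Name,Domain,Instances,Attributes,Labels,Inputs,Cardinality,Density,Labelsets,Single,Max.freq,Mean.IR,Scumble,TCS,AttStart,AttEnd,LabelStart,LabelEnd,Distinct,xn,yn,gridn,max.neigbors
91,toy-emotions,music,593,78,6,72,1.87,0.31,27,4,0.29,1.48,0.01,0.05,1,72,73,78,27,3,3,9,5
```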

STEP 2

To run this experiment you need the X-Fold Cross-Validation files, which must be compressed in tar.gz format. You can download these files, with 10 folds, ready for each multi-label dataset by clicking here. For a new dataset, in addition to including it in the datasets-original.csv file, you must also run this code here. In the repository in question you will find all the instructions needed to generate the files in the format required for this experiment. The tar.gz file can be placed in any directory on your computer or server. The absolute path of the file should be passed as a parameter in the configuration file that will be read by the start.R script. The dataset folds will be loaded from there.
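The expected layout can be sketched with a toy example. Everything below (paths and fold file names) is an assumption for illustration only; the real fold files come from the cross-validation code mentioned above:

```shell
# Illustrative sketch: pack a dataset's folds into a single tar.gz,
# the format this experiment expects. Paths and file names are assumed.
set -e
mkdir -p /tmp/hpml-demo/birds
touch /tmp/hpml-demo/birds/birds-Split-Tr-1.csv \
      /tmp/hpml-demo/birds/birds-Split-Ts-1.csv
# Compress the whole dataset folder into one tar.gz archive
tar -czf /tmp/hpml-demo/birds.tar.gz -C /tmp/hpml-demo birds
# Inspect the archive; its absolute path goes into the configuration file
tar -tzf /tmp/hpml-demo/birds.tar.gz
```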

STEP 3

You will need the best partitions previously chosen by one of the following codes:

You must use the results generated in the OUTPUT directory of that source code. They must be compressed into a TAR.GZ file and placed in a directory on your computer. The absolute path of this directory must be passed as a parameter in the configuration file. Please see the configuration file example in this source code (config-files directory).

STEP 4

You must have all the Java, Python, and R packages required to execute this code installed on your machine or server. This code does not provide any type of automatic package installation!

You can use the Conda environment that I created to perform this experiment. Use the command below to recreate the environment on your computer:

conda env create -f AmbienteTeste.yaml

See more information about Conda environments here

You can also run this code using the Apptainer container that I use to run it in a SLURM cluster. Please check this tutorial (in Portuguese) to see how to do that.

STEP 5

To run this code you will need a configuration file saved in .csv format with the following information:

| Config | Value |
|---|---|
| Dataset_Path | Absolute path to the directory where the dataset's tar.gz is stored |
| Temporary_Path | Absolute path to the directory where temporary processing will be performed * |
| Partitions_Path | Absolute path to the directory where the best partitions are stored |
| Implementation | Must be "clus" or "rf" |
| Dendrogram | The linkage metric that was used to build the dendrogram: single, complete, average, ward.D, ward.D2, or mcQuitty |
| Similarity | Must be "jaccard", "rogers", or another similarity measure |
| Criteria | Must be "maf1" to test the best partition chosen with Macro-F1, "mif1" for the one chosen with Micro-F1, or "silho" for the one chosen with the Silhouette Coefficient |
| Dataset_Name | Dataset name according to the datasets-original.csv file |
| Number_Dataset | Dataset number according to the datasets-original.csv file |
| Number_Folds | Number of folds used in cross-validation |
| Number_Cores | Number of cores for parallel processing |

\* Use directories like /dev/shm, tmp, or scratch here.

You can save configuration files wherever you want. The absolute path will be passed as a command line argument.
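For instance, a configuration file for testing the GpositiveGO dataset with Random Forests, Jaccard similarity, ward.D2 linkage, and the silhouette criterion might look like the sketch below. All paths and the dataset number are illustrative, not taken from the real config-files directory:

```
Config,Value
Dataset_Path,/home/user/Datasets
Temporary_Path,/dev/shm/hpml
Partitions_Path,/home/user/Best-Partitions
Implementation,rf
Dendrogram,ward.D2
Similarity,jaccard
Criteria,silho
Dataset_Name,GpositiveGO
Number_Dataset,21
Number_Folds,10
Number_Cores,10
```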

Software Requirements

This code was developed in RStudio Version 2022.07.2+576 "Spotted Wakerobin" Release (e7373ef832b49b2a9b88162cfe7eac5f22c40b34, 2022-09-06) for Ubuntu Bionic.

Hardware Requirements

This code can be executed either sequentially or in parallel; running it in parallel is highly recommended. The number of cores is set via the Number_Cores parameter in the configuration file. If Number_Cores = 1, the code will run sequentially. In our experiments we used 10 cores; for reproducibility, we recommend that you also use ten cores. This code was tested with the birds dataset on the following machine:

System:

Kernel: 5.4.0-136-generic x86_64 bits: 64 compiler: gcc v: 9.4.0. Desktop: Cinnamon 5.2.7 wm: muffin dm: LightDM. Distro: Linux Mint 20.3 Una. Base: Ubuntu 20.04 focal

Machine:

Type: Laptop System: LENOVO product: 82CG v: IdeaPad Gaming 3 15IMH05 serial: Chassis: type: 10 v: IdeaPad Gaming 3 15IMH05 serial: Mobo: LENOVO model: LNVNB161216 v: SDK0R33126 WIN serial: UEFI: LENOVO v: EGCN33WW date: 12/24/2020

CPU:

Topology: 6-Core model: Intel Core i7-10750H bits: 64 type: MT MCP arch: N/A | L2 cache: 12.0 MiB | flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 62399 | Speed: 4287 MHz min/max: 800/5000 MHz Core speeds (MHz): 1: 4264 2: 4240 3: 4254 | 4: 4240 5: 4273 6: 4275 7: 4267 8: 4223 9: 4275 10: 4226 11: 4264 12:4282

Then the experiment was executed in a cluster at UFSCar.

Results

The results are stored in the OUTPUT directory.

RUN

To run the code, open the terminal, enter the ~/Standard-HPML/R directory, and type

Rscript tbhp.R [absolute_path_to_config_file]

Example:

Rscript tbhp.R "~/Standard-HPML/config-files/rf/jaccard/ward.D2/silho/srf-GpositiveGO.csv"

DOWNLOAD RESULTS

  • Generated Hybrid Partitions
  • Best Hybrid Partitions
  • Test

Acknowledgment