diff --git a/DocBook/docs.xml b/DocBook/docs.xml
index 90428f50..f9834558 100644
--- a/DocBook/docs.xml
+++ b/DocBook/docs.xml
@@ -22,7 +22,7 @@
Release Notes
- The following are the release notes for HammerDB v4.2.
+ The following are the release notes for HammerDB v4.3.
+ Nomenclature Change
@@ -67,28 +67,29 @@
- MariaDB added as database separate from MySQL
-
- Prior to version 4.2 MariaDB was supported using the MySQL
- workload, this required the separate installation of the MySQL 8.0
- client. From version 4.2 MariaDB has been added as a separate database
- using the MariaDB client meaning no separate installation is required.
- At version 4.2 the MariaDB workload implementation is the same as the
- MySQL workload, however these are expected to diverge over time by
- taking advantage of different database features.
+ MySQL and MariaDB TPROC-H indexing
+
+ From version 4.3 the default MySQL and MariaDB TPROC-H
+ configuration has been modified to use the InnoDB storage engine
+ instead of MyISAM and to remove additional indexes on the LINEITEM
+ table not compliant with the TPC-H specification. Consequently
+ performance of TPROC-H on MySQL and MariaDB in version 4.3 is not
+ comparable with version 4.2 and earlier.
- Increase of TPROC-C Schema Size Limits to 100,000
- warehouses
-
- From version 4.2 the TPROC-C Schema Size Limits have been
- increased from 5,000 for the GUI/CLI schema build and 30,000 for the
- datagen feature to 100,000 for both enabling the building of larger
- schemas. This enables the testing of larger scaled workload scenarios
- however it is recommended to read the documentation carefully to
- prevent over-provisioning of the schema for the intended test and
- system.
+ sslmode support for PostgreSQL
+
+ From version 4.3 PostgreSQL connections support SSL. This
+ requires OpenSSL on both client and server and SSL support to
+ be enabled in PostgreSQL at compile time. When enabled,
+ HammerDB uses the "prefer" option as default, meaning that when
+ selected, if SSL is supported and available, HammerDB will use it. When
+ unselected "disable" is used and SSL is not used even if supported.
+ Where SSL is enabled and used, performance should not be directly
+ compared to HammerDB v4.2 and earlier where SSL is not
+ supported.
@@ -172,24 +173,24 @@ DIAG_DDE_ENABLED=FALSE
- HammerDB v4.2 New Features
+ HammerDB v4.3 New Features
- HammerDB v4.2 New Features are all referenced to GitHub issues,
- where more details for each new feature and related pull requests can be
- found here: https://github.com/TPC-Council/HammerDB/issues
+ HammerDB v4.3 New Features are all referenced to GitHub
+ Issues and Pull Requests, where more details for each new feature
+ and related pull requests can be found here:
- [TPC-Council#253] Xtprof timer can retain old data
+ [TPC-Council#279] Add performance metrics for PostgreSQL
- [TPC-Council#250] Update to increase TPROC-C Schema Size Limits
- for Issue
+ [TPC-Council#278] Add CLI to Web Service with SQLite repository
+ for output, timing and transactions
- [TPC-Council#248] Fix for Timing Data Times Out
+ [TPC-Council#274] Update for #264 [TPROC-H] MySQL create FKs after
+ table load
- [TPC-Council#242] Fix for Issue #242 can't read tc_flog no such
- variable
-
- [TPC-Council#54] Add MariaDB as a separate database
+ [TPC-Council#271] Add sslmode support for PostgreSQL
+ database
@@ -279,7 +280,7 @@ DIAG_DDE_ENABLED=FALSE
MariaDB
- 10.2 / 10.3 / 10.4 / 10.5
+ 10.2 / 10.3 / 10.4 / 10.5 / 10.6
@@ -304,12 +305,12 @@ DIAG_DDE_ENABLED=FALSE
which can be downloaded at no cost from Microsoft and run as
follows:
- fciv -both HammerDB-4.2-Win-x86-64-Setup.exe
+ fciv -both HammerDB-4.3-Win-x86-64-Setup.exe
+ and on Linux with md5sum and sha1sum as shown:
- md5sum HammerDB-4.2-Linux.tar.gz
-sha1sum HammerDB-4.2-Linux.tar.gz
+ md5sum HammerDB-4.3-Linux.tar.gz
+sha1sum HammerDB-4.3-Linux.tar.gz
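The same verification can be scripted. A minimal sketch in Python, computing the same digests as md5sum and sha1sum; the file name is the example from above and the published digests must be taken from the download page:

```python
import hashlib

def file_digests(path: str) -> tuple[str, str]:
    """Return (md5, sha1) hex digests of a file, like md5sum/sha1sum."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        # Read in chunks so large tarballs do not need to fit in memory
        for chunk in iter(lambda: f.read(65536), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

# Compare the results against the digests published on the download page:
# md5, sha1 = file_digests("HammerDB-4.3-Linux.tar.gz")
```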
@@ -507,10 +508,10 @@ sha1sum HammerDB-4.2-Linux.tar.gz
To install from the tar.gz run the command
- tar -zxvf HammerDB-4.2.tar.gz
+ tar -zxvf HammerDB-4.3.tar.gz
+ This will extract HammerDB into a directory named
- HammerDB-4.2.
+ HammerDB-4.3.
@@ -557,7 +558,7 @@ sha1sum HammerDB-4.2-Linux.tar.gz
and type librarycheck.
- HammerDB CLI v4.2
+ HammerDB CLI v4.3
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
@@ -578,11 +579,32 @@ Success ... loaded library mariatcl for MariaDB
hammerdb>
- in the example it can be seen that the libraries for all databases
- were found and loaded. The following table illustrates the first level
- library that HammerDB requires however there may be additional
- dependencies. Refer to the Test Matrix to determine which database
- versions HammerDB was built against. On Windows the librarycheck is also provided in the HammerDB Web Service.
+
+ HammerDB Web Service v4.3
+Copyright (C) 2003-2021 Steve Shaw
+Type "help" for a list of commands
+The xml is well-formed, applying configuration
+Initialized SQLite on-disk database C:/Users/Steve/AppData/Local/Temp/hammer.DB using existing tables
+Starting HammerDB Web Service on port 8080
+Listening for HTTP requests on TCP port 8080
+hammerws>librarycheck
+[
+ "success ... loaded library Oratcl for Oracle",
+ "success ... loaded library tdbc::odbc for MSSQLServer",
+ "success ... loaded library db2tcl for Db2",
+ "success ... loaded library mysqltcl for MySQL",
+ "success ... loaded library Pgtcl for PostgreSQL",
+ "success ... loaded library mariatcl for MariaDB"
+]
+
+hammerws>
+
+ in the examples it can be seen that the libraries for all
+ databases were found and loaded. The following table illustrates the
+ first level library that HammerDB requires; however there may be
+ additional dependencies. Refer to the Test Matrix to determine which
+ database versions HammerDB was built against. On Windows the Dependency Walker
Utility can be used to determine the dependencies and on Linux
the command ldd.
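Because the web service returns JSON, the librarycheck output shown above can also be consumed programmatically. A minimal sketch, assuming the response body has been captured in a string:

```python
import json

# Response body as returned by hammerws librarycheck (example from above)
response = '''[
 "success ... loaded library Oratcl for Oracle",
 "success ... loaded library tdbc::odbc for MSSQLServer",
 "success ... loaded library db2tcl for Db2",
 "success ... loaded library mysqltcl for MySQL",
 "success ... loaded library Pgtcl for PostgreSQL",
 "success ... loaded library mariatcl for MariaDB"
]'''

# Parse the JSON array and check that every library loaded
messages = json.loads(response)
all_loaded = all(m.startswith("success") for m in messages)
print(all_loaded)  # True
```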
@@ -859,11 +881,11 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}
MySQL
- HammerDB version 4.2 (4.1/4.0 and version 3.3) has been built
- and tested against a MySQL 8.0 client installation, hammerDB version
- 3.0-3.2 has been built against MySQL 5.7. On Linux this means that
- HammerDB will require a MySQL client library called
- libmysqlclient.so.21 for HammerDB version 4.1,4.0 and 3.3 and
+ HammerDB version 4.3 (4.2/4.1/4.0 and version 3.3) has been
+ built and tested against a MySQL 8.0 client installation, while HammerDB
+ version 3.0-3.2 has been built against MySQL 5.7. On Linux this means
+ that HammerDB will require a MySQL client library called
+ libmysqlclient.so.21 for HammerDB version 4.2,4.1,4.0 and 3.3 and
libmysqlclient.so.20 for version 3.2 and earlier. This client library
needs to be referenced in the LD_LIBRARY_PATH as shown previously in
this section. On Windows libmysqll.dll is required and should be
@@ -873,7 +895,7 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}
MariaDB
- HammerDB version 4.2 has been built and tested against MariaDB
+ HammerDB version 4.3 has been built and tested against MariaDB
10.5.10 client installation. On Linux this means that HammerDB will
require a MariaDB client library called libmariadb.so.3. This client
library needs to be referenced in the LD_LIBRARY_PATH as shown
@@ -2905,7 +2927,7 @@ GO
- TRPOC-C MySQL Database
+ TPROC-C MySQL Database
+ The MySQL Database is the database that will be
created containing the TPROC-C schema creation. There must
@@ -3204,6 +3226,17 @@ GO
PostgreSQL stored procedures instead of functions.
+
+ Prefer PostgreSQL SSL Mode
+
+ If both the PostgreSQL client and server have been
+ compiled to support SSL, this option when selected enables
+ "prefer" SSL and when unselected sets this option to
+ "disable" for connections. Other valid options such as
+ "allow" or "require" can be set directly in the build and
+ driver scripts if required.
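The option described above maps directly onto the libpq sslmode values. A minimal illustrative sketch; the helper function name is hypothetical and not part of HammerDB:

```python
# Illustrative sketch only: maps the "Prefer PostgreSQL SSL Mode" checkbox
# to a libpq sslmode value; the function name is hypothetical.
VALID_SSLMODES = ("disable", "allow", "prefer", "require")

def sslmode_for(prefer_selected: bool) -> str:
    """Return the sslmode value used for the connection."""
    # Selected -> "prefer" (use SSL if available); unselected -> "disable"
    return "prefer" if prefer_selected else "disable"

print(sslmode_for(True))   # prefer
print(sslmode_for(False))  # disable
```

Other values such as "allow" or "require" would be set directly in the build and driver scripts as the text notes.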
+
+
Number of Warehouses
@@ -3922,10 +3955,13 @@ tpcc=# select relname, n_tup_ins - n_tup_del as rowcount from pg_stat_user_table
Time Profile
- This option should be selected in conjunction with having
- enabled output to the logfile. When selected client side time
- profiling will be conducted for the first active virtual user
- and output written to the logfile.
+ When this option is selected client side time profiling
+ will be conducted. There are two time profilers available, the
+ xtprof profiler and the etprof profiler. The profiler chosen can
+ be configured in the generic.xml file: xtprof profiles all
+ virtual users and prints the timing output at the end of a run,
+ whereas etprof times only the first virtual user but
+ prints output at 10 second intervals.
@@ -6345,6 +6381,92 @@ HammerDB Metric Agent active @ id 20376 hostname CRANE (Ctrl-C to Exit)
+
+
+ PostgreSQL Database Metrics
+
+ When the PostgreSQL Database is selected on both Windows and Linux
+ an additional option is available to connect to the PostgreSQL Database
+ and display detailed performance metrics. PostgreSQL metrics use the
+ Active Session History feature and therefore the pg_stat_statements and
+ pg_sentinel extensions must be installed and operational with parameters
+ set in postgresql.conf as follows.
+
+ shared_preload_libraries = 'pg_stat_statements,pgsentinel'
+track_activity_query_size=2048
+pg_stat_statements.save=on
+pg_stat_statements.track=all
+pgsentinel_pgssh.enable = true
+pgsentinel_ash.pull_frequency = 1
+pgsentinel_ash.max_entries = 1000000
+
+ The PostgreSQL Metrics Options screen requires details of
+ superuser access to connect to the database.
+
+
+
+ When the metrics button is pressed HammerDB connects to the
+ database and verifies that the pg_active_session_history table is
+ installed and populated with data. If this is verified it displays
+ graphical information from the PostgreSQL Active Session History
+ detailing wait events. By default in embedded mode the PostgreSQL
+ Metrics will display the Active Session History Graph. For detailed
+ PostgreSQL Database Metrics the Notebook tab should be dragged out and
+ expanded to display in a separate window.
+
+
+
+ When displayed in a separate window, it is possible to make a
+ selection from the window and display the wait events related to that
+ period of time when the test was running. When the SQL_ID is selected
+ the buttons then enable the detailed viewing of SQL text, IO statistics
+ and SQL statistics related to that SQL.
+
+
+
+ When an event is selected the analysis shows details related to
+ that particular event for the time period selected. Selecting the
+ username shows the statistics related to that particular session.
+
+
+
+ As described in the Oracle metrics section the CPU Metrics button
+ displays the current standard HammerDB CPU Metrics display in an
+ embedded Window. To enable this functionality follow the guide for CPU
+ metrics earlier in this section.
+
@@ -7616,873 +7738,925 @@ waittocomplete
Web Service Interface (WS)
- In addition to the CLI there is a HTTP Web Service that provides the
- same commands as the CLI that can be accessed with a HTTP/REST client
- passing parameters and returning output in JSON format. The key difference
- from the configuration of the CLI is the addition of jobs. Under the web
- service output from schema builds or tests are stored in a SQLite database
- and retrieved at a later point using a job id. A rest interface has been
- provided in HammerDB for interacting with the web service using TCL,
- however this is not a necessity and although the examples in this section
- are given using TCL the web service can be driven with scripts written in
- any language. Additionally the huddle package has been provided for TCL to
- JSON formatting.
+ From version 4.3 the HammerDB HTTP Web Service has been enhanced to
+ include a CLI interface as well as an extended SQLite repository database
+ for all log output. The CLI interface translates the CLI commands to the
+ HTTP/REST client passing parameters and returning output in JSON format.
+ The key difference from the configuration of the standard CLI is the
+ addition of jobs. Under the web service, output from schema builds, tests,
+ timing, the transaction counter and configuration is stored in a SQLite
+ database and retrieved at a later point using a query based on a job
+ id.
+ Web Service Configuration
- There are 2 configuration parameters for the webservice in the
+ There are 2 configuration parameters for the web service in the
file generic.xml in the config directory, ws_port and sqlite_db. ws_port
defines the port on which the service will run and sqlite_db defines the
- location of the SQLite database file. By default an in-memory location
- is used. Alternatively the name of a file can be given or "TMP" or
- "TEMP" for HammerDB to find a temporary directory to use for the
- file.
+ location of the SQLite database file. By default a temporary file
+ location is used by specifying TMP. If :memory: is used an in-memory
+ SQLite database will be used, however the data in this location will not
+ be stored after the web service stops and is incompatible with
+ functionality such as time profiling; for this reason an on-disk
+ location is recommended.
+ <webservice>
<ws_port>8080</ws_port>
- <sqlite_db>:memory:</sqlite_db>
- </webservice>
-
+ <sqlite_db>TMP</sqlite_db>
+ </webservice>
Starting the Web Service and Help Screen
On starting the Web service with the hammerdbws command HammerDB
- will listen on the specified port for HTTP requests.
+ will return a hammerws prompt and also listen on the specified port for
+ HTTP requests. Navigating to the configured port in a browser returns
+ the home page.
+
+ The help command at the CLI prompt shows the available
+ commands.
- [oracle@vulture HammerDB-4.1]$ ./hammerdbws
-HammerDB Web Service v4.1
+ HammerDB Web Service v4.3
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
-Initialized new SQLite in-memory database
+Initialized new SQLite on-disk database C:/Users/Steve/AppData/Local/Temp/hammer.DB
Starting HammerDB Web Service on port 8080
Listening for HTTP requests on TCP port 8080
-
+hammerws>help
+HammerDB v4.3 WS Help Index
- Navigating to the configured port without further argument will
- return the help screen.
+Type "help command" for more details on specific commands below
- HammerDB Web Service
+ buildschema
+ clearscript
+ customscript
+ datagenrun
+ dbset
+ dgset
+ diset
+ jobs
+ librarycheck
+ loadscript
+ print
+ quit
+ runtimer
+ tcset
+ tcstart
+ tcstatus
+ tcstop
+ vucomplete
+ vucreate
+ vudestroy
+ vurun
+ vuset
+ vustatus
+ waittocomplete
-See the HammerDB Web Service Environment
-HAMMERDB REST/HTTP API
+hammerws>
+
+
+ hammerws supports all commands supported by the CLI except for the
+ remote mode commands switchmode and steprun at release v4.3.
+ Clicking on the Web Service API link returns the underlying interface.
+ The Web Service can be interacted with either through the CLI commands
+ or directly over HTTP. As noted, when using the CLI it will automatically
+ translate the command and call the local HTTP interface for you with the
+ API as shown.
+ HammerDB Web Service
+HammerDB API
GET db: Show the configured database.
get http://localhost:8080/print?db / get http://localhost:8080/db
-{
-"ora": "Oracle",
-"mssqls": "MSSQLServer",
-"db2": "Db2",
-"mysql": "MySQL",
-"pg": "PostgreSQL",
-"redis": "Redis"
-}
-
GET bm: Show the configured benchmark.
get http://localhost:8080/print?bm / get http://localhost:8080/bm
{"benchmark": "TPC-C"}
-
GET dict: Show the dictionary for the current database ie all active variables.
get http://localhost:8080/print?dict / http://localhost:8080/dict
-{
-"connection": {
-"system_user": "system",
-"system_password": "manager",
-"instance": "oracle",
-"rac": "0"
-},
-"tpcc": {
-"count_ware": "1",
-"num_vu": "1",
-"tpcc_user": "tpcc",
-"tpcc_pass": "tpcc",
-"tpcc_def_tab": "tpcctab",
-"tpcc_ol_tab": "tpcctab",
-"tpcc_def_temp": "temp",
-"partition": "false",
-"hash_clusters": "false",
-"tpcc_tt_compat": "false",
-"total_iterations": "1000000",
-"raiseerror": "false",
-"keyandthink": "false",
-"checkpoint": "false",
-"ora_driver": "test",
-"rampup": "2",
-"duration": "5",
-"allwarehouse": "false",
-"timeprofile": "false"
-}
-}
-
GET script: Show the loaded script.
get http://localhost:8080/print?script / http://localhost:8080/script
-{"script": "#!/usr/local/bin/tclsh8.6
-#TIMED AWR SNAPSHOT DRIVER SCRIPT##################################
-#THIS SCRIPT TO BE RUN WITH VIRTUAL USER OUTPUT ENABLED
-#EDITABLE OPTIONS##################################################
-set library Oratcl ;# Oracle OCI Library
-set total_iterations 1000000 ;# Number of transactions before logging off
-set RAISEERROR \"false\" ;# Exit script on Oracle error (true or false)
-set KEYANDTHINK \"false\" ;# Time for user thinking and keying (true or false)
-set CHECKPOINT \"false\" ;# Perform Oracle checkpoint when complete (true or false)
-set rampup 2; # Rampup time in minutes before first snapshot is taken
-set duration 5; # Duration in minutes before second AWR snapshot is taken
-set mode \"Local\" ;# HammerDB operational mode
-set timesten \"false\" ;# Database is TimesTen
-set systemconnect system/manager@oracle ;# Oracle connect string for system user
-set connect tpcc/new_password@oracle ;# Oracle connect string for tpc-c user
-#EDITABLE OPTIONS##################################################
-#LOAD LIBRARIES AND MODULES ….
-"}
-
GET vuconf: Show the virtual user configuration.
get http://localhost:8080/print?vuconf / http://localhost:8080/vuconf
-{
-"Virtual Users": "1",
-"User Delay(ms)": "500",
-"Repeat Delay(ms)": "500",
-"Iterations": "1",
-"Show Output": "1",
-"Log Output": "0",
-"Unique Log Name": "0",
-"No Log Buffer": "0",
-"Log Timestamps": "0"
-}
-
GET vucreate: Create the virtual users. Equivalent to the Virtual User Create option in the graphical interface. Use vucreated to see the number created, vustatus to see the status and vucomplete to see whether all active virtual users have finished the workload. A script must be loaded before virtual users can be created.
get http://localhost:8080/vucreate
-{"success": {"message": "4 Virtual Users Created"}}
-
GET vucreated: Show the number of virtual users created.
get http://localhost:8080/print?vucreated / get http://localhost:8080/vucreated
-{"Virtual Users created": "10"}
-
GET vustatus: Show the status of virtual users, status will be "WAIT IDLE" for virtual users that are created but not running a workload,"RUNNING" for virtual users that are running a workload, "FINISH SUCCESS" for virtual users that completed successfully or "FINISH FAILED" for virtual users that encountered an error.
get http://localhost:8080/print?vustatus / get http://localhost:8080/vustatus
-{"Virtual User status": "1 {WAIT IDLE} 2 {WAIT IDLE} 3 {WAIT IDLE} 4 {WAIT IDLE} 5 {WAIT IDLE} 6 {WAIT IDLE} 7 {WAIT IDLE} 8 {WAIT IDLE} 9 {WAIT IDLE} 10 {WAIT IDLE}"}
-
GET datagen: Show the datagen configuration
get http://localhost:8080/print?datagen / get http://localhost:8080/datagen
-{
-"schema": "TPC-C",
-"database": "Oracle",
-"warehouses": "1",
-"vu": "1",
-"directory": "/tmp\""
-}
-
GET vucomplete: Show if virtual users have completed. returns "true" or "false" depending on whether all virtual users that started a workload have completed regardless of whether the status was "FINISH SUCCESS" or "FINISH FAILED".
get http://localhost:8080/vucomplete
-{"Virtual Users complete": "true"}
-
GET vudestroy: Destroy the virtual users. Equivalent to the Destroy Virtual Users button in the graphical interface that replaces the Create Virtual Users button after virtual user creation.
get http://localhost:8080/vudestroy
-{"success": {"message": "vudestroy success"}}
-
GET loadscript: Load the script for the database and benchmark set with dbset and the dictionary variables set with diset. Use print?script to see the script that is loaded. Equivalent to loading a Driver Script in the Script Editor window in the graphical interface. Driver script must be set to timed for the script to be loaded. Test scripts should be run in the GUI environment.
get http://localhost:8080/loadscript
-{"success": {"message": "script loaded"}}
-
GET clearscript: Clears the script. Equivalent to the "Clear the Screen" button in the graphical interface.
get http://localhost:8080/clearscript
-{"success": {"message": "Script cleared"}}
-
GET vurun: Send the loaded script to the created virtual users for execution. Equivalent to the Run command in the graphical interface. Creates a job id associated with all output.
get http://localhost:8080/vurun
-{"success": {"message": "Running Virtual Users: JOBID=5CEFBFE658A103E253238363"}}
+GET buildschema: Runs the schema build for the database and benchmark selected with dbset and variables selected with diset. Equivalent to the Build command in the graphical interface. Creates a job id associated with all output.
+get http://localhost:8080/buildschema
-GET datagenrun: Run Data Generation. Equivalent to the Generate option in the graphical interface. Not supported in web service. Generate data using GUI or CLI.
+GET jobs: Show the job ids, configuration, output, status, results and timings of jobs created by buildschema and vurun. Job output is equivalent to the output viewed in the graphical interface or command line.
+get http://localhost:8080/jobs
+get http://localhost:8080/jobs?jobid=TEXT
+get http://localhost:8080/jobs?jobid=TEXT&bm
+get http://localhost:8080/jobs?jobid=TEXT&db
+get http://localhost:8080/jobs?jobid=TEXT&delete
+get http://localhost:8080/jobs?jobid=TEXT&dict
+get http://localhost:8080/jobs?jobid=TEXT&result
+get http://localhost:8080/jobs?jobid=TEXT&status
+get http://localhost:8080/jobs?jobid=TEXT&tcount
+get http://localhost:8080/jobs?jobid=TEXT×tamp
+get http://localhost:8080/jobs?jobid=TEXT&timing
+get http://localhost:8080/jobs?jobid=TEXT&timing&vuid=INTEGER
+get http://localhost:8080/jobs?jobid=TEXT&vu=INTEGER
+
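The jobs endpoint options above compose as simple query strings. A minimal sketch of building these URLs; the base URL matches the example port and the job id shown is a placeholder:

```python
# Sketch of composing jobs endpoint URLs; the jobid value is a placeholder.
BASE = "http://localhost:8080"

def jobs_url(jobid=None, option=None, **params):
    """Build a jobs query URL such as /jobs?jobid=...&timing&vuid=1."""
    if jobid is None:
        return f"{BASE}/jobs"            # list all jobs
    url = f"{BASE}/jobs?jobid={jobid}"   # output for one job
    if option:
        url += f"&{option}"              # e.g. status, result, timing
    for key, value in params.items():
        url += f"&{key}={value}"         # e.g. vuid=1
    return url

print(jobs_url())
print(jobs_url("5CEFA68458A103E273433333", "timing", vuid=1))
```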
+GET librarycheck: Attempts to load the vendor provided 3rd party library for all databases and reports whether the attempt was successful.
+get http://localhost:8080/librarycheck
+
+GET tcstart: Starts the Transaction Counter.
+get http://localhost:8080/tcstart
+
+GET tcstop: Stops the Transaction Counter.
+get http://localhost:8080/tcstop
+
+GET tcstatus: Checks the status of the Transaction Counter.
+get http://localhost:8080/tcstatus
+
+GET quit: Terminates the web service and reports a message to the console.
+get http://localhost:8080/quit
+POST dbset: Usage: dbset [db|bm] value. Sets the database (db) or benchmark (bm). Equivalent to the Benchmark Menu in the graphical interface. Database value is set by the database prefix in the XML configuration.
+post http://localhost:8080/dbset { "db": "ora" }
-GET buildschema: Runs the schema build for the database and benchmark selected with dbset and variables selected with diset. Equivalent to the Build command in the graphical interface. Creates a job id associated with all output.
-get http://localhost:8080/buildschema
-{"success": {"message": "Building 6 Warehouses with 4 Virtual Users, 3 active + 1 Monitor VU(dict value num_vu is set to 3): JOBID=5CEFA68458A103E273433333"}}
+POST diset: Usage: diset dict key value. Set the dictionary variables for the current database. Equivalent to the Schema Build and Driver Options windows in the graphical interface. Use print?dict to see what these variables are and diset to change.
+post http://localhost:8080/diset { "dict": "tpcc", "key": "duration", "value": "1" }
+POST vuset: Usage: vuset [vu|delay|repeat|iterations|showoutput|logtotemp|unique|nobuff|timestamps]. Configure the virtual user options. Equivalent to the Virtual User Options window in the graphical interface.
+post http://localhost:8080/vuset { "vu": "4" }
-GET jobs: Show the job ids, output, status and results of jobs created by buildschema and vurun. Job output is equivalent to the output viewed in the graphical interface or command line.
-GET http://localhost:8080/jobs: Show all job ids
-get http://localhost:8080/jobs
-[
-"5CEE889958A003E203838313",
-"5CEFA68458A103E273433333"
-]
-GET http://localhost:8080/jobs?jobid=TEXT: Show output for the specified job id.
-get http://localhost:8080/jobs?jobid=5CEFA68458A103E273433333
-[
-"0",
-"Ready to create a 6 Warehouse Oracle TPC-C schema
-in database VULPDB1 under user TPCC in tablespace TPCCTAB?",
-"0",
-"Vuser 1:RUNNING",
-"1",
-"Monitor Thread",
-"1",
-"CREATING TPCC SCHEMA",
-...
-"1",
-"TPCC SCHEMA COMPLETE",
-"0",
-"Vuser 1:FINISHED SUCCESS",
-"0",
-"ALL VIRTUAL USERS COMPLETE"
-]
-GET http://localhost:8080/jobs?jobid=TEXT&vu=INTEGER: Show output for the specified job id and virtual user.
-get http://localhost:8080/jobs?jobid=5CEFA68458A103E273433333&vu=1
-[
-"1",
-"Monitor Thread",
-"1",
-"CREATING TPCC SCHEMA",
-"1",
-"CREATING USER tpcc",
-"1",
-"CREATING TPCC TABLES",
-"1",
-"Loading Item",
-"1",
-"Loading Items - 50000",
-"1",
-"Loading Items - 100000",
-"1",
-"Item done",
-"1",
-"Monitoring Workers...",
-"1",
-"Workers: 3 Active 0 Done"
-]
-GET http://localhost:8080/jobs?jobid=TEXT&status: Show status for the specified job id. Equivalent to virtual user 0.
-get http://localhost:8080/jobs?jobid=5CEFA68458A103E273433333&status
-[
-"0",
-"Ready to create a 6 Warehouse Oracle TPC-C schema
-in database VULPDB1 under user TPCC in tablespace TPCCTAB?",
-"0",
-"Vuser 1:RUNNING",
-"0",
-"Vuser 2:RUNNING",
-"0",
-"Vuser 3:RUNNING",
-"0",
-"Vuser 4:RUNNING",
-"0",
-"Vuser 4:FINISHED SUCCESS",
-"0",
-"Vuser 3:FINISHED SUCCESS",
-"0",
-"Vuser 2:FINISHED SUCCESS",
-"0",
-"Vuser 1:FINISHED SUCCESS",
-"0",
-"ALL VIRTUAL USERS COMPLETE"
-]
-GET http://localhost:8080/jobs?jobid=TEXT&result: Show the test result for the specified job id. If job is not a test job such as build job then no result will be reported.
-get http://localhost:8080/jobs?jobid=5CEFA68458A103E273433333&result
-[
-"5CEFA68458A103E273433333",
-"Jobid has no test result"
-]
-GET http://localhost:8080/jobs?jobid=TEXT&delete: Delete all output for the specified jobid.
-get http://localhost:8080/jobs?jobid=5CEFA68458A103E273433333&delete
-{"success": {"message": "Deleted Jobid 5CEFA68458A103E273433333"}}
+POST tcset: Usage: tcset [refreshrate]. Set the Transaction Counter refresh rate.
+post http://localhost:8080/tcset { "refreshrate": "20" }
+POST dgset: Usage: dgset [vu|ware|directory]. Set the Datagen options. Equivalent to the Datagen Options dialog in the graphical interface.
+post http://localhost:8080/dgset { "directory": "/home/oracle" }
-GET killws: Terminates the webservice and reports message to the console.
-get http://localhost:8080/killws
-Shutting down HammerDB Web Service
+POST customscript: Load an external script. Equivalent to the "Open Existing File" button in the graphical interface. Script must be converted to JSON format before post.
+post http://localhost:8080/customscript { "script": "customscript"}
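As the help text notes, a custom script must be converted to JSON format before posting. A minimal sketch of building the request body in Python (the file name is illustrative):

```python
import json

def customscript_body(path: str) -> str:
    """Read a script file and wrap it as the JSON body for POST /customscript."""
    with open(path, encoding="utf-8") as f:
        script = f.read()
    # json.dumps handles the quoting and escaping of the script text
    return json.dumps({"script": script})

# e.g. body = customscript_body("testscript.tcl")
# then POST body to http://localhost:8080/customscript
```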
+
+
+
+ Retrieving Output
-POST dbset: Usage: dbset [db|bm] value. Sets the database (db) or benchmark (bm). Equivalent to the Benchmark Menu in the graphical interface. Database value is set by the database prefix in the XML configuration.
-set body { "db": "ora" }
-rest::post http://localhost:8080/dbset $body
+ The key difference from the standard CLI is the addition of the
+ jobs command. job can also be used as an alias for jobs. All builds and
+ tests are assigned a jobid and the associated output is stored in the
+ SQLite database. The jobs command can then be used to retrieve
+ information about any particular job.
+
+ Jobs commands
-POST diset: Usage: diset dict key value. Set the dictionary variables for the current database. Equivalent to the Schema Build and Driver Options windows in the graphical interface. Use print?dict to see what these variables are and diset to change.
-set body { "dict": "tpcc", "key": "rampup", "value": "0" }
-rest::post http://localhost:8080/diset $body
-set body { "dict": "tpcc", "key": "duration", "value": "1" }
-rest::post http://localhost:8080/diset $body
+
+
+
-POST vuset: Usage: vuset [vu|delay|repeat|iterations|showoutput|logtotemp|unique|nobuff|timestamps]. Configure the virtual user options. Equivalent to the Virtual User Options window in the graphical interface.
-set body { "vu": "4" }
-rest::post http://localhost:8080/vuset $body
+
+
+
+ CLI Command
-POST customscript: Load an external script. Equivalent to the "Open Existing File" button in the graphical interface. Script must be converted to JSON format before post as shown in the example:
-set customscript "testscript.tcl"
-set _ED(file) $customscript
-if {$_ED(file) == ""} {return}
-if {![file readable $_ED(file)]} {
-puts "File [$_ED(file)] is not readable."
-return
-}
-if {[catch "open \"$_ED(file)\" r" fd]} {
-puts "Error while opening $_ED(file): [$fd]"
-} else {
-set _ED(package) "[read $fd]"
-close $fd
-}
-set huddleobj [ huddle compile {string} "$_ED(package)" ]
-set jsonobj [ huddle jsondump $huddleobj ]
-set body [ subst { {"script": $jsonobj}} ]
-set res [ rest::post http://localhost:8080/customscript $body ]
+ API
+ Description
+
+
-POST dgset: Usage: dgset [vu|ware|directory]. Set the Datagen options. Equivalent to the Datagen Options dialog in the graphical interface.
-set body { "directory": "/home/oracle" }
-rest::post http://localhost:8080/dgset $body
+
+
+ jobs
+ get http://localhost:8080/jobs
-DEBUG
-GET dumpdb: Dumps output of the SQLite database to the console.
-GET http://localhost:8080/dumpdb
-***************DEBUG***************
-5CEE889958A003E203838313 0 {Ready to create a 6 Warehouse Oracle TPC-C schema
-in database VULPDB1 under user TPCC in tablespace TPCCTAB?} 5CEE889958A003E203838313 0 {Vuser 1:RUNNING} 5CEE889958A003E203838313 1 {Monitor Thread} 5CEE889958A003E203838313 1 {CREATING TPCC SCHEMA} 5CEE889958A003E203838313 0 {Vuser 2:RUNNING} 5CEE889958A003E203838313 2 {Worker Thread} 5CEE889958A003E203838313 2 {Waiting for Monitor Thread...} 5CEE889958A003E203838313 1 {Error: ORA-12541: TNS:no listener} 5CEE889958A003E203838313 0 {Vuser 1:FINISHED FAILED} 5CEE889958A003E203838313 0 {Vuser 3:RUNNING} 5CEE889958A003E203838313 3 {Worker Thread} 5CEE889958A003E203838313 3 {Waiting for Monitor Thread...} 5CEE889958A003E203838313 0 {Vuser 4:RUNNING} 5CEE889958A003E203838313 4 {Worker Thread} 5CEE889958A003E203838313 4 {Waiting for Monitor Thread...} 5CEE889958A003E203838313 2 {Monitor failed to notify ready state} 5CEE889958A003E203838313 0 {Vuser 2:FINISHED SUCCESS} 5CEE889958A003E203838313 3 {Monitor failed to notify ready state} 5CEE889958A003E203838313 0 {Vuser 3:FINISHED SUCCESS} 5CEE889958A003E203838313 4 {Monitor failed to notify ready state} 5CEE889958A003E203838313 0 {Vuser 4:FINISHED SUCCESS} 5CEE889958A003E203838313 0 {ALL VIRTUAL USERS COMPLETE}
-***************DEBUG***************
+ list all jobs in the SQLite database.
+
-
+
+ jobs jobid
- As an example the following script shows printing the output of
- print commands in both JSON and text format.
-
- set UserDefaultDir [ file dirname [ info script ] ]
-::tcl::tm::path add "$UserDefaultDir/modules"
-package require rest
-package require huddle
-puts "TEST DIRECT PRINT COMMANDS"
-puts "--------------------------------------------------------"
-foreach i {db bm dict script vuconf vucreated vustatus datagen} {
-puts "Printing output for $i and converting JSON to text"
- set res [rest::get http://localhost:8080/$i "" ]
-puts "JSON format"
-puts $res
-puts "TEXT format"
- set res [rest::format_json $res]
- puts $res
-}
-puts "--------------------------------------------------------"
-puts "PRINT COMMANDS COMPLETE"
-puts "--------------------------------------------------------"
-
+ get http://localhost:8080/jobs?jobid=TEXT
- Once the Web Service is running in another port, run the TCL shell
- as follows and run the script above, the output is shown as
- follows.
+ list VU output for jobid.
+
- $ ./bin/tclsh8.6
-% source restchk.tcl
-TEST DIRECT PRINT COMMANDS
---------------------------------------------------------
-Printing output for db and converting JSON to text
-JSON format
-{
- "ora": "Oracle",
- "mssqls": "MSSQLServer",
- "db2": "Db2",
- "mysql": "MySQL",
- "pg": "PostgreSQL",
- "redis": "Redis"
-}
-TEXT format
-ora Oracle mssqls MSSQLServer db2 Db2 mysql MySQL pg PostgreSQL redis Redis
-Printing output for bm and converting JSON to text
-JSON format
-{"benchmark": "TPC-C"}
-TEXT format
-benchmark TPC-C
-Printing output for dict and converting JSON to text
-JSON format
-{
- "connection": {
- "system_user": "system",
- "system_password": "manager",
- "instance": "oracle",
- "rac": "0"
- },
- "tpcc": {
- "count_ware": "1",
- "num_vu": "1",
- "tpcc_user": "tpcc",
- "tpcc_pass": "tpcc",
- "tpcc_def_tab": "tpcctab",
- "tpcc_ol_tab": "tpcctab",
- "tpcc_def_temp": "temp",
- "partition": "false",
- "hash_clusters": "false",
- "tpcc_tt_compat": "false",
- "total_iterations": "1000000",
- "raiseerror": "false",
- "keyandthink": "false",
- "checkpoint": "false",
- "ora_driver": "test",
- "rampup": "2",
- "duration": "5",
- "allwarehouse": "false",
- "timeprofile": "false"
- }
-}
-TEXT format
-connection {system_user system system_password manager instance oracle rac 0} tpcc {count_ware 1 num_vu 1 tpcc_user tpcc tpcc_pass tpcc tpcc_def_tab tpcctab tpcc_ol_tab tpcctab tpcc_def_temp temp partition false hash_clusters false tpcc_tt_compat false total_iterations 1000000 raiseerror false keyandthink false checkpoint false ora_driver test rampup 2 duration 5 allwarehouse false timeprofile false}
-Printing output for script and converting JSON to text
-JSON format
-{"error": {"message": "No Script loaded: load with loadscript"}}
-TEXT format
-error {message {No Script loaded: load with loadscript}}
-Printing output for vuconf and converting JSON to text
-JSON format
-{
- "Virtual Users": "1",
- "User Delay(ms)": "500",
- "Repeat Delay(ms)": "500",
- "Iterations": "1",
- "Show Output": "1",
- "Log Output": "0",
- "Unique Log Name": "0",
- "No Log Buffer": "0",
- "Log Timestamps": "0"
-}
-TEXT format
-{Virtual Users} 1 {User Delay(ms)} 500 {Repeat Delay(ms)} 500 Iterations 1 {Show Output} 1 {Log Output} 0 {Unique Log Name} 0 {No Log Buffer} 0 {Log Timestamps} 0
-Printing output for vucreated and converting JSON to text
-JSON format
-{"Virtual Users created": "0"}
-TEXT format
-{Virtual Users created} 0
-Printing output for vustatus and converting JSON to text
-JSON format
-{"Virtual User status": "No Virtual Users found"}
-TEXT format
-{Virtual User status} {No Virtual Users found}
-Printing output for datagen and converting JSON to text
-JSON format
-{
- "schema": "TPC-C",
- "database": "Oracle",
- "warehouses": "1",
- "vu": "1",
- "directory": "\/tmp\""
-}
-TEXT format
-schema TPC-C database Oracle warehouses 1 vu 1 directory /tmp\"
---------------------------------------------------------
-PRINT COMMANDS COMPLETE
---------------------------------------------------------
-%
+
+ jobs result
+
+ N/A
+
+ list all results found in the SQLite database.
+
+
+
+ jobs timestamp
+
+ N/A
+
+ list all timestamps found in the SQLite database.
+
+
+
+ jobs jobid bm
+
+ get http://localhost:8080/jobs?jobid=TEXT&bm
+
+ list the configured benchmark for the jobid.
+
+
+
+ jobs jobid db
+
+ get http://localhost:8080/jobs?jobid=TEXT&db
+
+ list the configured database for the jobid.
+
+
+
+ jobs jobid delete
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&delete
+
+ delete the jobid from the SQLite database.
+
+
+
+ jobs jobid dict
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&dict
+
+ list the configured dict for the jobid.
+
+
+
+ jobs jobid result
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&result
+
+ list the result for the jobid.
+
+
+
+ jobs jobid status
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&status
+
+ list the current status for the jobid.
+
+
+
+ jobs jobid tcount
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&tcount
+
+ list the transaction count for the jobid if the
+ transaction counter was run.
+
+
+
+ jobs jobid timestamp
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&timestamp
+
+ list the timestamp for when the job started.
+
+
+
+ jobs jobid timing
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&timing
+
+ list the xtprof timing for the job if time profiling was
+ run.
+
+
+
+ jobs jobid vuid
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&vu=INTEGER
+
+ list the output for the jobid for the specified virtual
+ user.
+
+
+
+ jobs jobid timing vuid
+
+ get
+ http://localhost:8080/jobs?jobid=TEXT&timing&vuid=INTEGER
+
+ list the time profile for the jobid for the specified
+ virtual user.
+
+
+
+
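The endpoints in the table above can be called from any HTTP client. As a minimal sketch, the following Python helper builds the query URLs in the documented form, assuming the web service is listening on localhost:8080 as in the examples; the names jobs_url and get_json are illustrative, not part of HammerDB.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:8080/jobs"

def jobs_url(jobid=None, *flags, **params):
    # Build a jobs endpoint URL such as /jobs?jobid=X&status,
    # matching the forms listed in the table above.
    parts = []
    if jobid is not None:
        parts.append("jobid=" + jobid)
    parts += list(flags)          # bare flags like "status" or "timing"
    if params:
        parts.append(urlencode(params))   # e.g. vuid=2
    return BASE + ("?" + "&".join(parts) if parts else "")

def get_json(url):
    # GET a jobs URL and decode the JSON reply
    # (requires the web service to be running).
    with urlopen(url) as resp:
        return json.load(resp)
```

For example, jobs_url("618E54ED9E5D03E233732393", "status") returns the same URL shown for the "jobs jobid status" entry above.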
- Retrieving Output
+ CLI & Scripted Commands
- As an example the following script shows printing the output of
- print commands in both JSON and text format.
-
- set UserDefaultDir [ file dirname [ info script ] ]
-::tcl::tm::path add "$UserDefaultDir/modules"
-package require rest
-package require huddle
-puts "TEST DIRECT PRINT COMMANDS"
-puts "--------------------------------------------------------"
-foreach i {db bm dict script vuconf vucreated vustatus datagen} {
-puts "Printing output for $i and converting JSON to text"
- set res [rest::get http://localhost:8080/$i "" ]
-puts "JSON format"
-puts $res
-puts "TEXT format"
- set res [rest::format_json $res]
- puts $res
-}
-puts "--------------------------------------------------------"
-puts "PRINT COMMANDS COMPLETE"
-puts "--------------------------------------------------------"
+ The web service CLI can also be used for both interactive and
+ scripted commands. Note that whether using the GUI, CLI or Web Service
+ the Virtual Users run exactly the same workloads. There is no difference
+ in the Virtual User workload based on the chosen interface. As an
+ example the following script executes a schema build and then exits
+ using waittocomplete.
+
+ dbset db mssqls
+dbset bm TPC-C
+diset connection mssqls_server {(local)\\SQLDEVELOP}
+diset tpcc mssqls_count_ware 20
+diset tpcc mssqls_num_vu 8
+buildschema
+waittocomplete
- Once the Web Service is running in another port, run the TCL shell
- as follows and run the script above, the output is shown as
- follows.
+ When run at the CLI the following output is shown.
- $ ./bin/tclsh8.6
-% source restchk.tcl
-TEST DIRECT PRINT COMMANDS
---------------------------------------------------------
-Printing output for db and converting JSON to text
-JSON format
-{
- "ora": "Oracle",
- "mssqls": "MSSQLServer",
- "db2": "Db2",
- "mysql": "MySQL",
- "pg": "PostgreSQL",
- "redis": "Redis"
-}
-TEXT format
-ora Oracle mssqls MSSQLServer db2 Db2 mysql MySQL pg PostgreSQL redis Redis
-Printing output for bm and converting JSON to text
-JSON format
-{"benchmark": "TPC-C"}
-TEXT format
-benchmark TPC-C
-Printing output for dict and converting JSON to text
-JSON format
-{
- "connection": {
- "system_user": "system",
- "system_password": "manager",
- "instance": "oracle",
- "rac": "0"
- },
- "tpcc": {
- "count_ware": "1",
- "num_vu": "1",
- "tpcc_user": "tpcc",
- "tpcc_pass": "tpcc",
- "tpcc_def_tab": "tpcctab",
- "tpcc_ol_tab": "tpcctab",
- "tpcc_def_temp": "temp",
- "partition": "false",
- "hash_clusters": "false",
- "tpcc_tt_compat": "false",
- "total_iterations": "1000000",
- "raiseerror": "false",
- "keyandthink": "false",
- "checkpoint": "false",
- "ora_driver": "test",
- "rampup": "2",
- "duration": "5",
- "allwarehouse": "false",
- "timeprofile": "false"
- }
-}
-TEXT format
-connection {system_user system system_password manager instance oracle rac 0} tpcc {count_ware 1 num_vu 1 tpcc_user tpcc tpcc_pass tpcc tpcc_def_tab tpcctab tpcc_ol_tab tpcctab tpcc_def_temp temp partition false hash_clusters false tpcc_tt_compat false total_iterations 1000000 raiseerror false keyandthink false checkpoint false ora_driver test rampup 2 duration 5 allwarehouse false timeprofile false}
-Printing output for script and converting JSON to text
-JSON format
-{"error": {"message": "No Script loaded: load with loadscript"}}
-TEXT format
-error {message {No Script loaded: load with loadscript}}
-Printing output for vuconf and converting JSON to text
-JSON format
-{
- "Virtual Users": "1",
- "User Delay(ms)": "500",
- "Repeat Delay(ms)": "500",
- "Iterations": "1",
- "Show Output": "1",
- "Log Output": "0",
- "Unique Log Name": "0",
- "No Log Buffer": "0",
- "Log Timestamps": "0"
-}
-TEXT format
-{Virtual Users} 1 {User Delay(ms)} 500 {Repeat Delay(ms)} 500 Iterations 1 {Show Output} 1 {Log Output} 0 {Unique Log Name} 0 {No Log Buffer} 0 {Log Timestamps} 0
-Printing output for vucreated and converting JSON to text
-JSON format
-{"Virtual Users created": "0"}
-TEXT format
-{Virtual Users created} 0
-Printing output for vustatus and converting JSON to text
-JSON format
-{"Virtual User status": "No Virtual Users found"}
-TEXT format
-{Virtual User status} {No Virtual Users found}
-Printing output for datagen and converting JSON to text
-JSON format
-{
- "schema": "TPC-C",
- "database": "Oracle",
- "warehouses": "1",
- "vu": "1",
- "directory": "\/tmp\""
-}
-TEXT format
-schema TPC-C database Oracle warehouses 1 vu 1 directory /tmp\"
---------------------------------------------------------
-PRINT COMMANDS COMPLETE
---------------------------------------------------------
-%
-
+ HammerDB Web Service v4.3
+Copyright (C) 2003-2021 Steve Shaw
+Type "help" for a list of commands
+The xml is well-formed, applying configuration
+Initialized SQLite on-disk database C:/Users/Steve/AppData/Local/Temp/hammer.DB using existing tables
+Starting HammerDB Web Service on port 8080
+Listening for HTTP requests on TCP port 8080
+hammerws>source sqlbuild.tcl
+{"success": {"message": "Database set to MSSQLServer"}}
+{"success": {"message": "Benchmark set to TPC-C for MSSQLServer"}}
+{"success": {"message": "Changed connection:mssqls_server from (local) to (local)\\SQLDEVELOP for MSSQLServer"}}
+{"success": {"message": "Changed tpcc:mssqls_count_ware from 1 to 20 for MSSQLServer"}}
+{"success": {"message": "Changed tpcc:mssqls_num_vu from 1 to 8 for MSSQLServer"}}
+{"success": {"message": "Building 20 Warehouses with 9 Virtual Users, 8 active + 1 Monitor VU(dict value mssqls_num_vu is set to 8): JOBID=618E54ED9E5D03E233732393"}}
-
- Running Jobs
-
- The following script run in the same shows how this can be
- extended so that an external script can interact with the web service
- and run a build and then a test successively. Note that wait_to_complete
- procedures can properly sleep using the after command without activity
- and without affecting the progress of the jobs as the driving script is
- run in one interpreter and the web service in another.
-
- set UserDefaultDir [ file dirname [ info script ] ]
-::tcl::tm::path add "$UserDefaultDir/modules"
-package require rest
-package require huddle
-
-proc wait_for_run_to_complete { runjob } {
-global complete
-set res [rest::get http://localhost:8080/vucomplete "" ]
-set complete [ lindex [rest::format_json $res] 1]
-if {!$complete} {
-#sleep for 20 seconds and recheck
-after 20000
-wait_for_run_to_complete $runjob
- } else {
-set res [rest::get http://localhost:8080/vudestroy "" ]
-puts "Test Complete"
-set jobid [ lindex [ split [ lindex [ lindex [ lindex [rest::format_json $runjob ] 1 ] 1 ] 3 ] \= ] 1 ]
-set res [rest::get http://localhost:8080/jobs?jobid=$jobid&result "" ]
-puts "Test result: $res"
- }
-}
+ The key difference from the standard CLI is that no output is
+ printed to stdout; instead it is stored in the SQLite database under
+ jobid "618E54ED9E5D03E233732393".
-proc wait_for_build_to_complete {} {
-global complete
-set res [rest::get http://localhost:8080/vucomplete "" ]
-set complete [ lindex [rest::format_json $res] 1]
-if {!$complete} {
-#sleep for 20 seconds and recheck
-after 20000
-wait_for_build_to_complete
- } else {
-set res [rest::get http://localhost:8080/vudestroy "" ]
-puts "Build Complete"
-set complete false
- }
-}
+ The interface is closed after the build (because waittocomplete
+ was used) and then restarted; the output remains persisted in the
+ database and can be queried at any time. For example, the listing
+ below queries the timestamp for when the job started, the dict
+ configuration for the build, and the status of the build, which
+ completed successfully.
-proc run_test {} {
-puts "Setting Db values"
-set body { "db": "ora" }
- set res [ rest::post http://localhost:8080/dbset $body ]
-set body { "bm": "TPC-C" }
- set res [ rest::post http://localhost:8080/dbset $body ]
-puts "Setting Vusers"
-set body { "vu": "5" }
- set res [ rest::post http://localhost:8080/vuset $body ]
-puts $res
-puts "Setting Dict Values"
-set body { "dict": "connection", "key": "system_password", "value": "oracle" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "connection", "key": "instance", "value": "VULPDB1" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "tpcc", "key": "tpcc_pass", "value": "oracle" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "tpcc", "key": "ora_driver", "value": "timed" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "tpcc", "key": "rampup", "value": "1" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "tpcc", "key": "duration", "value": "2" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "tpcc", "key": "checkpoint", "value": "false" }
- set res [rest::post http://localhost:8080/diset $body ]
-puts "Config"
-set res [rest::get http://localhost:8080/dict "" ]
-puts $res
-puts "Clearscript"
- set res [rest::post http://localhost:8080/clearscript "" ]
-puts $res
-puts "Loadscript"
- set res [rest::post http://localhost:8080/loadscript "" ]
-puts $res
-puts "Create VU"
- set res [rest::post http://localhost:8080/vucreate "" ]
-puts $res
-puts "Run VU"
- set res [rest::post http://localhost:8080/vurun "" ]
-puts $res
-wait_for_run_to_complete $res
-}
+ hammerws>jobs 618E54ED9E5D03E233732393 timestamp
-proc run_build {} {
-puts "running build"
-set body { "db": "ora" }
- set res [ rest::post http://localhost:8080/dbset $body ]
-set body { "bm": "TPC-C" }
- set res [ rest::post http://localhost:8080/dbset $body ]
-puts "Setting Dict Values"
-set body { "dict": "connection", "key": "system_password", "value": "oracle" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "connection", "key": "instance", "value": "VULPDB1" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "tpcc", "key": "count_ware", "value": "10" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "tpcc", "key": "tpcc_pass", "value": "oracle" }
- set res [rest::post http://localhost:8080/diset $body ]
-set body { "dict": "tpcc", "key": "num_vu", "value": "5" }
- set res [rest::post http://localhost:8080/diset $body ]
-puts "Starting Schema Build"
- set res [rest::post http://localhost:8080/buildschema "" ]
-puts $res
-wait_for_build_to_complete
- }
-#Run build followed by run test
-run_build
-run_test
-
+{"618E54ED9E5D03E233732393": {"2021-11-12": "11:50:05"}}
+
+hammerws>jobs 618E54ED9E5D03E233732393 dict
- An example of the output running the script is shown.
-
- ./bin/tclsh8.6
-% source buildrun_tpcc.tcl
-running build
-Setting Dict Values
-Starting Schema Build
-{"success": {"message": "Building 10 Warehouses with 6 Virtual Users, 5 active + 1 Monitor VU(dict value num_vu is set to 5): JOBID=5D1F4CA858CE03E213431323"}}
-Build Complete
-Setting Db values
-Setting Vusers
-{"success": {"message": "Virtual users set to 5"}}
-Setting Dict Values
-Config
{
"connection": {
- "system_user": "system",
- "system_password": "oracle",
- "instance": "VULPDB1",
- "rac": "0"
+ "mssqls_server": "(local)\\SQLDEVELOP",
+ "mssqls_linux_server": "localhost",
+ "mssqls_tcp": "false",
+ "mssqls_port": "1433",
+ "mssqls_azure": "false",
+ "mssqls_authentication": "windows",
+ "mssqls_linux_authent": "sql",
+ "mssqls_odbc_driver": "ODBC Driver 17 for SQL Server",
+ "mssqls_linux_odbc": "ODBC Driver 17 for SQL Server",
+ "mssqls_uid": "sa",
+ "mssqls_pass": "admin"
},
"tpcc": {
- "count_ware": "10",
- "num_vu": "5",
- "tpcc_user": "tpcc",
- "tpcc_pass": "oracle",
- "tpcc_def_tab": "tpcctab",
- "tpcc_ol_tab": "tpcctab",
- "tpcc_def_temp": "temp",
- "partition": "false",
- "hash_clusters": "false",
- "tpcc_tt_compat": "false",
- "total_iterations": "1000000",
- "raiseerror": "false",
- "keyandthink": "false",
- "checkpoint": "false",
- "ora_driver": "timed",
- "rampup": "1",
- "duration": "2",
- "allwarehouse": "false",
- "timeprofile": "false"
+ "mssqls_count_ware": "20",
+ "mssqls_num_vu": "8",
+ "mssqls_dbase": "tpcc",
+ "mssqls_imdb": "false",
+ "mssqls_bucket": "1",
+ "mssqls_durability": "SCHEMA_AND_DATA",
+ "mssqls_total_iterations": "10000000",
+ "mssqls_raiseerror": "false",
+ "mssqls_keyandthink": "false",
+ "mssqls_checkpoint": "false",
+ "mssqls_driver": "test",
+ "mssqls_rampup": "2",
+ "mssqls_duration": "5",
+ "mssqls_allwarehouse": "false",
+ "mssqls_timeprofile": "false",
+ "mssqls_async_scale": "false",
+ "mssqls_async_client": "10",
+ "mssqls_async_verbose": "false",
+ "mssqls_async_delay": "1000",
+ "mssqls_connect_pool": "false"
}
}
-Clearscript
-{"success": {"message": "Script cleared"}}
-Loadscript
-{"success": {"message": "script loaded"}}
-Create VU
-{"success": {"message": "6 Virtual Users Created with Monitor VU"}}
-Run VU
-{"success": {"message": "Running Virtual Users: JOBID=5D1F4FF558CE03E223730313"}}
-Test Complete
-Test result: [
- "5D1F4FF558CE03E223730313",
- "TEST RESULT : System achieved 0 Oracle TPM at 27975 NOPM"
-]
-%
-
-
-
- Query Job Output
-
- Note that no output is seen directly in the script and no output
- recorded to a logfile. Instead the output is stored as a job by the web
- service. For example the following script would retrieve the output for
- the run job.
-
- set UserDefaultDir [ file dirname [ info script ] ]
-::tcl::tm::path add "$UserDefaultDir/modules"
-package require rest
-package require huddle
- set res [rest::get http://localhost:8080/jobs?jobid=5D1F4FF558CE03E223730313 "" ]
-puts "JSON format"
-puts $res
-
- With the output as follows.
+hammerws>jobs 618E54ED9E5D03E233732393 status
- % source joboutput.tcl
-JSON format
[
+ "0",
+ "Ready to create a 20 Warehouse MS SQL Server TPROC-C schema\nin host (LOCAL)\\SQLDEVELOP in database TPCC?",
"0",
"Vuser 1:RUNNING",
- "1",
- "Beginning rampup time of 1 minutes",
"0",
"Vuser 2:RUNNING",
- "2",
- "Processing 1000000 transactions with output suppressed...",
"0",
"Vuser 3:RUNNING",
- "3",
- "Processing 1000000 transactions with output suppressed...",
"0",
"Vuser 4:RUNNING",
- "4",
- "Processing 1000000 transactions with output suppressed...",
"0",
"Vuser 5:RUNNING",
- "5",
- "Processing 1000000 transactions with output suppressed...",
"0",
"Vuser 6:RUNNING",
- "6",
- "Processing 1000000 transactions with output suppressed...",
- "1",
- "Rampup 1 minutes complete ...",
- "1",
- "Rampup complete, Taking start AWR snapshot.",
- "1",
- "Start Snapshot 298 taken at 05 JUL 2019 14:20 of instance VULCDB1 (1) of database VULCDB1 (1846545596)",
- "1",
- "Timing test period of 2 in minutes",
- "1",
- "1 ...,",
- "1",
- "2 ...,",
- "1",
- "Test complete, Taking end AWR snapshot.",
- "1",
- "End Snapshot 298 taken at 05 JUL 2019 14:20 of instance VULCDB1 (1) of database VULCDB1 (1846545596)",
- "1",
- "Test complete: view report from SNAPID 298 to 298",
- "1",
- "5 Active Virtual Users configured",
- "1",
- "TEST RESULT : System achieved 0 Oracle TPM at 27975 NOPM",
"0",
- "Vuser 2:FINISHED SUCCESS",
+ "Vuser 7:RUNNING",
"0",
- "Vuser 1:FINISHED SUCCESS",
+ "Vuser 8:RUNNING",
+ "0",
+ "Vuser 9:RUNNING",
"0",
"Vuser 6:FINISHED SUCCESS",
"0",
+ "Vuser 9:FINISHED SUCCESS",
+ "0",
+ "Vuser 7:FINISHED SUCCESS",
+ "0",
+ "Vuser 8:FINISHED SUCCESS",
+ "0",
+ "Vuser 4:FINISHED SUCCESS",
+ "0",
"Vuser 5:FINISHED SUCCESS",
"0",
+ "Vuser 2:FINISHED SUCCESS",
+ "0",
"Vuser 3:FINISHED SUCCESS",
"0",
- "Vuser 4:FINISHED SUCCESS",
+ "Vuser 1:FINISHED SUCCESS",
"0",
"ALL VIRTUAL USERS COMPLETE"
]
+
+hammerws>
+
+ The output can also be queried using the HTTP interface
+ directly.
+
+
+
+ The run script is the same as would be used with the CLI except
+ for the addition of the jobs command. Note that because the command
+ line parameters are converted to JSON, a backslash is recognised as
+ an escape character; therefore in this example the backslash in the
+ database name is itself preceded by the escape character:
+ {(local)\\SQLDEVELOP}. This behaviour differs from the CLI, where
+ the braces prevent this happening. Examples of incorrect and correct
+ usage are shown below.
+
+ hammerws>diset connection mssqls_server {(local)\SQLDEVELOP}
+{"error": {"message": "Not a valid escaped JSON character: 'S' in { \"dict\": \"connection\", \"key\": \"mssqls_server\", \"value\": \"(local)\\SQLDEVELOP\" }"}}
+
+hammerws>diset connection mssqls_server {(local)\\SQLDEVELOP}
+{"success": {"message": "Changed connection:mssqls_server from (local) to (local)\\SQLDEVELOP for MSSQLServer"}}
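The doubled-backslash requirement can be checked with any JSON parser. This is a minimal Python sketch of the rule described above; the value strings are illustrative.

```python
import json

# In JSON a single backslash introduces an escape sequence, so "\S"
# is an invalid escape; a literal backslash must be doubled ("\\").
good = r'{"value": "(local)\\SQLDEVELOP"}'  # valid JSON
bad = r'{"value": "(local)\SQLDEVELOP"}'    # invalid JSON

print(json.loads(good)["value"])            # prints (local)\SQLDEVELOP
try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("invalid escape:", err.msg)
```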
- The dumpdb command can be used to dump all of the SQLite database
- to the web service console for debugging and the killws command cause
- the web service terminate.
+ The example run script is configured as follows.
+
+ dbset db mssqls
+dbset bm TPC-C
+diset connection mssqls_server {(local)\\SQLDEVELOP}
+diset tpcc mssqls_driver timed
+diset tpcc mssqls_rampup 1
+diset tpcc mssqls_duration 2
+diset tpcc mssqls_timeprofile true
+diset tpcc mssqls_allwarehouse true
+tcset refreshrate 10
+loadscript
+foreach z {1 2} {
+puts "$z VU TEST"
+vuset vu $z
+vucreate
+tcstart
+set jobid [ vurun ]
+runtimer 200
+tcstop
+jobs $jobid result
+jobs $jobid timing
+vudestroy
+}
+
+ When the script is run it again reports a jobid that can be
+ queried for all of the job output. In this example the job result and
+ summary timing data are queried automatically after each run.
+
+ HammerDB Web Service v4.3
+Copyright (C) 2003-2021 Steve Shaw
+Type "help" for a list of commands
+The xml is well-formed, applying configuration
+Initialized SQLite on-disk database C:/Users/Steve/AppData/Local/Temp/hammer.DB using existing tables
+Starting HammerDB Web Service on port 8080
+Listening for HTTP requests on TCP port 8080
+hammerws>source sqlrun.tcl
+{"success": {"message": "Database set to MSSQLServer"}}
+{"success": {"message": "Benchmark set to TPC-C for MSSQLServer"}}
+{"success": {"message": "Changed connection:mssqls_server from (local) to (local)\\SQLDEVELOP for MSSQLServer"}}
+{"success": {"message": "Set driver script to timed, clearing Script, reload script to activate new setting"}}
+{"success": {"message": "Changed tpcc:mssqls_rampup from 2 to 1 for MSSQLServer"}}
+{"success": {"message": "Changed tpcc:mssqls_duration from 5 to 2 for MSSQLServer"}}
+{"success": {"message": "Changed tpcc:mssqls_timeprofile from false to true for MSSQLServer"}}
+{"success": {"message": "Changed tpcc:mssqls_allwarehouse from false to true for MSSQLServer"}}
+{"success": {"message": "Transaction Counter refresh rate set to 10"}}
+{"success": {"message": "script loaded"}}
+1 VU TEST
+{"success": {"message": "Virtual users set to 1"}}
+{"success": {"message": "2 Virtual Users Created with Monitor VU"}}
+{"success": {"message": "Transaction Counter Thread Started"}}
+{"success": {"message": "Running Virtual Users: JOBID=618E6DD7782203E263932303"}}
+{"success": {"message": "Timer: 1 minutes elapsed"}}
+{"success": {"message": "Timer: 2 minutes elapsed"}}
+{"success": {"message": "Timer: 3 minutes elapsed"}}
+{"success": {"message": "runtimer returned after 181 seconds"}}
+{"success": {"message": "Transaction Counter thread running with threadid:tid0000000000003C54"}}{"success": {"message": "Stopping Transaction Counter"}}
+[
+ "618E6DD7782203E263932303",
+ "2021-11-12 13:36:23",
+ "1 Active Virtual Users configured",
+ "TEST RESULT : System achieved 35839 NOPM from 82362 SQL Server TPM"
+]
+{
+ "NEWORD": {
+ "elapsed_ms": "179711.0",
+ "calls": "107839",
+ "min_ms": "0.414",
+ "avg_ms": "0.687",
+ "max_ms": "62.606",
+ "total_ms": "74062.174",
+ "p99_ms": "1.0",
+ "p95_ms": "0.887",
+ "p50_ms": "0.679",
+ "sd": "3590.836",
+ "ratio_pct": "41.212"
+ },
+ "PAYMENT": {
+ "elapsed_ms": "179711.0",
+ "calls": "107137",
+ "min_ms": "0.331",
+ "avg_ms": "0.583",
+ "max_ms": "153.081",
+ "total_ms": "62447.329",
+ "p99_ms": "1.136",
+ "p95_ms": "0.926",
+ "p50_ms": "0.554",
+ "sd": "5546.55",
+ "ratio_pct": "34.749"
+ },
+ "DELIVERY": {
+ "elapsed_ms": "179711.0",
+ "calls": "10662",
+ "min_ms": "1.245",
+ "avg_ms": "1.498",
+ "max_ms": "336.131",
+ "total_ms": "15974.446",
+ "p99_ms": "1.832",
+ "p95_ms": "1.659",
+ "p50_ms": "1.442",
+ "sd": "32468.41",
+ "ratio_pct": "8.889"
+ },
+ "SLEV": {
+ "elapsed_ms": "179711.0",
+ "calls": "10837",
+ "min_ms": "0.581",
+ "avg_ms": "1.284",
+ "max_ms": "442.591",
+ "total_ms": "13913.749",
+ "p99_ms": "1.029",
+ "p95_ms": "0.913",
+ "p50_ms": "0.768",
+ "sd": "142934.778",
+ "ratio_pct": "7.742"
+ },
+ "OSTAT": {
+ "elapsed_ms": "179711.0",
+ "calls": "10805",
+ "min_ms": "0.19",
+ "avg_ms": "0.72",
+ "max_ms": "405.053",
+ "total_ms": "7776.908",
+ "p99_ms": "1.303",
+ "p95_ms": "1.014",
+ "p50_ms": "0.495",
+ "sd": "77492.84",
+ "ratio_pct": "4.327"
+ }
+}
+{"success": {"message": "vudestroy success"}}
+2 VU TEST
+{"success": {"message": "Virtual users set to 2"}}
+{"success": {"message": "3 Virtual Users Created with Monitor VU"}}
+{"success": {"message": "Transaction Counter Thread Started"}}
+{"success": {"message": "Running Virtual Users: JOBID=618E6E8EE53403E243832333"}}
+{"success": {"message": "Timer: 1 minutes elapsed"}}
+{"success": {"message": "Timer: 2 minutes elapsed"}}
+{"success": {"message": "Timer: 3 minutes elapsed"}}
+{"success": {"message": "runtimer returned after 181 seconds"}}
+{"success": {"message": "Transaction Counter thread running with threadid:tid0000000000003374"}}{"success": {"message": "Stopping Transaction Counter"}}
+[
+ "618E6E8EE53403E243832333",
+ "2021-11-12 13:39:26",
+ "2 Active Virtual Users configured",
+ "TEST RESULT : System achieved 68606 NOPM from 157773 SQL Server TPM"
+]
+{
+ "NEWORD": {
+ "elapsed_ms": "179673.5",
+ "calls": "100437",
+ "min_ms": "0.433",
+ "avg_ms": "0.724",
+ "max_ms": "937.718",
+ "total_ms": "72671.06",
+ "p99_ms": "1.252",
+ "p95_ms": "0.877",
+ "p50_ms": "0.671",
+ "sd": "36684.99",
+ "ratio_pct": "40.532"
+ },
+ "PAYMENT": {
+ "elapsed_ms": "179673.5",
+ "calls": "100658",
+ "min_ms": "0.341",
+ "avg_ms": "0.64",
+ "max_ms": "1558.282",
+ "total_ms": "64373.678",
+ "p99_ms": "1.259",
+ "p95_ms": "0.987",
+ "p50_ms": "0.556",
+ "sd": "66749.215",
+ "ratio_pct": "35.905"
+ },
+ "DELIVERY": {
+ "elapsed_ms": "179673.5",
+ "calls": "10196",
+ "min_ms": "1.273",
+ "avg_ms": "1.582",
+ "max_ms": "89.852",
+ "total_ms": "16134.015",
+ "p99_ms": "2.57",
+ "p95_ms": "1.791",
+ "p50_ms": "1.503",
+ "sd": "13087.684",
+ "ratio_pct": "8.999"
+ },
+ "SLEV": {
+ "elapsed_ms": "179673.5",
+ "calls": "10190",
+ "min_ms": "0.619",
+ "avg_ms": "1.157",
+ "max_ms": "448.198",
+ "total_ms": "11792.671",
+ "p99_ms": "1.221",
+ "p95_ms": "0.986",
+ "p50_ms": "0.824",
+ "sd": "81906.924",
+ "ratio_pct": "6.577"
+ },
+ "OSTAT": {
+ "elapsed_ms": "179673.5",
+ "calls": "9943",
+ "min_ms": "0.212",
+ "avg_ms": "0.811",
+ "max_ms": "480.319",
+ "total_ms": "8064.587",
+ "p99_ms": "1.429",
+ "p95_ms": "1.105",
+ "p50_ms": "0.572",
+ "sd": "76772.016",
+ "ratio_pct": "4.498"
+ }
+}
+{"success": {"message": "vudestroy success"}}
+
+hammerws>
+
+
+ Later we can use the jobs command to query information about
+ the jobs. The example below queries the database, workload and
+ timestamp for when the job started, and verifies that the job
+ finished successfully. It also retrieves the configuration that was
+ set for the job, the result, and the xtprof timing data for virtual
+ user 2 only.
+
+ hammerws>jobs 618E6DD7782203E263932303 db
+
+["MSSQLServer"]
+
+hammerws>jobs 618E6DD7782203E263932303 bm
+
+["TPC-C"]
+
+hammerws>jobs 618E6DD7782203E263932303 timestamp
+
+{"618E6DD7782203E263932303": {"2021-11-12": "13:36:23"}}
+
+hammerws>jobs 618E6DD7782203E263932303 status
+
+[
+ "0",
+ "Vuser 1:RUNNING",
+ "0",
+ "Vuser 2:RUNNING",
+ "0",
+ "Vuser 2:FINISHED SUCCESS",
+ "0",
+ "Vuser 1:FINISHED SUCCESS",
+ "0",
+ "ALL VIRTUAL USERS COMPLETE"
+]
+
+hammerws>jobs 618E6DD7782203E263932303 dict
+
+{
+ "connection": {
+ "mssqls_server": "(local)\\SQLDEVELOP",
+ "mssqls_linux_server": "localhost",
+ "mssqls_tcp": "false",
+ "mssqls_port": "1433",
+ "mssqls_azure": "false",
+ "mssqls_authentication": "windows",
+ "mssqls_linux_authent": "sql",
+ "mssqls_odbc_driver": "ODBC Driver 17 for SQL Server",
+ "mssqls_linux_odbc": "ODBC Driver 17 for SQL Server",
+ "mssqls_uid": "sa",
+ "mssqls_pass": "admin"
+ },
+ "tpcc": {
+ "mssqls_count_ware": "1",
+ "mssqls_num_vu": "1",
+ "mssqls_dbase": "tpcc",
+ "mssqls_imdb": "false",
+ "mssqls_bucket": "1",
+ "mssqls_durability": "SCHEMA_AND_DATA",
+ "mssqls_total_iterations": "10000000",
+ "mssqls_raiseerror": "false",
+ "mssqls_keyandthink": "false",
+ "mssqls_checkpoint": "false",
+ "mssqls_driver": "timed",
+ "mssqls_rampup": "1",
+ "mssqls_duration": "2",
+ "mssqls_allwarehouse": "true",
+ "mssqls_timeprofile": "true",
+ "mssqls_async_scale": "false",
+ "mssqls_async_client": "10",
+ "mssqls_async_verbose": "false",
+ "mssqls_async_delay": "1000",
+ "mssqls_connect_pool": "false"
+ }
+}
+
+hammerws>jobs 618E6DD7782203E263932303 result
+
+[
+ "618E6DD7782203E263932303",
+ "2021-11-12 13:36:23",
+ "1 Active Virtual Users configured",
+ "TEST RESULT : System achieved 35839 NOPM from 82362 SQL Server TPM"
+]
+
+hammerws>jobs 618E6DD7782203E263932303 timing 2
+
+{
+ "NEWORD": {
+ "elapsed_ms": "179711.0",
+ "calls": "107839",
+ "min_ms": "0.414",
+ "avg_ms": "0.687",
+ "max_ms": "62.606",
+ "total_ms": "74062.174",
+ "p99_ms": "1.0",
+ "p95_ms": "0.887",
+ "p50_ms": "0.679",
+ "sd": "3590.836",
+ "ratio_pct": "41.212"
+ },
+ "PAYMENT": {
+ "elapsed_ms": "179711.0",
+ "calls": "107137",
+ "min_ms": "0.331",
+ "avg_ms": "0.583",
+ "max_ms": "153.081",
+ "total_ms": "62447.329",
+ "p99_ms": "1.136",
+ "p95_ms": "0.926",
+ "p50_ms": "0.554",
+ "sd": "5546.55",
+ "ratio_pct": "34.749"
+ },
+ "DELIVERY": {
+ "elapsed_ms": "179711.0",
+ "calls": "10662",
+ "min_ms": "1.245",
+ "avg_ms": "1.498",
+ "max_ms": "336.131",
+ "total_ms": "15974.446",
+ "p99_ms": "1.832",
+ "p95_ms": "1.659",
+ "p50_ms": "1.442",
+ "sd": "32468.41",
+ "ratio_pct": "8.889"
+ },
+ "SLEV": {
+ "elapsed_ms": "179711.0",
+ "calls": "10837",
+ "min_ms": "0.581",
+ "avg_ms": "1.284",
+ "max_ms": "442.591",
+ "total_ms": "13913.749",
+ "p99_ms": "1.029",
+ "p95_ms": "0.913",
+ "p50_ms": "0.768",
+ "sd": "142934.778",
+ "ratio_pct": "7.742"
+ },
+ "OSTAT": {
+ "elapsed_ms": "179711.0",
+ "calls": "10805",
+ "min_ms": "0.19",
+ "avg_ms": "0.72",
+ "max_ms": "405.053",
+ "total_ms": "7776.908",
+ "p99_ms": "1.303",
+ "p95_ms": "1.014",
+ "p50_ms": "0.495",
+ "sd": "77492.84",
+ "ratio_pct": "4.327"
+ }
+}
+
+hammerws>
+
+ This output can also be queried over HTTP, for example querying
+ the transaction counter output.
+
+
+
+ The Web Service provides an alternative to the CLI when you wish
+ to store the output of jobs in a database repository that can be
+ queried to return JSON output, rather than using log file output.
+ This scenario is most likely when building automated testing
+ systems.
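As a sketch of such an automated-testing flow, the jobid can be parsed out of the vurun success message and then used to fetch the stored result over HTTP. The host and port and the message format are assumed from the examples above; extract_jobid and job_result are illustrative helper names, not part of HammerDB.

```python
import json
import re
from urllib.request import urlopen

BASE = "http://localhost:8080"  # assumed web service address

def extract_jobid(reply: str) -> str:
    # Pull the JOBID out of a success message such as
    # {"success": {"message": "Running Virtual Users: JOBID=618E..."}}
    message = json.loads(reply)["success"]["message"]
    match = re.search(r"JOBID=(\w+)", message)
    if match is None:
        raise ValueError("no JOBID in reply: " + reply)
    return match.group(1)

def job_result(jobid: str):
    # Fetch the stored result for a job from the SQLite repository
    # (requires the web service to be running).
    with urlopen(f"{BASE}/jobs?jobid={jobid}&result") as resp:
        return json.load(resp)
```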
@@ -8818,8 +8992,7 @@ order by
MySQL does not support row oriented parallel query or a column
store configuration and therefore queries run against a MySQL database
- are expected to be long-running. In testing the myisam storage engine
- offers improved query times compared to InnoDB.
+ are expected to be long-running.
@@ -9931,6 +10104,24 @@ infinidb_import_for_batchinsert_delimiter=7
TPROC-H schema.
+
+ PostgreSQL Tablespace
+
+ The PostgreSQL tablespace in which the
+ database will be installed.
+
+
+
+ Prefer PostgreSQL SSL Mode
+
+ If both the PostgreSQL client and server have been
+ compiled with SSL support, selecting this option sets the
+ sslmode for connections to "prefer"; when unselected the
+ sslmode is set to "disable". Other valid options such as
+ "allow" or "require" can be set directly in the build and
+ driver scripts if required.
+
+
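The checkbox logic above can be sketched as a small helper that maps the selection to a libpq-style connection string. The function name and parameters are illustrative; HammerDB itself sets sslmode in its generated Tcl build and driver scripts, and "prefer"/"disable"/"allow"/"require" are standard libpq sslmode values.

```python
# Sketch (assumed helper, not HammerDB source): build a libpq-style
# conninfo string with sslmode derived from the checkbox state.
def pg_conninfo(host, port, dbname, user, prefer_ssl=True):
    """Return a connection string with sslmode 'prefer' when the
    option is selected and 'disable' when it is not."""
    sslmode = "prefer" if prefer_ssl else "disable"
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} sslmode={sslmode}")

print(pg_conninfo("localhost", 5432, "tpcc", "postgres"))
```

With sslmode "prefer" the connection is attempted over SSL first and falls back to an unencrypted connection if SSL is unavailable, which is why it is a safe default when client-side SSL support is uncertain.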
Greenplum Database Compatible
@@ -11304,8 +11495,7 @@ GEOMEAN 4.011822724
warehouses. Note that this is an interface limit to prevent
over-provisioning (100,000 warehouses may generate up to 10TB of data),
however it is straightforward to exceed this capacity by manually
- modifying the generated datageb build script to increase the value.
-
+ modifying the generated datagen build script to increase the value.
Generate the Dataset
@@ -11980,7 +12170,7 @@ set o_carrier_id = nullif(@id1,'');
MariaDB

As with MySQL, bulk loading is done in MariaDB using the load
- data infile command.
+ data infile command.
mysql> load data infile '/home/mysql/TPCHDATA/supplier_1.tbl' INTO table SUPPLIER fields terminated by '|';
diff --git a/DocBook/docs/images/ch1-10.PNG b/DocBook/docs/images/ch1-10.PNG
index 3b60e165..2873520d 100644
Binary files a/DocBook/docs/images/ch1-10.PNG and b/DocBook/docs/images/ch1-10.PNG differ
diff --git a/DocBook/docs/images/ch1-11.PNG b/DocBook/docs/images/ch1-11.PNG
index 612eb897..0db9a671 100644
Binary files a/DocBook/docs/images/ch1-11.PNG and b/DocBook/docs/images/ch1-11.PNG differ
diff --git a/DocBook/docs/images/ch1-12.PNG b/DocBook/docs/images/ch1-12.PNG
index 7b760b56..8521b5cd 100644
Binary files a/DocBook/docs/images/ch1-12.PNG and b/DocBook/docs/images/ch1-12.PNG differ
diff --git a/DocBook/docs/images/ch1-17.PNG b/DocBook/docs/images/ch1-17.PNG
index 377dccc4..60d61ff1 100644
Binary files a/DocBook/docs/images/ch1-17.PNG and b/DocBook/docs/images/ch1-17.PNG differ
diff --git a/DocBook/docs/images/ch1-4a.PNG b/DocBook/docs/images/ch1-4a.PNG
index 12bbf133..ddbba572 100644
Binary files a/DocBook/docs/images/ch1-4a.PNG and b/DocBook/docs/images/ch1-4a.PNG differ
diff --git a/DocBook/docs/images/ch1-6.PNG b/DocBook/docs/images/ch1-6.PNG
index a4ca7e47..46f3d10c 100644
Binary files a/DocBook/docs/images/ch1-6.PNG and b/DocBook/docs/images/ch1-6.PNG differ
diff --git a/DocBook/docs/images/ch1-8.PNG b/DocBook/docs/images/ch1-8.PNG
index cc0e0d3b..8d70f80a 100644
Binary files a/DocBook/docs/images/ch1-8.PNG and b/DocBook/docs/images/ch1-8.PNG differ
diff --git a/DocBook/docs/images/ch1-9.PNG b/DocBook/docs/images/ch1-9.PNG
index ae510afc..95603cad 100644
Binary files a/DocBook/docs/images/ch1-9.PNG and b/DocBook/docs/images/ch1-9.PNG differ
diff --git a/DocBook/docs/images/ch10ws-1.PNG b/DocBook/docs/images/ch10ws-1.PNG
new file mode 100644
index 00000000..bac37c84
Binary files /dev/null and b/DocBook/docs/images/ch10ws-1.PNG differ
diff --git a/DocBook/docs/images/ch10ws-2.PNG b/DocBook/docs/images/ch10ws-2.PNG
new file mode 100644
index 00000000..770faa5d
Binary files /dev/null and b/DocBook/docs/images/ch10ws-2.PNG differ
diff --git a/DocBook/docs/images/ch10ws-3.PNG b/DocBook/docs/images/ch10ws-3.PNG
new file mode 100644
index 00000000..9583db1d
Binary files /dev/null and b/DocBook/docs/images/ch10ws-3.PNG differ
diff --git a/DocBook/docs/images/ch13-7.PNG b/DocBook/docs/images/ch13-7.PNG
index ac3122b0..f7512412 100644
Binary files a/DocBook/docs/images/ch13-7.PNG and b/DocBook/docs/images/ch13-7.PNG differ
diff --git a/DocBook/docs/images/ch13-8.PNG b/DocBook/docs/images/ch13-8.PNG
index c9ecb961..276cea40 100644
Binary files a/DocBook/docs/images/ch13-8.PNG and b/DocBook/docs/images/ch13-8.PNG differ
diff --git a/DocBook/docs/images/ch4-8.PNG b/DocBook/docs/images/ch4-8.PNG
index ab616d18..0e7bab05 100644
Binary files a/DocBook/docs/images/ch4-8.PNG and b/DocBook/docs/images/ch4-8.PNG differ
diff --git a/DocBook/docs/images/ch7-16.png b/DocBook/docs/images/ch7-16.png
new file mode 100644
index 00000000..5e776a77
Binary files /dev/null and b/DocBook/docs/images/ch7-16.png differ
diff --git a/DocBook/docs/images/ch7-17.PNG b/DocBook/docs/images/ch7-17.PNG
index 38e6d1cc..79f33304 100644
Binary files a/DocBook/docs/images/ch7-17.PNG and b/DocBook/docs/images/ch7-17.PNG differ
diff --git a/DocBook/docs/images/ch7-18.png b/DocBook/docs/images/ch7-18.png
new file mode 100644
index 00000000..6514453a
Binary files /dev/null and b/DocBook/docs/images/ch7-18.png differ
diff --git a/DocBook/docs/images/ch7-19.png b/DocBook/docs/images/ch7-19.png
new file mode 100644
index 00000000..b889e783
Binary files /dev/null and b/DocBook/docs/images/ch7-19.png differ
diff --git a/DocBook/docs/images/ch9-11.PNG b/DocBook/docs/images/ch9-11.PNG
index 3d51e7f2..f7512412 100644
Binary files a/DocBook/docs/images/ch9-11.PNG and b/DocBook/docs/images/ch9-11.PNG differ
diff --git a/DocBook/docs/images/ch9-12.PNG b/DocBook/docs/images/ch9-12.PNG
index 10f3120d..276cea40 100644
Binary files a/DocBook/docs/images/ch9-12.PNG and b/DocBook/docs/images/ch9-12.PNG differ