diff --git a/.github/actions/create-upload-suggestions/action.yml b/.github/actions/create-upload-suggestions/action.yml
index ec9af1e29e8..0696e06589b 100644
--- a/.github/actions/create-upload-suggestions/action.yml
+++ b/.github/actions/create-upload-suggestions/action.yml
@@ -225,7 +225,7 @@ runs:
             steps.upload-changes.outputs.artifact-url }})
       - name: Fail action if some files were changed
         if: >-
-          ${{ (steps.files_changed.outputs.files_changed == 'true') &&
+          ${{ (steps.files_changed.outputs.files_changed == 'true') &&
           (steps.inputs.outputs.fail-if-changed == 'true') }}
         shell: bash
         run: |
diff --git a/.github/workflows/coverity.yml b/.github/workflows/coverity.yml
index b034d72b0fc..18dceb1a6b1 100644
--- a/.github/workflows/coverity.yml
+++ b/.github/workflows/coverity.yml
@@ -1,10 +1,10 @@
 name: Coverity Scan
 on:
   workflow_dispatch: # run whenever a contributor calls it
-  schedule:
+  schedule:
     - cron: '48 5 * * *' # Run at 05:48
       # Coverity will let GRASS do a scan a maximum of twice per day, so this schedule will help GRASS fit within that limit with some additional space for manual runs
-
+
 jobs:
   build:
     runs-on: [ ubuntu-latest ]
diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml
index 4a55fdee7c6..d961b034627 100644
--- a/.github/workflows/ubuntu.yml
+++ b/.github/workflows/ubuntu.yml
@@ -96,7 +96,7 @@ jobs:
       - name: Add extra exclusions to a gunittest config file
         run: |
-          sed 's:exclude =:exclude = ${{
+          sed 's:exclude =:exclude = ${{
             steps.get-exclude.outputs.extra-exclude }}:g' .gunittest.cfg > .gunittest.extra.cfg
           cat .gunittest.extra.cfg
diff --git a/scripts/v.dissolve/v.dissolve.html b/scripts/v.dissolve/v.dissolve.html
index cc69e21e1cc..54044a1bcf4 100644
--- a/scripts/v.dissolve/v.dissolve.html
+++ b/scripts/v.dissolve/v.dissolve.html
@@ -1,87 +1,87 @@

DESCRIPTION

-The v.dissolve module is used to merge adjacent or overlapping
-features in a vector map that share the same category value. The
-resulting merged feature(s) retain this category value.
+The v.dissolve module is used to merge adjacent or overlapping
+features in a vector map that share the same category value. The
+resulting merged feature(s) retain this category value.

-Figure: Areas with the same attribute value (first image) are merged
+Figure: Areas with the same attribute value (first image) are merged
 into one (second image).

-Instead of dissolving features based on the category values, the user
-can define an integer or string column using the column
-parameter. In that case, features that share the same value in that
-column are dissolved. Note, the newly created layer does not retain the
+Instead of dissolving features based on the category values, the user
+can define an integer or string column using the column
+parameter. In that case, features that share the same value in that
+column are dissolved. Note, the newly created layer does not retain the
 category (cat) values from the input layer.

-Note that multiple areas with the same category or the same attribute
-value that are not adjacent are merged into one entity, which consists
+Note that multiple areas with the same category or the same attribute
+value that are not adjacent are merged into one entity, which consists
 of multiple features, i.e., a multipart feature.

Attribute aggregation

-The attributes of merged areas can be aggregated using various
-aggregation methods. The specific methods available depend on the
-backend used for aggregation. Two aggregate backends (specified with
-the aggregate_backend parameter) are available, univar
-and sql. The backend is determined automatically based on the
-requested methods. When the function is one of the SQL
-built-in aggregate functions, the sql backend is used.
-Otherwise, the univar backend is used.
+The attributes of merged areas can be aggregated using various
+aggregation methods. The specific methods available depend on the
+backend used for aggregation. Two aggregate backends (specified with
+the aggregate_backend parameter) are available, univar
+and sql. The backend is determined automatically based on the
+requested methods. When the function is one of the SQL
+built-in aggregate functions, the sql backend is used.
+Otherwise, the univar backend is used.

-The default behavior is intended for interactive use and
-testing. For scripting and other automated usage, explicitly specifying
-the backend with the aggregate_backend parameter is strongly
-recommended. When choosing, note that the sql aggregate
-backend, regardless of the underlying database, will typically perform
+The default behavior is intended for interactive use and
+testing. For scripting and other automated usage, explicitly specifying
+the backend with the aggregate_backend parameter is strongly
+recommended. When choosing, note that the sql aggregate
+backend, regardless of the underlying database, will typically perform
 significantly better than the univar backend.
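For scripts, a minimal sketch of pinning the backend explicitly could look like
the following (the map and column names here are only illustrative):

    v.dissolve input=areas column=REGION output=areas_dissolved \
        aggregate_columns=POP aggregate_methods=sum aggregate_backend=sql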

Aggregation using univar backend

-When univar is used, the methods available are the ones which
-v.db.univar uses by default, i.e., n, min,
-max, range, mean, mean_abs,
-variance, stddev, coef_var, and
+When univar is used, the methods available are the ones which
+v.db.univar uses by default, i.e., n, min,
+max, range, mean, mean_abs,
+variance, stddev, coef_var, and
 sum.
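For instance, a sketch requesting univar-only statistics such as range and
coef_var (again with illustrative map and column names):

    v.dissolve input=areas column=REGION output=areas_dissolved \
        aggregate_columns=POP aggregate_methods=range,coef_var \
        aggregate_backend=univar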

Aggregation using sql backend

-When the sql backend is used, the methods depend on the SQL
-database backend used for the attribute table of the input vector. For
-SQLite, there are at least the following built-in aggregate
-functions: count, min, max,
+When the sql backend is used, the methods depend on the SQL
+database backend used for the attribute table of the input vector. For
+SQLite, there are at least the following built-in aggregate
+functions: count, min, max,
 avg, sum, and total.
-For PostgreSQL, the list of aggregate
-functions is much longer and includes, e.g., count,
-min, max, avg, sum,
-stddev, and variance.
+For PostgreSQL, the list of aggregate
+functions is much longer and includes, e.g., count,
+min, max, avg, sum,
+stddev, and variance.

Defining the aggregation method

-If only the parameter aggregate_columns is provided, all the
-following aggregation statistics are calculated: n,
-min, max, mean, and sum. If the
-univar backend is specified, all the available methods for the
+If only the parameter aggregate_columns is provided, all the
+following aggregation statistics are calculated: n,
+min, max, mean, and sum. If the
+univar backend is specified, all the available methods for the
 univar backend are used.
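As an illustration, providing only aggregate_columns (illustrative names):

    v.dissolve input=areas column=REGION output=areas_dissolved \
        aggregate_columns=POP

computes the default statistics for the POP column, with auto-generated result
columns such as POP_n, POP_min, POP_max, POP_mean, and POP_sum (the
column_method naming scheme is described below).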

-The aggregate_methods parameter can be used to specify which
-aggregation statistics should be computed. Alternatively, the parameter
-aggregate_columns can be used to specify the method using SQL
-syntax. This provides the highest flexibility, and it is suitable for
-scripting. The SQL statement should specify both the column and the
+The aggregate_methods parameter can be used to specify which
+aggregation statistics should be computed. Alternatively, the parameter
+aggregate_columns can be used to specify the method using SQL
+syntax. This provides the highest flexibility, and it is suitable for
+scripting. The SQL statement should specify both the column and the
 functions applied, e.g.,

@@ -89,24 +89,24 @@ 

Defining the aggregation method

-Note that when SQL syntax is used in the aggregate_columns
-parameter, the sql backend should be used. In addition,
-aggregate_columns with SQL syntax and aggregate_methods cannot be used
+Note that when SQL syntax is used in the aggregate_columns
+parameter, the sql backend should be used. In addition,
+aggregate_columns with SQL syntax and aggregate_methods cannot be used
 together.

-For convenience, certain methods, namely n, count,
-mean, and avg, are automatically converted to the
-appropriate name for the selected backend. However, for scripting, it
-is recommended to specify the appropriate method (function) name for
-the backend, as the conversion is a heuristic that may change in the
+For convenience, certain methods, namely n, count,
+mean, and avg, are automatically converted to the
+appropriate name for the selected backend. However, for scripting, it
+is recommended to specify the appropriate method (function) name for
+the backend, as the conversion is a heuristic that may change in the
 future.

-If the result_columns is not provided, each method is applied to
+If the result_columns is not provided, each method is applied to
 each column specified by aggregate_columns. This results in a
-column for each of the combinations. These result columns have
-auto-generated names based on the aggregate column and method. For
+column for each of the combinations. These result columns have
+auto-generated names based on the aggregate column and method. For
 example, setting the following parameters:

@@ -115,13 +115,13 @@ 

Defining the aggregation method

-results in the following columns: A_sum, A_n, B_sum, B_n. See
+results in the following columns: A_sum, A_n, B_sum, B_n. See
 the Examples section.

-If the result_columns is provided, each method is applied only
-once to the matching column in the aggregate column list, and the
-result will be available under the name of the matching result column.
+If the result_columns is provided, each method is applied only
+once to the matching column in the aggregate column list, and the
+result will be available under the name of the matching result column.
 For example, setting the following parameter:

@@ -131,45 +131,45 @@ 

Defining the aggregation method

-results in the column sum_a with the sum of the values of
-A and the column n_b with the max of B. Note that
-the number of items in aggregate_columns,
-aggregate_methods (unless omitted), and result_columns
-needs to match, and no combinations are created on the fly. See
+results in the column sum_a with the sum of the values of
+A and the column n_b with the max of B. Note that
+the number of items in aggregate_columns,
+aggregate_methods (unless omitted), and result_columns
+needs to match, and no combinations are created on the fly. See
 the Examples section.

-For scripting, it is recommended to specify all resulting column names,
-while for interactive use, automatically created combinations are
+For scripting, it is recommended to specify all resulting column names,
+while for interactive use, automatically created combinations are
 expected to be beneficial, especially for exploratory analysis.

-The type of the result column is determined based on the method
-selected. For n and count, the type is INTEGER and
-for all other methods, it is DOUBLE. Aggregate methods that produce
-other types require the type to be specified as part of the
-result_columns. A type can be provided in result_columns
-using the SQL syntax name type, e.g., sum_of_values
-double precision. Type specification is mandatory when SQL
-syntax is used in aggregate_columns (and
+The type of the result column is determined based on the method
+selected. For n and count, the type is INTEGER and
+for all other methods, it is DOUBLE. Aggregate methods that produce
+other types require the type to be specified as part of the
+result_columns. A type can be provided in result_columns
+using the SQL syntax name type, e.g., sum_of_values
+double precision. Type specification is mandatory when SQL
+syntax is used in aggregate_columns (and
 aggregate_methods is omitted).
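A sketch combining SQL syntax in aggregate_columns with typed result columns
(illustrative map and result column names; the types are required here because
aggregate_methods is omitted):

    v.dissolve input=areas column=REGION output=areas_dissolved \
        aggregate_columns="sum(ACRES),count(*)" \
        result_columns="acres_total double precision,part_count integer"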

NOTES

-GRASS defines a vector area as a composite entity consisting of a set
-of closed boundaries and a centroid. The centroids must contain a
-category number (see v.centroids); this number is linked to
-area attributes and database links.
+GRASS defines a vector area as a composite entity consisting of a set
+of closed boundaries and a centroid. The centroids must contain a
+category number (see v.centroids); this number is linked to
+area attributes and database links.

-Multiple attributes may be linked to a single vector entity through
-numbered fields referred to as layers. Refer to v.category for
+Multiple attributes may be linked to a single vector entity through
+numbered fields referred to as layers. Refer to v.category for
 more details.

-Merging of areas can also be accomplished using v.extract -d
-which provides some additional options. In fact, v.dissolve is
-simply a front-end to that module. The use of the column
+Merging of areas can also be accomplished using v.extract -d
+which provides some additional options. In fact, v.dissolve is
+simply a front-end to that module. The use of the column
 parameter adds a call to v.reclass before.
@@ -262,27 +262,27 @@

Aggregating multiple attributes

-By default, all methods specified in the aggregate_methods are
-applied to all columns, so the result of the above is four columns. While
-this is convenient for getting multiple statistics for similar columns
-(e.g., averages and standard deviations of multiple population
-statistics columns), in our case, each column is different and each
+By default, all methods specified in the aggregate_methods are
+applied to all columns, so the result of the above is four columns. While
+this is convenient for getting multiple statistics for similar columns
+(e.g., averages and standard deviations of multiple population
+statistics columns), in our case, each column is different and each
 aggregate method should be applied only to its corresponding column.

-The v.dissolve module will apply each aggregate method only to
-the corresponding column when column names for the results are
+The v.dissolve module will apply each aggregate method only to
+the corresponding column when column names for the results are
 specified manually with the result_columns option:

 v.dissolve input=boundary_municp column=DOTURBAN_N output=municipalities_4 \
-	aggregate_columns=ACRES,NEW_PERC_G aggregate_methods=sum,avg \ 
+	aggregate_columns=ACRES,NEW_PERC_G aggregate_methods=sum,avg \
 	result_columns=acres,new_perc_g
 

-Now we have full control over what columns are created, but we also
-need to specify an aggregate method for each column even when the
+Now we have full control over what columns are created, but we also
+need to specify an aggregate method for each column even when the
 aggregate methods are the same:

@@ -292,29 +292,29 @@ 

Aggregating multiple attributes

-While it is often not necessary to specify aggregate methods or names
-for interactive exploratory analysis, specifying both
-aggregate_methods and result_columns manually is a best
-practice for scripting (unless SQL syntax is used for
+While it is often not necessary to specify aggregate methods or names
+for interactive exploratory analysis, specifying both
+aggregate_methods and result_columns manually is a best
+practice for scripting (unless SQL syntax is used for
 aggregate_columns, see below).

Aggregating using SQL syntax

-The aggregation can also be done using the full SQL syntax and the set of
+The aggregation can also be done using the full SQL syntax and the set of
 aggregate functions available for a given attribute database backend.
 Here, we will assume the default SQLite database backend for attributes.

-Modifying the previous example, we will now specify the SQL aggregate
-function calls explicitly instead of letting v.dissolve
-generate them for us. We will compute the sum of the ACRES column using
-sum(ACRES) (alternatively, we could use the SQLite-specific
-total(ACRES) which returns zero even when all values are
-NULL). Further, we will count the number of aggregated (i.e., dissolved)
-parts using count(*) which counts all rows regardless of
-NULL values. Then, we will count all unique names of parts as
-distinguished by the MB_NAME column using count(distinct
-MB_NAME). Finally, we will collect all these names into a
+Modifying the previous example, we will now specify the SQL aggregate
+function calls explicitly instead of letting v.dissolve
+generate them for us. We will compute the sum of the ACRES column using
+sum(ACRES) (alternatively, we could use the SQLite-specific
+total(ACRES) which returns zero even when all values are
+NULL). Further, we will count the number of aggregated (i.e., dissolved)
+parts using count(*) which counts all rows regardless of
+NULL values. Then, we will count all unique names of parts as
+distinguished by the MB_NAME column using count(distinct
+MB_NAME). Finally, we will collect all these names into a
 comma-separated list using group_concat(MB_NAME):
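Putting these pieces together, a sketch of such a call (the output map name and
the result column names and types are illustrative, not the manual's exact
example):

    v.dissolve input=boundary_municp column=DOTURBAN_N output=municipalities_sql \
        aggregate_columns="sum(ACRES),count(*),count(distinct MB_NAME),group_concat(MB_NAME)" \
        result_columns="acres double,num_parts integer,num_names integer,names text"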

@@ -324,15 +324,15 @@ 

Aggregating using SQL syntax

-Here, v.dissolve doesn't make any assumptions about the
-resulting column types, so we specified both the name and the type of each
+Here, v.dissolve doesn't make any assumptions about the
+resulting column types, so we specified both the name and the type of each
 column.

-When working with general SQL syntax, v.dissolve turns off its
-checks for the number of aggregate and result columns so that any SQL
-syntax can be used for the aggregate columns. This also allows us to use
-functions with multiple parameters, for example to specify the separator
+When working with general SQL syntax, v.dissolve turns off its
+checks for the number of aggregate and result columns so that any SQL
+syntax can be used for the aggregate columns. This also allows us to use
+functions with multiple parameters, for example to specify the separator
 used with group_concat:
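For instance, a sketch passing a semicolon separator to group_concat
(illustrative names; the comma inside the function call is assumed to stay
within the single aggregate_columns expression):

    v.dissolve input=boundary_municp column=DOTURBAN_N output=municipalities_sql2 \
        aggregate_columns="group_concat(MB_NAME, ';')" \
        result_columns="names text"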

@@ -342,7 +342,7 @@ 

Aggregating using SQL syntax

-To inspect the result, we will use v.db.select retrieving only
+To inspect the result, we will use v.db.select retrieving only
 one row for DOTURBAN_N == 'Wadesboro':
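A minimal sketch of such a query (the map name is whichever dissolved output is
being inspected):

    v.db.select map=municipalities_sql2 where="DOTURBAN_N == 'Wadesboro'"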