Releases: cinchapi/concourse

Version 0.11.6

06 Jul 11:27
  • Added new configuration options for initializing Concourse Server with custom admin credentials upon first run. These options enhance security by allowing non-default usernames and passwords to be set before the server starts.
    • The init_root_username option in concourse.prefs can be used to specify the username for the initial administrator account.
    • The init_root_password option in concourse.prefs can be used to specify the password for the initial administrator account.
  • Exposed the default JMX port, 9010, in the Dockerfile.
  • Fixed a bug that kept HELP documentation from being packaged with Concourse Shell and prevented it from being displayed.
  • Added a fallback option to display Concourse Shell HELP documentation in contexts when the less command isn't available (e.g., IDEs).
  • Fixed a bug that caused Concourse Server to unnecessarily add error logging whenever a client disconnected.
  • Added the ability to create ConnectionPools that copy the credentials and connection information from an existing handler. These copying connection pools can be created by using the respective "cached" or "fixed" factory methods in the ConnectionPool class that take a Concourse parameter (see the sketch below).
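
A minimal sketch of a copying ConnectionPool, assuming the new overloads reuse the existing newCachedConnectionPool/newFixedConnectionPool factory names and simply accept an existing Concourse handle (the pool size argument for the fixed variant is also an assumption):

```java
import com.cinchapi.concourse.Concourse;
import com.cinchapi.concourse.ConnectionPool;

public class CopyingPoolExample {

    public static void main(String[] args) {
        // An existing handle whose credentials and connection info will be copied
        Concourse concourse = Concourse.connect("localhost", 1717, "admin", "admin");

        // Assumed factory shapes: the overloads that take a Concourse parameter
        ConnectionPool cached = ConnectionPool.newCachedConnectionPool(concourse);
        ConnectionPool fixed = ConnectionPool.newFixedConnectionPool(concourse, 10);

        // Standard request/release usage
        Concourse pooled = cached.request();
        try {
            pooled.add("name", "jeff", 1);
        }
        finally {
            cached.release(pooled);
        }
    }
}
```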

Version 0.11.5

06 Nov 01:36
  • Fixed a bug that made it possible for a Transaction to silently fail and cause a deadlock when multiple distinct writes committed in other operations caused that Transaction to become preempted (e.g., unable to continue or successfully commit because of a version change).
  • Fixed a bug that allowed a Transaction's atomic operations (e.g., verifyAndSwap) to ignore range conflicts stemming from writes committed in other operations. As a result, the atomic operation would successfully commit within its Transaction, but the Transaction would inevitably fail due to the aforementioned conflict. The correct (and now current) behaviour is that the atomic operation fails (so it can be retried) without dooming the entire Transaction to failure.
  • Fixed a bug that caused an innocuous Exception to be thrown when importing CSV data using the interactive input feature of the concourse import CLI.
  • Fixed a bug that caused an inexplicable failure to occur when invoking a plugin method that indirectly depended on result set sorting.

Version 0.11.4

04 Jul 23:42
  • Slightly improved the performance of result sorting by removing unnecessary intermediate data gathering.
  • Improved random access performance for all result sets.

Version 0.11.3

04 Jun 21:00
  • Improved the performance of commands that select multiple keys from a record by adding heuristics to the storage engine to reduce the number of overall lookups required. As a result, commands that select multiple keys are up to 96% faster.
  • Streamlined the logic for reads that have a combination of time, order and page parameters by adding more intelligent heuristics for determining the most efficient code path. For example, a read that only has time and page parameters (e.g., no order) does not need to be performed atomically. Previously, those reads converged into an atomic code path, but now a separate code path exists so those reads can be more performant. Additionally, the logic is more aware of when attempts to sort or paginate data don't actually have an effect and now avoids unnecessary data transformations or re-collection.
  • Fixed a bug that caused Concourse Server to not use the Strategy framework to determine the most efficient lookup source (e.g., field, record, or index) for navigation keys.
  • Added support for querying on the intrinsic identifier of Records, as both a selection and evaluation key. The record identifier can be referenced using the $id$ key (NOTE: this must be properly escaped in concourse shell as \$id\$).
    • It is useful to include the Record identifier as a selection key for some navigation reads (e.g., select(["partner.name", "partner.\$id\$"], 1)).
    • It is useful to include the Record identifier as an evaluation key in cases where you want to explicitly exclude a record from matching a Condition (e.g., select(["partner.name", "partner.\$id\$"], "\$id\$ != 2")). See the sketch after this list.
  • Fixed a bug that caused historical reads with sorting to not be performed atomically; potentially violating ACID semantics.
  • Fixed a bug that caused commands to find data matching a Condition (e.g., Criteria or CCL Statement) to not be fully performed atomically; potentially violating ACID semantics.
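
A minimal Java sketch of the $id$ key in both roles, assuming the standard select(keys, record) and find(ccl) overloads; note that the backslash escaping is only required in Concourse Shell, not in client code:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.Set;

import com.cinchapi.concourse.Concourse;

public class RecordIdQueryExample {

    public static void main(String[] args) {
        Concourse concourse = Concourse.connect();

        // $id$ as a selection key alongside a navigation key
        Map<String, Set<Object>> data = concourse.select(
                Arrays.asList("partner.name", "partner.$id$"), 1);

        // $id$ as an evaluation key to explicitly exclude record 2 from the Condition
        Set<Long> matches = concourse.find("partner.name = jeff AND $id$ != 2");

        System.out.println(data);
        System.out.println(matches);
    }
}
```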

Version 0.11.2

18 Mar 20:39
  • Fixed a bug that caused Concourse Server to incorrectly detect when an attempt was made to atomically commit multiple Writes that toggle the state of a field (e.g. ADD name as jeff in 1, REMOVE name as jeff in 1, ADD name as jeff in 1) in user-defined transactions. As a result of this bug, all field toggling Writes were committed instead of the desired behaviour of committing at most one equal Write required to obtain the intended field state. Committing multiple writes that toggled the field state within the same transaction could cause failures, unexplained results or fatal inconsistencies when reconciling data.
  • Added a fallback to automatically switch to reading data from Segment files using traditional IO in the event that Concourse Server ever exceeds the maximum number of open memory mapped files allowed (as specified by the vm.max_map_count property on some Linux systems).
  • Removed the DEBUG logging (added in 0.11.1) that provides details on the execution path chosen for each lookup because it is too noisy and drastically degrades performance.
  • Fixed a bug in the way that Concourse Server determined if duplicate data existed in the v3 storage files, which caused the concourse data repair CLI to no longer work properly (compared to how it worked on the v2 storage files).
  • Fixed a regression that caused a memory leak when data values were read from disk. The nature of the memory leak caused a degradation in performance because Concourse Server was forced to evict cached records from memory more frequently than in previous versions.

Version 0.11.1

10 Mar 01:34
  • Upgraded to CCL version 3.1.2 to fix a regression that caused parenthetical expressions within a Condition containing LIKE, REGEX, NOT_LIKE and NOT_REGEX operators to mistakenly throw a SyntaxException when being parsed.
  • Added the ConcourseCompiler#evaluate(ConditionTree, Multimap) method that uses the Operators#evaluate static method to perform local evaluation.
  • Fixed a bug that, in some cases, caused the wrong default environment to be used when invoking server-side data CLIs (e.g., concourse data <action>). When a data CLI was invoked without specifying the environment using the -e <environment> flag, the default environment was always used instead of the default_environment that was specified in the Concourse Server configuration.
  • Fixed a bug that caused the concourse data compact CLI to inexplicably die when invoked while enable_compaction was set to false in the Concourse Server configuration.
  • Fixed the usage message description of the concourse export and concourse import CLIs.
  • Fixed a bug that caused Concourse Shell to fail to parse short syntax within statements containing an open parenthesis as described in GH-463 and GH-139.
  • Fixed a bug that caused the Strategy framework to select the wrong execution path when looking up historical values for order keys. This caused a regression in the performance for relevant commands.
  • Added DEBUG logging that provides details on the execution path chosen for each lookup.
  • Fixed a bug that caused Order/Sort instructions that contain multiple clauses referencing the same key to drop all but the last clause for that key.
  • Fixed a bug that caused the concourse export CLI to not process some combinations of command line arguments properly.
  • Fixed a bug that caused an error to be thrown when using the max or min function over an entire index as an operation value in a CCL statement (see the sketch after this list).
  • Fixed several corner case bugs with Concourse's arithmetic engine that caused the calculate functions to 1) return inaccurate results when aggregating numbers of different types and 2) inexplicably throw an error when a calculation was performed on data containing null values.
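
As a hedged illustration of the max/min fix, a CCL statement that uses the max function over an entire index as an operation value might look like the following sketch (the function syntax shown is assumed from the CCL Support notes in 0.11.0):

```java
import java.util.Set;

import com.cinchapi.concourse.Concourse;

public class IndexFunctionConditionExample {

    public static void main(String[] args) {
        Concourse concourse = Concourse.connect();

        // Find records whose age equals the maximum age across the entire index;
        // previously, using max(age) as the operation value threw an error
        Set<Long> oldest = concourse.find("age = max(age)");
        System.out.println(oldest);
    }
}
```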

Version 0.11.0

04 Mar 15:11
BREAKING CHANGES

There is only PARTIAL COMPATIBILITY between

  • a 0.11.0+ client and an older server, and
  • a 0.11.0+ server and an older client.

Due to changes in Concourse's internal APIs,

  • An older client will receive an error when trying to invoke any audit methods on a 0.11.0+ server.
  • An older server will return an error when any audit or review methods are invoked from a 0.11.0+ client.
Storage Format Version 3
  • This version introduces a new, more concise storage format where Database files are now stored as Segments instead of Blocks. In a segment file (.seg), all views of indexed data (primary, secondary, and search) are stored in the same file whereas a separate block file (.blk) was used to store each view of data in the v2 storage format. The process of transporting writes from the Buffer to the Database remains unchanged. When a Buffer page is fully transported, its data is durably synced in a new Segment file on disk.
  • The v3 storage format should reduce the number of data file corruptions because there are fewer moving parts.
  • An upgrade task has been added to automatically copy data to the v3 storage format.
    • The upgrade task will not delete v2 data files, so be mindful that you will need twice the amount of data space available on disk to upgrade. You can safely manually delete the v2 files after the upgrade. If the v2 files remain, a future version of Concourse may automatically delete them for you.
  • In addition to improved data integrity, the v3 storage format brings performance improvements to all operations because of more efficient memory management and smarter usage of asynchronous work queues.
Atomic Commit Timestamps

All the writes in a committed atomic operation (e.g. anything from primitive atomics to user-defined transactions) will now have the same version/timestamp. Previously, when an atomic operation was committed, each write was assigned a distinct version. But, because each atomic write was applied as a distinct state change, it was possible to violate ACID semantics after the fact by performing a partial undo or partial historical read. Now, the version associated with each write is known as the commit version. For non-atomic operations, autocommit is in effect, so each write continues to have a distinct commit version. For atomic operations, the commit version is assigned when the operation is committed and is applied to each atomic write. As a result, all historical reads will either see all or see none of the committed atomic state and undo operations (e.g. clear, revert) will either affect all or affect none of the committed atomic state.
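
For example, in a user-defined transaction (a minimal sketch using the stage/commit API), every write now shares the same commit version:

```java
import com.cinchapi.concourse.Concourse;

public class CommitVersionExample {

    public static void main(String[] args) {
        Concourse concourse = Concourse.connect();

        concourse.stage(); // begin a user-defined transaction
        concourse.add("name", "jeff", 1);
        concourse.add("age", 30, 1);
        concourse.commit(); // both writes receive the same commit version/timestamp

        // Outside of an atomic operation, autocommit applies, so this write
        // gets its own distinct commit version
        concourse.add("title", "founder", 1);
    }
}
```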

Optimizations
  • The storage engine has been optimized to use less memory when indexing by de-duplicating and reusing equal data components. This drastically reduces the amount of time that the JVM must dedicate to Garbage Collection. Previously, when indexing, the storage engine would allocate new objects to represent data even if equal objects were already buffered in memory.
  • We switched to a more compact in-memory representation of the Inventory, resulting in a reduction of its heap space usage by up to 97.9%. This has an indirect benefit to overall performance and throughput by reducing memory contention that could lead to frequent JVM garbage collection cycles.
  • Improved user-defined transactions by detecting when an attempt is made to atomically commit multiple Writes that toggle the state of a field (e.g. ADD name as jeff in 1, REMOVE name as jeff in 1, ADD name as jeff in 1) and only committing at most one equal Write that is required to obtain the intended state. For example, in the previous example, only 1 write for ADD name as jeff in 1 would be committed.
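
A minimal sketch of the toggle scenario described above; within the transaction, the three writes reduce to a single committed ADD:

```java
import com.cinchapi.concourse.Concourse;

public class ToggleWriteExample {

    public static void main(String[] args) {
        Concourse concourse = Concourse.connect();

        concourse.stage();
        concourse.add("name", "jeff", 1);    // ADD name as jeff in 1
        concourse.remove("name", "jeff", 1); // REMOVE name as jeff in 1
        concourse.add("name", "jeff", 1);    // ADD name as jeff in 1
        concourse.commit(); // only one equal ADD write is committed to reach the intended state
    }
}
```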
Performance
  • We improved the performance of commands that sort data by an average of 38.7%. These performance improvements are the result of a new Strategy framework that allows Concourse Server to dynamically choose the optimal path for data lookups depending upon the entire context of the command and the state of the storage engine. For example, when sorting a result set on key1, Concourse Server will now intelligently decide to look up the values across key1 using the relevant secondary index if key1 is also a condition key. Alternatively, Concourse Server will decide to look up the values across key1 using the primary key for each impacted record if key1 is also being explicitly selected as part of the operation.
  • Search is drastically faster as a result of the improved memory management that comes with the v3 storage format as well as some other changes to the way that search indexes are read from disk and represented in memory. As a result, search performance is up to 95.52% faster on real-world data.
New Functionality
  • Added trace functionality to atomically locate and return all the incoming links to one or more records. The incoming links are represented as a mapping from key to a set of records where the key is stored as a Link to the record being traced (see the sketch after this list).
  • Added consolidate functionality to atomically combine data from one or more records into another record. The records from which data is merged are cleared and all references to those cleared records are replaced with the consolidated record on the document-graph.
  • Added the concourse-export framework which provides the Exporter construct for building tools that print data to an OutputStream in accordance with Concourse's multi-valued data format (e.g. a key mapped to multiple values will have those values printed as a delimited list). The Exporters utility class contains built-in exporters for exporting in CSV and Microsoft Excel formats.
  • Added an export CLI that uses the concourse-export framework to export data from Concourse in CSV format to STDOUT or a file.
  • For CrossVersionTests, as an alternative to using the Versions annotation, added the ability to define test versions in a no-arg static method called versions that returns a String[]. Using the static method makes it possible to centrally define the desired test versions in a static variable that is shared across test classes (see the sketch after this list).
  • The server variable in a ClientServerTest (from the concourse-ete-test-core framework) now exposes the server configuration from the prefs() method to facilitate programmatic configuration management within tests.
  • Added the ability to configure the location of the access credentials file using the new access_credentials_file preference in concourse.prefs. This makes it possible to store credentials in a more secure directory that is also protected against instances when the concourse-server installation directory is deleted. Please note that changing the value of access_credentials_file does not migrate existing credentials. By default, credentials are still stored in the .access file within the root of the concourse-server installation directory.
  • Added a separate log file for upgrade tasks (log/upgrade.log).
  • Added a mechanism for failed upgrade tasks to automatically perform a rollback that'll reset the system state to be consistent with the state before the task was attempted.
  • Added PrettyLinkedHashMap.of and PrettyLinkedTableMap.of factory methods that accept an analogous Map as a parameter. The input Map is lazily converted into one with a pretty toString format on-demand. In cases where a Map is not expected to be rendered as a String, but should be pretty if it is, these factories return a Map that defers the overhead of prettification until it is necessary.
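
A hedged sketch of the trace and consolidate functionality described above; the method names come from the release notes, but the exact signatures and return types shown here are assumptions:

```java
import java.util.Map;
import java.util.Set;

import com.cinchapi.concourse.Concourse;

public class TraceConsolidateExample {

    public static void main(String[] args) {
        Concourse concourse = Concourse.connect();

        // Assumed signature: return every incoming link to record 1 as a mapping
        // from the linking key to the set of records that link via that key
        Map<String, Set<Long>> incoming = concourse.trace(1);
        System.out.println(incoming);

        // Assumed signature: merge the data from records 2 and 3 into record 1;
        // records 2 and 3 are cleared and links to them are repointed to record 1
        concourse.consolidate(1, 2, 3);
    }
}
```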
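For the CrossVersionTest change, a minimal sketch of the static versions method (the class name, package, and version strings here are hypothetical):

```java
import com.cinchapi.concourse.test.CrossVersionTest;

public class MyCrossVersionTest extends CrossVersionTest {

    // Versions can be centrally defined in a static variable shared across test classes
    public static final String[] TEST_VERSIONS = { "0.10.6", "0.11.0" };

    // No-arg static method named "versions" that returns a String[], used as an
    // alternative to the Versions annotation
    public static String[] versions() {
        return TEST_VERSIONS;
    }
}
```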
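And a small sketch of the new PrettyLinkedHashMap.of factory (import path assumed); the input Map is only converted to the pretty form if/when it is rendered as a String:

```java
import java.util.HashMap;
import java.util.Map;

import com.cinchapi.concourse.util.PrettyLinkedHashMap;

public class PrettyMapExample {

    public static void main(String[] args) {
        Map<String, Object> raw = new HashMap<>();
        raw.put("name", "jeff");
        raw.put("age", 30);

        // Defers the overhead of prettification until toString() is actually invoked
        Map<String, Object> maybePretty = PrettyLinkedHashMap.of(raw);
        System.out.println(maybePretty); // rendered with the pretty format on demand
    }
}
```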
CCL Support
  • Added support for specifying a CCL Function Statement as a selection/operation key, evaluation key (within a Condition), or evaluation value (within a Condition). A function statement can be provided as either the appropriate string form (e.g. function(key), function(key, ccl), key | function, etc) or the appropriate Java Object (e.g. IndexFunction, KeyConditionFunction, ImplicitKeyRecordFunction, etc). The default behaviour when reading is to interpret any string that looks like a function statement as a function statement. To perform a literal read of a string that appears to be a function statement, simply wrap the string in quotes. Finally, a function statement can never be written as a value.
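
A hedged sketch of reading with function statements, assuming the standard get(key, record) and select(keys, ccl) overloads; the string forms mirror those listed above:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.Set;

import com.cinchapi.concourse.Concourse;

public class FunctionStatementExample {

    public static void main(String[] args) {
        Concourse concourse = Concourse.connect();

        // "key | function" form used as a selection key against a single record
        Object avgScoreInRecord = concourse.get("score | avg", 1);

        // "function(key, ccl)" form used as a selection key across a Condition's matches;
        // to read a string like this literally instead, wrap it in quotes
        Map<Long, Map<String, Set<Object>>> data = concourse.select(
                Arrays.asList("name", "avg(score, score > 0)"), "age > 30");

        System.out.println(avgScoreInRecord);
        System.out.println(data);
    }
}
```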
Experimental Features
Compaction
  • Concourse Server can now be configured to compact data files in an effort to optimize storage and improve read performance. When enabled, compaction automatically runs continuously in the background without disrupting data consistency or normal operations (although the impact on operational throughput has yet to be determined). The initial rollout of compaction is intentionally conservative (e.g. the built-in strategy will likely only make changes to a few data files). While this feature is experimental, there is no ability to tune it, but we plan to offer additional preferences to tailor the behaviour in future releases.
  • Additionally, if enabled, performing compaction can be suggested to Concourse Server on an ad hoc basis using the new concourse data compact CLI.
    • Compaction can be enabled by setting the enable_compaction preference to true. If this setting is false, Concourse Server will not perform compaction automatically or when suggested to do so.
Search Caching
  • Concourse Server can now be configured to cache search indexes. This feature is currently experimental and turned off by default. Enabling the search cache will further improve the performance of repeated searches by up to 200%, but there is additional overhead that can slightly decrease the throughput of overall data indexing. Decreased indexing throughput may also indirectly affect write performance.
    • The search cache can be...

Version 0.10.6

09 Sep 10:30
Removed limit on Block file sizes
  • Added support for storing data in Block files that are larger than 2147483647 bytes (e.g. ~2.147GB) and fixed bugs that existed because of the previous limitation:
    • If a mutable Block exceeded the previous limit in memory it was not synced to disk and the storage engine didn't provide an error or warning, so indexing continued as normal. As a result, there was the potential for permanent data loss.
    • When a mutable Block failed to sync in the manner described above, the data held in the Block remained completely in memory, resulting in a memory leak.
  • To accommodate the possibility of larger Block files, the BlockIndex now records position pointers using 8 bytes instead of 4. As a result, all Block files must be reindexed, which is automatically done when Concourse Server starts after a new installation or upgrade.
Eliminated risks of data inconsistency caused by premature shutdown
  • Fixed the logic that prevents duplicate data indexing when Concourse Server prematurely shuts down or the background indexing job terminates because of an unexpected error. The logic was previously implemented to address CON-83, but it relied on data values instead of data versions and was therefore not robust enough to handle corner cases described in GH-441 and GH-442.
    • A concourse data repair CLI has been added to detect and remediate data files that are corrupted because of the abovementioned bugs. The CLI can be run at any time. If no corrupt data files are detected, the CLI has no effect.
    • Upon upgrading to this version, as a precaution, the CLI's routine is run for each environment.
Other
  • Fixed a bug that caused the default log_level to be DEBUG instead of INFO.

Version 0.10.5

23 Aug 00:55
  • Fixed a bug where sorting on a navigation key that isn't fetched (e.g. using a navigation key in a find operation or not specifying the navigation key as an operation key in a get or select operation) caused the result set to be returned in the incorrect order.
  • Upgraded CCL version to 2.6.3 in order to fix a parsing bug that occurred when creating a Criteria containing a String or String-like value with a whitespace or equal sign (e.g. =) character.
  • Fixed a bug that made it possible to store circular links (e.g. a link from a record to itself) when atomically adding or setting data in multiple records at once.
  • Fixed a race condition that occurred when multiple client connections logged into a non-default environment at the same time. The proper concurrency controls weren't in place, so the simultaneous connection attempts, in many cases, caused the Engine for that environment to be initialized multiple times. This did not cause any data duplication issues (because only one of the duplicate Engines would be recognized at any given time), but it could cause an OutOfMemoryException if that corresponding environment had a lot of metadata to be loaded into memory during initialization.

Version 0.10.4

15 Dec 21:24
  • Added support for using the LIKE, NOT_LIKE and LINKS_TO operators in the TObject#is methods.
  • Fixed a bug that made it possible for a ConnectionPool to refuse to accept the release of a previously issued Concourse connection due to a race condition.
  • Fixed a bug that made it possible for Concourse to violate ACID consistency when performing a concurrent write to a key/record alongside a wide read in the same record.
  • Fixed a bug that caused inconsistencies in the intrinsic order of the result set records from a find operation vs a select or get operation.