Welcome to the GS1 Digital Link Resolver

Community Edition v2.4

Welcome! This repository enables you to build a complete resolver service with which you can enter information about GTINs and other GS1 keys, and resolve (that is, redirect) web clients to their appropriate destinations.

Overview

The GS1 Digital Link Standard enables consistent representation of GS1 identification keys within web addresses to link to online information and services.

A GS1 Digital Link URI is a Web URI that encodes one or more GS1 Application Identifiers and their value(s) according to the structure defined in this standard. So, for example, a product's barcode value (its 'GTIN') can be converted into a consistently formatted web address that connects the product to online information about it.
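For instance, the GTIN used in the worked example later in this README, 09506000134376, can be written as the following GS1 Digital Link URI ('01' is the GS1 Application Identifier for GTIN):

https://id.gs1.org/01/09506000134376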

GS1 is an organisation that defines a wide range of identifiers that underpin the supply chain and retail industries across the world. These are known as 'GS1 Application Identifiers'.

A Resolver, in the context of GS1 Digital Link, understands HTTPS requests conforming to the GS1 Digital Link standard and resolves them - that is, performs a 'smart redirect' - to an onward destination.

Versions

Version 2.4 Features

  1. Various bug fixes / code changes from triallists' feedback
  2. Performance improvements to existing code
  3. Express routes applied for GS1 Identifier Key Types: GTIN, GLN, GLNX, SSCC, GRAI, GIAI, GSRN, GDTI, GINC, GSIN, GCN, CPID, GMN
  4. Changed all instances of defaultLink* to defaultLinkMulti
  5. Compressed URIs implemented
  6. Updated rules that generate a 400 Bad Request
  7. Updated Node from v15.1 on Alpine Linux 3.12 to v15.11 on Alpine Linux 3.13, fixing an increasingly old version of npm and taking advantage of performance improvements and bug fixes in Node's V8 runtime.

Version 2.3 Features

  1. New JSON output format conforming to the IETF Linkset standard.
  2. New extended format for Mongo documents that reduces the processing overhead of the resolving web server, thus improving performance.
  3. New HTTP 303 'See Other' return code enabling clients to get more general info about an entry, if the specific lot or serial number is not present (part of the 'walking up the tree' functionality).
  4. New HTTP 410 'Gone' return code when an entry is present in the database but its 'active' flag is set to false (as opposed to HTTP 404 'Not Found', which is returned when no entry exists at all).
  5. Improvements to GS1 Digital Link Toolkit library.
  6. Various bug fixes and improvements to the applications, thanks to developer and triallist feedback.

Version 2.2 Features

  1. URI Template Variables - instead of using static values for qualifiers such as serial number, you can use a string value wrapped in curly braces, like this: {myvar}. See the example in the CSV file resolverdata.csv in the 'Example Files To Upload' folder, and the sketch after this list.
  2. Simplified linktype=all JSON document
  3. New linktype=linkset JSON document
  4. Massively reduced container image sizes. Using the latest version of Node and NPM with its updated packages, we can now run most of the service in the tiny Alpine Linux containers.
  5. Better access to SQL via pooling - this makes better use of cloud-based databases such as SQL Azure (as well as dedicated databases)
  6. Lots of optimisations, enhancements and security improvements.
  7. Optimised for working in Kubernetes clusters - tested on DigitalOcean and Microsoft Azure Kubernetes offerings.
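As a sketch of the URI template variables mentioned in point 1 (the destination URL shape is an illustrative assumption; the GTIN and serialnumber values match the worked example later in this README), a response whose destination link is authored as:

https://dalgiardino.com/medicinal-compound/?serialnumber={serialnumber}

...would redirect the incoming request http://localhost/gtin/09506000134376?serialnumber=12345 to https://dalgiardino.com/medicinal-compound/?serialnumber=12345, carrying the client's serial number through to the destination.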

Important Notes for existing users of previous versions 1.0 and 1.1

Version 2 is a brand-new resolving architecture that is not backwards compatible with versions 1.0 or 1.1, reflecting big changes to the design and architecture of the service. These changes were made to provide:

  • More complete compliance with the GS1 Digital Link standard
  • Performance and security improvements
  • Rewrite of the id_web_server from PHP 7.3 to Node.JS v13.7
  • Removal of separate Digital Link Toolkit server - now integrated into id_web_server
  • Removal of experimental unixtime service (unixtime downloads will be revisited at later time)

If you are using earlier versions of Resolver, contact Nick Lansley (nick@lansley.com) for advice on copying the data from the old SQL format to the new much simpler SQL format. You should stop using the older v1.x service and transition to this version as soon as possible.


Upgrading

Important Notes for existing users of previous version 2.0

The main upgrade is to resolver_data_entry_server, which now supports batch uploading of data and an optional validation process that you can harness to check uploaded entries before they are published. This has resulted in a data structure change: data is uploaded into SQL tables with a '_prevalid' suffix, and a validation process is then kicked off which, for each entry that passes, copies the data into the equivalent table without the _prevalid suffix.

To install this new update, make sure all your data is backed up(!), then use the 'docker-compose build' and 'docker-compose up -d' commands over the top of your existing installation, and run the SQL create script as documented in Fast Start step 8 below. This will create a SQL database called "gs1-resolver-ce-v2-1-db" with the updated structure alongside your existing SQL database "gs1-resolver-ce-v2-db". The containers point to the new SQL database, but the Mongo database is unchanged and will continue serving existing data. You will have the extra step of copying data between the databases but, apart from the _prevalid tables, you will find the structure familiar. Note that a few column names have been changed to conform better to GS1 naming conventions for data properties, but the format of the data in the columns is unchanged.
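In practice (using the container name that appears in the Fast Start section), the upgrade sequence looks like this:

docker-compose build
docker-compose up -d
docker exec -it resolver-sql-server bash
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P its@SECR3T! -i /gs1resolver_sql_scripts/sqldb_create_script.sql
exit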

Note also that, unlike v2.0, you will see only one running instance of resolver-web-server rather than five. Running multiple servers has become unnecessary thanks to the latest Node v14 V8 engine and a lot of code optimisation. Fast!

Finally, by popular request, docker-compose exposes the web service on port 80, no longer port 8080. It also exposes SQL Server and MongoDB on their default ports, so you can use your favourite SQL Server and MongoDB clients to connect to localhost with the credentials supplied in the SQL and Mongo Dockerfiles.


Important Notes for existing users of previous version 2.1

In v2.2 the JSON format for linktype=all has been greatly simplified; this is a breaking change if you have a client that expects the previous format. The unixtime batch format also uses the new format.

We have upgraded the security of the service in many ways. An important new environment variable in the Dockerfile of resolver_data_entry_server is:

ENV CSP_NONCE_SOURCE_URL="localhost"

Wherever you run Resolver, you must change the domain name in this variable to match its 'live' domain name, or else the Data Entry UI JavaScript will be blocked from executing.
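For example, if your resolver is served from the hypothetical domain resolver.example.org, the line in the resolver_data_entry_server Dockerfile becomes:

ENV CSP_NONCE_SOURCE_URL="resolver.example.org"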

Emptying SQL table [server_sync_register] to initiate MongoDB rebuild:

The SQL database is unchanged, but we've greatly simplified the data stored in MongoDB. The changed document format is smaller and simpler to both use and understand! You therefore need to force the Build application to rebuild the Mongo database, or the new Resolver web server won't understand it. This is simple to do: using either the API or direct server access, empty the table [gs1-resolver-ce-v2-1-db].[dbo].[server_sync_register].

  • Using the free SQL Server Management Studio, head into the database and run this command:

truncate table [gs1-resolver-ce-v2-1-db].[dbo].[server_sync_register]

  • OR - using the API with your Admin auth key, call the endpoint that lists the servers:

curl --location --request GET 'https://resolver-domain-name/admin/heardbuildsyncservers'

[
  {
    "resolverSyncServerId": "qlh00O7z3JGk",
    "resolverSyncServerHostname": "build-sync-server-deployment-798854fb75-bvk4m",
    "lastHeardDatetime": "2020-06-15T08:35:12.840Z"
  }
]

...then delete each server using its resolverSyncServerId value:

curl --location --request DELETE 'https://resolver-domain-name/admin/heardbuildsyncserver/qlh00O7z3JGk'

  • OR - if you are running Resolver in Docker on your local machine, you can use the docker volume command to remove the volume in which Mongo stores its data. Make sure that the service is completely down using:

docker-compose down

...then use this command:

docker volume ls

...and look for a volume called 'gs1_digitallink_resolver_ce_resolver-document-volume'. You can then delete it like this:

docker volume rm gs1_digitallink_resolver_ce_resolver-document-volume

Finally, build and restart the new service:

docker-compose build
docker-compose up -d

Mongo will initialise a fresh, empty database, which the Build application will detect, triggering a full rebuild.


Important Notes for existing users of previous version 2.2

Just as for users upgrading to version 2.2, you will need to empty the table:

[gs1-resolver-ce-v2-1-db].[dbo].[server_sync_register]

Please follow the instructions in the 'Important Notes for existing users of previous version 2.1' section, under 'Emptying SQL table [server_sync_register]', to initiate a MongoDB rebuild of its data.

Important Notes for existing users of previous version 2.3

No changes to data, so you can simply upgrade the code and it will work with v2.3 data structures in SQL and Mongo.

If you have started authoring client apps that use Resolver's linkset defaultLinkType* array, you will need to refer to this array by its new name, defaultLinkTypeMulti. This change came about because of syntax challenges with the asterisk symbol, which risked being converted to its HTML character-code or entity equivalents.


Documentation

Please refer to the document 'GS1 Resolver - Overview and Architecture.pdf' in the root of this repository. This README contains a useful subset of the information contained there, but please refer to that PDF for the complete picture.

Architecture

The community edition of the GS1 Digital Link Resolver is an entirely self-contained set of applications, complete with databases and services for data entry and resolving.

We chose a Docker-based containerisation or micro-services architecture model for GS1 Digital Link Resolver for these reasons:

  • The need for end-users to build and host a reliable application free from issues with different versions of database drivers and programming languages.
  • Should a container fail (equivalent of a computer crash) the Docker Engine can instantly start a fresh copy of the container, thus maintaining service.
  • It is simple to scale-up the service by running multiple instances of containers with load-balancing.
  • Most cloud computing providers have the ability to host containers easily within their service platforms.

It is for these reasons that this type of architecture has become so popular.

This repository consists of seven applications which work together to provide the resolving service:

  • resolver_data_entry_server - The Data Entry service dataentry-web-server: an API that provides controlled access to Create, Read, Update and Delete (CRUD) operations on resolver records, along with a web-based example user interface that allows easy data entry of this information (and uses the API to perform its operations). This project uses a SQL Server database to store information.
  • build_sync_server - This service runs a 'Build' process once a minute (configurable in its Dockerfile) that takes any changes to the data in the SQL database and builds a document for each GS1 key and value, which will be used by...
  • resolver_web_server - The resolving service resolver-web-server, completely rewritten in Node.js for improved performance and scalability, which can be used by client applications that supply a GS1 key and value according to the GS1 Digital Link standard. This service performs a high-speed lookup of the specified GS1 key and value, and returns the appropriate redirection where possible.
  • resolver_sql_server - The SQL database service dataentry-sql-server, using SQL Server 2017 Express edition (free to use, but with a 10GB limit) to provide a stable data storage service for the resolver's data-entry needs.
  • resolver_mongo_server - The resolver-mongo-server MongoDB database used by the resolver.
  • frontend_proxy_server - The frontend web server routing traffic securely to the other containers. Using NGINX, this server's config can be adjusted to support load balancing and more.
  • digitallink_toolkit_server - A library server, available to all the other container applications, that tests incoming data against the official reference implementation of the GS1 Digital Link standard.

[Diagram: GS1 Resolver Community Edition v2.0 Architecture]

Note: in the above diagram you will see five running resolver-web-servers. Unlike v2.0, running multiple web servers has become unnecessary in v2.1 thanks to the latest Node V8 engine and a lot of code optimisation! There is now just one resolver-web-server set up by docker-compose.

Web Servers

The only outward-facing web server is frontend-proxy-server, which proxies client requests to the /ui/ data entry web application and the /api/ API service through to resolver_data_entry_server, which provides both. All requests that are not /ui/ or /api/ are sent to resolver-web-server.
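As a minimal sketch of the kind of routing this implies in NGINX terms (the upstream host names and port are assumptions for illustration; see frontend_proxy_server for the real config):

# Hypothetical routing sketch only - not the shipped frontend_proxy_server config
server {
    listen 80;

    # Data entry UI and API go to the data entry server
    location /ui/ {
        proxy_pass http://resolver-data-entry-server:80;
    }
    location /api/ {
        proxy_pass http://resolver-data-entry-server:80;
    }

    # Everything else is treated as a Digital Link resolving request
    location / {
        proxy_pass http://resolver-web-server:80;
    }
}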

Build server

The BUILD server looks for changes in the SQL database and uses them to create documents in the MongoDB database. This 'de-coupled' processing means that the data is simple to understand for data entry purposes, but is repurposed into a more complex structure for highly performant resolving. MongoDB can perform high-speed lookups and is ideal for high-performance reading of data.

Data Entry API

The Data Entry API is published here: https://documenter.getpostman.com/view/10078469/TVejgpjz

Database servers

This repository includes two extra containers for SQL Server and MongoDB. These are included to help you get up and running quickly to experiment and test the service. However, you are strongly advised to move to cloud-based versions - especially for SQL Server - and change the data connection strings stored in the Dockerfiles listed below. MongoDB can be left local as long as the volume it stores data on can be made 'permanent'.

  • resolver_data_entry_server stores the required SQL connection string in resolver_data_entry_server/Dockerfile
  • build_sync_server stores both SQL and MONGO connection strings in build_sync_server/Dockerfile
  • resolver_web_server stores its MONGO (only) connection string in resolver_web_server/Dockerfile

Disk volumes

Five 'disk' volumes are created for internal use by the service databases. Three volumes prefixed resolver-sql-server-volume-db- store the SQL database, and resolver-document-volume stores the Mongo document data, so that all the data survives the service being shut down or restarted. A further volume, resolver-sql-server-dbbackup-volume, is used to store a backup of the SQL Server database.
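Once the service has been started for the first time, listing the volumes should show names like these (the names below are the ones used in the shutdown commands later in this README; the DRIVER column is standard docker volume ls output):

docker volume ls

DRIVER    VOLUME NAME
local     gs1_digitallink_resolver_ce_resolver-document-volume
local     gs1_digitallink_resolver_ce_sql-server-dbbackup-volume
local     gs1_digitallink_resolver_ce_sql-server-volume-db-data
local     gs1_digitallink_resolver_ce_sql-server-volume-db-log
local     gs1_digitallink_resolver_ce_sql-server-volume-db-secrets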

SQL Server Database backup and restore

There are two not-yet-fully-tested backup and restore scripts for the SQL Server. To back up the server:

docker exec -it  dataentry-sql-server  /bin/bash /gs1resolver_data/setup/gs1resolver_dataentry_backupdb_script.sh

...and to restore it (there are issues with restore which are being worked on!):

docker exec -it  dataentry-sql-server  /bin/bash /gs1resolver_data/setup/gs1resolver_dataentry_restoredb_script.sh

How Resolver decides which response to choose

When you examine Resolver's data structure, you will see that it is defined by seven unique attributes:

  1. Identification Key Type (e.g. '01' for GTIN)
  2. Identification Key Value (the GTIN's 14-digit barcode value; 8-, 12- and 13-digit GTINs are zero-padded to make them 14 digits)
  3. Qualifier Path (often just the root path "/" but GTINs can have CPVs, lot number and serial number in the path, each separated by "/")
  4. LinkType (a word from the GS1 Web Vocabulary describing what sort of information is being requested. e.g. Product Info? User Manual?)
  5. Language (which the information is authored in)
  6. Context (which in Community Edition can be any appropriate data value - in GS1 GO Resolver this is 'territory', such as 'FR' for France)
  7. MimeType or MediaType (how the information is formatted. e.g. 'text/html', 'application/json')

In relational data terms, the Identification Key Type, Identification Key Value and Qualifier Path (1, 2, 3) form an 'entry' which can have one or more 'responses' based on unique combinations of LinkType, Language, Context and MimeType (4, 5, 6, 7).

For example, suppose I am scanning a medicine product because I would like to find the patient information leaflet: I want the leaflet (linktype 'gs1:epil') in French (language 'fr'), within the legal jurisdiction of Belgium (context 'BE'), in PDF format (mimeType 'application/pdf').

Resolver will see if it can match this exact request, but can fall back to defaults if not all the requested information is available in its database. Resolver follows these rules in order. (Note that the default linktype is found first and used as the linktype value if none is provided in the request. If no default linktype flag is set at all for this entry, the linktype in the first response of the responses array is chosen. Well, what else can it do?!)

  1. Linktype, Language, Context and mimeType
  2. Linktype, Language, Context and default mimeType
  3. Linktype, Language and Context
  4. Linktype, Language and default Context
  5. Linktype and Language
  6. Linktype and default Language
  7. Linktype
  8. FAIL (404 even though we have an entry as there is nothing we can do. Logically this can only happen if there aren't any responses in the response array).
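For example, a client wanting the French patient information leaflet for the product in the worked example below might send the request shown here. The linktype query parameter is used elsewhere in this README; passing the preferred language via the standard Accept-Language header is an assumption about how this build negotiates language:

curl -I "http://localhost/gtin/09506000134376?linktype=gs1:epil" -H "Accept-Language: fr"

If no French response has been entered for this linktype, the rules above mean Resolver falls back to the entry's default-language response rather than failing.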

Fast start

  1. Install the Docker system on your computer. Head to https://www.docker.com/products/docker-desktop for install details for Windows and Mac. If you are using Ubuntu Linux, follow install instructions here: https://docs.docker.com/install/linux/docker-ce/ubuntu/ - Note that if you wish to install on Windows Server editions, read the section on Windows Server at the foot of this README file.

  2. git clone this repository onto your computer.

  3. Open a terminal prompt (Mac and Linux) or PowerShell (Windows 10) and change directory to the one at the 'root' of this repository, so you can see the file docker-compose.yml in the current folder.

  4. Type this command:

    docker-compose config
    ...which should simply list the docker-compose.yml without error, and then type this command:
    docker info
    ...which will get Docker to check that all is well with the service and provide some run-time statistics. You may see some warnings appear, but if you're not seeing any errors then we're good to go.

  5. Make sure you have a good internet connection, and then type this command:

    docker-compose build
    ...which will cause Docker to build the complete end-to-end GS1 Resolver service. This will take only a short time given a fast internet connection, most of it taken up with downloading the SQL Server image.

  6. You are nearly ready to start the application. Before you do, make sure you have no SQL Server service (port 1433), MongoDB service (port 27017) or web server (port 80) running on your computer as they will clash with Docker as it tries to start the containers.

  7. Let's do this! Run docker-compose with the 'up' command to get Docker to spin up the complete end-to-end application:

    docker-compose up -d
    (the -d means 'detached' - docker-compose will start everything up, then hand control back to you).

  8. Now wait 10 seconds while the system settles down (the SQL Server service takes a few seconds to initialise when 'new'), then copy and paste this command, which will cause you to enter the container and access its terminal prompt:

    docker exec -it  resolver-sql-server bash
    Now run this command which will create the database and some example data:
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P its@SECR3T! -i  /gs1resolver_sql_scripts/sqldb_create_script.sql 
    You will see messages such as '(1 rows affected)' and a sentence that starts 'The module 'END_OF_DAY' depends on the missing object...'. These are all fine - the latter messages are shown because some stored procedures are created by the SQL script before others that they depend on, whose creation occurs further down the script. As long as the final line says 'Database Create Script Completed', all is well! Exit the container with the command:
    exit

  9. Head to http://localhost/ui and select the Download page.

  10. In the authorization key box, type: "5555555555555" and click the Download button. Save the file to your local computer.

  11. Click the link to go back to the home page, then choose the Upload page.

  12. Type in your authorization key (5555555555555), then choose the file you just downloaded. The Upload page detects the 'Download'-format file and will set all the columns correctly for you. Have a look at the example data in each column and what it means (read the final section of the PDF document for more details about these columns).

  13. Click 'Check file' followed by 'Upload file'.

  14. By now the local Mongo database should be built (a build event occurs every minute), so try out this request in a terminal window:

     curl -I http://localhost/gtin/09506000134376?serialnumber=12345 
    which should result in this appearing in your terminal window:

HTTP/1.1 307 Temporary Redirect
Server: nginx/1.19.0
Date: Mon, 09 Nov 2020 16:42:51 GMT
Connection: keep-alive
Vary: Accept-Encoding
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: HEAD, GET, OPTIONS
Access-Control-Expose-Headers: Link, Content-Length
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
X-Resolver-ProcessTimeMS: 9
Link: <https://dalgiardino.com/medicinal-compound/pil.html>; rel="gs1:epil"; type="text/html"; hreflang="en"; title="Product Information Page", <https://dalgiardino.com/medicinal-compound/>; rel="gs1:pip"; type="text/html"; hreflang="en"; title="Product Information Page", <https://dalgiardino.com/medicinal-compound/index.html.ja>; rel="gs1:pip"; type="text/html"; hreflang="ja"; title="Product Information Page", <https://id.gs1.org/01/09506000134376>; rel="owl:sameAs"
Location: https://dalgiardino.com/medicinal-compound/?serialnumber=12345

This demonstrates that Resolver has found an entry for GTIN 09506000134376 and is redirecting you to the web site shown in the 'Location' header. You can also see this in action if you use the same web address in your web browser: you should end up at the Dal Giardino web site. The rest of the information above reveals all the alternative links available for this product, depending on the context in which Resolver was called.

In this example, try changing the serial number - you will see it change in the resulting 'Location' header, too! This is an example of using 'URI template variables' to forward values from incoming requests into outgoing responses. This feature arrived in Resolver CE v2.2.
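For instance, repeating the earlier request with a different (arbitrary) serial number:

curl -I "http://localhost/gtin/09506000134376?serialnumber=67890"

...should produce a 'Location' header ending in ?serialnumber=67890.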

In the folder "Example Files to Upload" you will also find an Excel spreadsheet and a CSV file with the same data - you can upload Excel data too! This particular spreadsheet is the 'official GS1 Resolver upload spreadsheet', which the Upload page recognises, setting all the upload columns for you. However, any unencrypted Excel spreadsheet saved by Excel with the extension .xlsx can be read by the upload page.

Shutting down the service

  1. To close the entire application down, type this:
    docker-compose down
    Since the data is stored on Docker volumes, the data will survive the shutdown and be available when you 'up' the service again.
  2. If you wish to delete the volumes and thus wipe the data, type these commands:
docker volume rm gs1_digitallink_resolver_ce_resolver-document-volume
docker volume rm gs1_digitallink_resolver_ce_sql-server-dbbackup-volume
docker volume rm gs1_digitallink_resolver_ce_sql-server-volume-db-data
docker volume rm gs1_digitallink_resolver_ce_sql-server-volume-db-log
docker volume rm gs1_digitallink_resolver_ce_sql-server-volume-db-secrets

If the above volumes are the only ones in your Docker Engine then it's quicker to type:

docker volume ls 
to confirm, then to delete all the volumes type:
docker volume prune 


Fast Start: Kubernetes (Beta)

DISCLAIMER: These Kubernetes YAML scripts are currently under test and experimentation to get the best results. Be careful if you run these scripts on a cloud service, as they could create costly resources. You need to be skilled and experienced with Kubernetes to continue! You can also run Kubernetes on your own computer - these scripts have been tested with Docker Desktop for Windows 10 running in 'Kubernetes' mode.

The service is now ready for use with Kubernetes clusters. The container images are now maintained on Docker Hub and the supplied YAML files in this repository will get you up and running quickly.

  1. Make sure you are pointing at the correct K8s cluster context:
    kubectl config get-contexts
  2. Run this command to get your cluster to install the images and build the complete K8s application:
    kubectl apply -k ./
    Note: It can take several minutes for the SQL Server pod to start running. Until then, expect 'ContainerCreating' status when you list the running pods. Use the command 'kubectl get pods' regularly until the SQL Server pod has status 'Running'. You are always recommended to run SQL Server in a separate cloud resource such as SQL Azure, not in a pod!
  3. Once your cluster is up and running, you will need to run the SQL script to create the database with some example data. To do this, you need to find the SQL Server pod:
    kubectl get pods
    ...and locate a pod with 'sql-server' in its name, then use that name in this command:
    kubectl exec -it POD_name_containing_sql-server -- /bin/bash
    ...then once you have a command prompt inside the pod:
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P its@SECR3T! -i  /gs1resolver_sql_scripts/sqldb_create_script.sql
    ... and once that script has completed, exit the pod with:
    exit
  4. Now you should follow steps 9 to 14 in the previous 'Fast Start' section to try out the service.
  5. To shut down the service, use:
    kubectl delete -k ./
    ... which will delete all running pods and remove the data volumes.
  6. You will see that the Kubernetes application is set up using a set of YAML files in the /k8s folder of this repo. The kubectl apply -k ./ and kubectl delete -k ./ commands use the file kustomization.yaml in the root folder, which links to those files. However, you can make changes to individual YAML files to adjust the application, then apply them individually with the -f (file) command flag instead. For example:
    kubectl apply -f ./k8s/build-sync-server.yaml
  7. Kubernetes provides a resilient and scalable system for managing Resolver; we use it to run Resolver at GS1 Global Office, so it is worth taking a look at this method of running your own Resolver service.

The main structural differences between the Kubernetes configuration for this project and our service at GS1 Global Office are that:

  1. we use SQL Azure as a separate database service rather than having a version inside the cluster - the latter is not recommended as it takes up a huge amount of memory and CPU resources, and has no built-in backup capability!
  2. MongoDB is used within our cluster as it provides a high-performance read capability for the Resolver web server, since it is 'nearby' on the same internal network. However, rather than using a simple volume as set up by the YAML in this project, we use a StatefulSet, which provides a resilient hot backup of data should a node (virtual machine) in our cluster fail. With simple volumes, should the node crash, it takes a few seconds for Kubernetes to attach the Mongo data volume to a different node, causing an interruption in the service.
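A minimal sketch of what such a StatefulSet might look like (the names, image tag, replica count and storage size are illustrative assumptions, not the manifests we use at GS1 Global Office):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: resolver-mongo
spec:
  serviceName: resolver-mongo     # assumes a matching headless Service exists
  replicas: 3                     # replica-set members (rs.initiate not shown)
  selector:
    matchLabels:
      app: resolver-mongo
  template:
    metadata:
      labels:
        app: resolver-mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4
          command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
  volumeClaimTemplates:           # one persistent volume per pod
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi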

We hope you enjoy GS1 Resolver Community Edition!

Please be part of our Developers' forum at https://groups.google.com/g/gs1-digital-link-developers where you can get the latest updates sent to your mailbox.

Best regards
Phil Archer, Director, Web Solutions, GS1
Nick Lansley, Lead Developer, GS1 Digital Link Resolver Project
Rajesh Kumar Rana, Co-Developer, GS1 Digital Link Resolver Project
