# Project Documentation: The Use of NLP and DLT to Enable the Digitalization of Telecom Roaming Agreements
Table of Contents:
- Publications that support the project
- Repository overview
- How to use the repository
- Design criteria
- Implementation criteria
- How to modify
## Publications that support the project

The project is supported by two types of publications: Medium articles and scientific contributions.

The project has been documented through the following Medium articles:
- Blockchain-based digitization of the roaming agreement drafting process
- NLP Engine to detect variables, standard clauses, variations, and customized texts
- Chaincode design for managing the drafting of roaming agreements
- Chaincode implementation for managing the drafting of roaming agreements

Scientific contributions:
- *A Natural Language Processing Approach for the Digitalization of Roaming Agreements*. Sent to the conference ILCICT 2021. Current status: under review.
## Repository overview

This section describes the folders included in the project repository.
The `backend` folder contains:
- The APIs integrated into the `backend`.
- The Postman queries to register the admin and user of an MNO.
- A `Dockerfile` to build the backend image.
The `chaincode` folder contains:
- An `implementation` folder containing the smart contract created to manage the roaming agreement drafting.
- A `design` folder containing the chaincode design created with the App Diagrams tool.
The `frontend` folder contains:
- Source code for the `frontend` created in ReactJS.
- A `Dockerfile` to build the frontend image.
The `monitoring` folder contains:
- Configuration files for `Grafana`.
- Configuration files for `Prometheus`.
The `nlp-engine` folder contains:
- Source code for the `nlp-engine` created in Python.
- A `Dockerfile` to build the nlp-engine image.
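The task this engine performs, as described above (detecting standard clauses, variations, and customized texts), can be sketched with a similarity comparison. The snippet below is a hypothetical illustration only: the function name, thresholds, and sample clause are assumptions, not the repository's actual code.

```python
# Hypothetical sketch of clause classification for a roaming agreement:
# label a clause by its similarity to a reference standard clause.
# Function name, thresholds, and sample text are illustrative assumptions.
from difflib import SequenceMatcher

def classify_clause(clause: str, standard: str) -> str:
    """Label a clause as a standard clause, a variation, or customized text."""
    ratio = SequenceMatcher(None, clause.lower(), standard.lower()).ratio()
    if ratio > 0.95:
        return "standard clause"   # near-verbatim match
    if ratio > 0.6:
        return "variation"         # same clause with edits
    return "customized text"       # operator-specific wording

standard = "Charges shall be invoiced in accordance with GSMA BA.27."
print(classify_clause(standard, standard))  # standard clause
```

A production engine would use richer NLP features than raw string similarity, but the three-way labeling shown here mirrors the categories the articles describe.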
The `documentation` folder includes:
- An `images` folder with the images used in the documentation.
- A `readme` folder with the readme files that make up the documentation.
- A `swagger` folder with a JSON file for API documentation.
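As an illustration of the kind of JSON file the `swagger` folder holds, a minimal Swagger 2.0 fragment is sketched below; the title, version, and endpoint are assumptions, not the repository's actual API definition.

```json
{
  "swagger": "2.0",
  "info": {
    "title": "Roaming Agreement Backend API",
    "version": "1.0.0"
  },
  "paths": {
    "/user/register": {
      "post": {
        "summary": "Register a user of an MNO (hypothetical endpoint)",
        "responses": { "200": { "description": "User registered" } }
      }
    }
  }
}
```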
The `network` folder contains a set of subfolders to deploy each of the created services:
- Sub-folder `backend` includes the resources to deploy the `backend` and `Swagger` containers.
- Sub-folder `elk` includes the resources to deploy the `elasticsearch` cluster and `kibana`.
- Sub-folder `elk-agent` includes the resources to deploy the `filebeat` container agents.
- Sub-folder `frontend` includes the resources to deploy the `frontend` container.
- Sub-folder `hfb` includes the resources to deploy the `hfb` network.
- Sub-folder `monitoring` includes the resources to deploy the `Grafana` and `Kibana` containers.
- Sub-folder `nlp-engine` includes the resources to deploy the `nlp-engine`.
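To make the layout concrete, a deployment sketch for one of these subfolders is shown below. This is a hypothetical `docker-compose.yml`: the service name, build path, and port are assumptions rather than the repository's actual files.

```yaml
# Hypothetical docker-compose.yml for the nlp-engine sub-folder.
# Build path and port are illustrative assumptions.
version: "3"
services:
  nlp-engine:
    build: ../../nlp-engine   # would use the Dockerfile shipped with the nlp-engine
    ports:
      - "5000:5000"           # assumed API port
    restart: unless-stopped
```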
## How to use the repository

- Please make sure that you have set up the environment for the project. Follow the steps listed in the prerequisites.
- To get started with the project, clone the git repository into the go folder:

  ```console
  $ export GOPATH=$HOME/go
  $ mkdir -p $GOPATH/src/github.com
  $ cd $GOPATH/src/github.com
  $ git clone https://github.com/sfl0r3nz05/NLP-DLT.git
  ```
- To use the NLP-Engine, follow these instructions.
- To deploy the HFB-Network, follow these instructions.
- To deploy the ELK-Infrastructure, follow these instructions.
- To deploy the Filebeat-Agent, follow these instructions.

  ⭐ The Filebeat-Agent is based on the Linux Foundation project *Blockchain Analyzer: Analyzing Hyperledger Fabric Ledger, Transactions*.
- The Backend of the project:
  - To deploy the Backend, follow these instructions.
  - To monitor the Backend, follow these instructions.
  - The Backend has been documented through Swagger, which is deployed along with the Backend. Details on how to modify Swagger are provided in the How to modify section.
| Demo NLP Part | Demo rest of project |
|---|---|
## Design criteria

This part is under development...

- Details of the `chaincode` design are available here.
## Implementation criteria

This part is under development...

- Details of the `chaincode` implementation are available here.
## How to modify

This part is under development...

- To modify the `NLP-Engine`, follow these instructions.
- How to modify the `chaincode`.
- To modify the `swagger` documentation, follow these instructions.