diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
new file mode 100644
index 000000000..bafd57195
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -0,0 +1,103 @@
+name: Bug report
+title: "[Bug] "
+description: Problems and issues with code of Exchangis
+labels: [bug, triage]
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Thank you for reporting the problem!
+ Please make sure what you are reporting is a bug with reproducible steps. To ask questions
+ or share ideas, please post on our [Discussion page](https://github.com/WeBankFinTech/Exchangis/discussions) instead.
+
+ - type: checkboxes
+ attributes:
+ label: Search before asking
+ description: >
+ Please make sure to search in the [issues](https://github.com/WeBankFinTech/Exchangis/issues) first to see
+ whether the same issue was reported already.
+ options:
+ - label: >
+ I searched the [issues](https://github.com/WeBankFinTech/Exchangis/issues) and found no similar
+ issues.
+ required: true
+
+ - type: dropdown
+ attributes:
+ label: Exchangis Component
+ description: |
+ What component are you using? Exchangis has many modules, please make sure to choose the module in
+ which you found the bug.
+ multiple: true
+ options:
+ - "exchangis-datasource"
+ - "exchangis-job-launcher"
+ - "exchangis-job-server"
+ - "exchangis-job-builder"
+ - "exchangis-job-metrics"
+ - "exchangis-project"
+ - "exchangis-plugins"
+ - "exchangis-dao"
+ - "exchangis-web"
+ validations:
+ required: true
+
+ - type: textarea
+ attributes:
+ label: What happened + What you expected to happen
+ description: Describe 1. the bug 2. the expected behavior 3. useful information (e.g., logs)
+ placeholder: >
+ Please provide the context in which the problem occurred, explain what happened, and describe
+ clearly and concisely what you expected to happen instead. Steps to reproduce the behavior:
+ 1. Go to '...' 2. Click on '...' 3. Scroll down to '...' 4. See error.
+ Please also explain why you think the behaviour is erroneous. It is extremely helpful if you can
+ copy and paste the fragment of logs showing the exact error messages or wrong behaviour here.
+
+ **NOTE**: If applicable, add screenshots to help explain your problem.
+ validations:
+ required: true
+
+ - type: textarea
+ attributes:
+ label: Relevant platform
+ description: The platform on which this issue occurred
+ placeholder: >
+ Please specify the platform (Desktop or Smartphone), version, dependencies, OS and browser.
+ validations:
+ required: true
+
+ - type: textarea
+ attributes:
+ label: Reproduction script
+ description: >
+ Please provide a reproducible script. Providing a narrow reproduction (minimal / no external dependencies) will
+ help us triage and address the issue in a timely manner!
+ placeholder: >
+ Please provide a short code snippet (less than 50 lines if possible) that can be copy-pasted to
+ reproduce the issue. The snippet should have **no external library dependencies**
+ (i.e., use fake or mock data / environments).
+
+ **NOTE**: If the code snippet cannot be run by itself, the issue will be marked as "needs-repro-script"
+ until the repro instruction is updated.
+ validations:
+ required: true
+
+ - type: textarea
+ attributes:
+ label: Anything else
+ description: Anything else we need to know?
+ placeholder: >
+ How often does this problem occur? (Once? Every time? Only when certain conditions are met?)
+ Any relevant logs to include? Are there other relevant issues?
+
+ - type: checkboxes
+ attributes:
+ label: Are you willing to submit a PR?
+ description: >
+ This is absolutely not required, but we are happy to guide you in the contribution process
+ especially if you already have a good understanding of how to implement the fix.
+ options:
+ - label: Yes, I am willing to submit a PR!
+
+ - type: markdown
+ attributes:
+ value: "Thanks for completing our form!"
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 000000000..7c34114e9
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,5 @@
+blank_issues_enabled: false
+contact_links:
+ - name: Ask a question or get support
+ url: https://github.com/WeBankFinTech/Exchangis/discussions
+ about: Ask a question or request support for using Exchangis
\ No newline at end of file
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml
new file mode 100644
index 000000000..357f173ff
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.yml
@@ -0,0 +1,63 @@
+name: Exchangis feature request
+description: Suggest an idea for Exchangis project
+title: "[Feature] "
+labels: [enhancement]
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Thank you for finding the time to propose a new feature!
+ We really appreciate the community efforts to improve Exchangis.
+ - type: checkboxes
+ attributes:
+ label: Search before asking
+ description: >
+ Please make sure to search in the [issues](https://github.com/WeBankFinTech/Exchangis/issues) first to see
+ whether the same feature was requested already.
+ options:
+ - label: >
+ I searched the [issues](https://github.com/WeBankFinTech/Exchangis/issues) and found no similar
+ feature request.
+ required: true
+ - type: textarea
+ attributes:
+ label: Problem Description
+ description: Is your feature request related to a problem? Please describe.
+
+ - type: textarea
+ attributes:
+ label: Description
+ description: A short description of your feature
+
+ - type: textarea
+ attributes:
+ label: Use case
+ description: >
+ Describe the use case of your feature request.
+ placeholder: >
+ Describe the solution you'd like: a clear and concise description of what you want to happen.
+
+ - type: textarea
+ attributes:
+ label: Solutions
+ description: "Describe alternatives you've considered: a clear and concise description of any alternative solutions or features you've considered."
+
+ - type: textarea
+ attributes:
+ label: Anything else
+ description: Anything else we need to know?
+ placeholder: >
+ Add any other context or screenshots about the feature request here.
+
+ - type: checkboxes
+ attributes:
+ label: Are you willing to submit a PR?
+ description: >
+ This is absolutely not required, but we are happy to guide you in the contribution process
+ especially if you already have a good understanding of how to implement the feature.
+ options:
+ - label: Yes, I am willing to submit a PR!
+
+ - type: markdown
+ attributes:
+ value: "Thanks for completing our form!"
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
new file mode 100644
index 000000000..57e883bcd
--- /dev/null
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,28 @@
+### What is the purpose of the change
+(For example: Exchangis-Job defines the core ability of Exchangis, it provides the abilities of job management, job transform, and job launch.
+Related issues: #50. )
+
+### Brief change log
+(for example:)
+- defines the job server module of Exchangis;
+- defines the job launcher module of Exchangis;
+- defines the job metrics module of Exchangis.
+
+### Verifying this change
+(Please pick either of the following options)
+This change is a trivial rework / code cleanup without any test coverage.
+(or)
+This change is already covered by existing tests, such as (please describe tests).
+(or)
+This change added tests and can be verified as follows:
+(example:)
+- Added tests for creating and executing Exchangis jobs, and verified the availability of different Exchangis job types, such as Sqoop jobs and DataX jobs.
+
+### Does this pull request potentially affect one of the following parts:
+- Dependencies (does it add or upgrade a dependency): (yes / no)
+- Anything that affects deployment: (yes / no / don't know)
+- The core framework, i.e., JobManager, Server: (yes / no)
+
+### Documentation
+- Does this pull request introduce a new feature? (yes / no)
+- If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
\ No newline at end of file
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
new file mode 100644
index 000000000..5f93411ce
--- /dev/null
+++ b/.github/workflows/build.yml
@@ -0,0 +1,55 @@
+#
+# Copyright 2019 WeBank.
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+name: Exchangis CI Actions
+
+on:
+ push:
+ pull_request:
+
+jobs:
+ build:
+
+ runs-on: ubuntu-latest
+
+ strategy:
+ matrix:
+ node-version: [14.17.3]
+ # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
+
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v2
+ - name: Set up JDK 8
+ uses: actions/setup-java@v2
+ with:
+ distribution: 'adopt'
+ java-version: 8
+ - name: Use Node.js ${{ matrix.node-version }}
+ uses: actions/setup-node@v2
+ with:
+ node-version: ${{ matrix.node-version }}
+ - name: Build backend with Maven
+ run: |
+ mvn -N install
+ mvn clean package
+ - name: Build frontend with Node.js
+ run: |
+ cd web
+ npm install
+ npm run build
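+ # Local equivalent (a sketch): run "mvn -N install && mvn clean package" at the repo root,
+ # then "cd web && npm install && npm run build" for the frontend.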
diff --git a/.github/workflows/check_license.yml b/.github/workflows/check_license.yml
new file mode 100644
index 000000000..10e3f9fde
--- /dev/null
+++ b/.github/workflows/check_license.yml
@@ -0,0 +1,48 @@
+#
+# Copyright 2019 WeBank.
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+name: Exchangis License check
+
+on: [push, pull_request]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Checkout source
+ uses: actions/checkout@v2
+ - name: Set up JDK 8
+ uses: actions/setup-java@v2
+ with:
+ java-version: '8'
+ distribution: 'adopt'
+ - name: mvn -N install
+ run:
+ mvn -N install
+ - name: License check with Maven
+ run: |
+ rat_file=`mvn apache-rat:check | { grep -oe "\\S\\+/rat.txt" || true; }`
+ echo "rat_file=$rat_file"
+ if [[ -n "$rat_file" ]];then echo "check error!" && cat $rat_file && exit 123;else echo "check success!" ;fi
+ - name: Upload the report
+ uses: actions/upload-artifact@v2
+ with:
+ name: license-check-report
+ path: "**/target/rat.txt"
diff --git a/.gitignore b/.gitignore
index 90fdae240..46dec8ece 100644
--- a/.gitignore
+++ b/.gitignore
@@ -12,6 +12,7 @@ target
### IntelliJ IDEA ###
.idea
+*.log
*.iws
*.iml
*.ipr
@@ -26,3 +27,12 @@ target
.mvn/wrapper/maven-wrapper.jar
.mvn/wrapper/maven-wrapper.properties
/packages/
+exchangis-server/exchangis-extds
+/logs/
+/web/package-lock.json
+package-lock.json
+.DS_Store
+
+web/dist
+
+workspace/
\ No newline at end of file
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 000000000..294e26a9e
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,9 @@
+FROM harbor.local.hching.com/library/jdk:8u301
+
+ADD assembly-package/target/wedatasphere-exchangis-1.0.0-RC1.tar.gz /opt/wedatasphere-exchangis.tar.gz
+
+RUN cd /opt/wedatasphere-exchangis.tar.gz/packages/ && tar -zxf exchangis-server_1.0.0-RC1.tar.gz
+
+WORKDIR /opt/wedatasphere-exchangis.tar.gz/sbin
+
+ENTRYPOINT ["/bin/bash", "start.sh"]
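+
+# Minimal usage sketch (image tag and port mapping are illustrative; 9321 is the default server port):
+#   docker build -t exchangis:1.0.0-RC1 .
+#   docker run -d -p 9321:9321 exchangis:1.0.0-RC1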
diff --git a/README-ZH.md b/README-ZH.md
new file mode 100644
index 000000000..736279157
--- /dev/null
+++ b/README-ZH.md
@@ -0,0 +1,67 @@
+# Exchangis
+
+[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+[English](README.md) | 中文
+
+## Introduction
+
+Exchangis 1.0.0 is a new data exchange tool jointly developed by WeDataSphere, the big data platform of WeBank, and community users. It supports structured and unstructured data transmission and synchronization between heterogeneous data sources.
+
+Exchangis abstracts a unified set of data source and synchronization job definition plugins, allowing users to quickly integrate new data sources and use them on the page with only simple configuration in the database.
+
+Based on its plugin framework design and the computing middleware [Linkis](https://github.com/apache/incubator-linkis), Exchangis can quickly integrate the data synchronization engines already integrated by Linkis and convert Exchangis synchronization jobs into jobs of those engines.
+
+With the connection, reuse and simplification capabilities of the [Linkis](https://github.com/apache/incubator-linkis) computing middleware, Exchangis is inherently equipped with financial-grade data synchronization capabilities: high concurrency, high availability, multi-tenant isolation and resource control.
+
+### Interface preview
+
+![image](https://user-images.githubusercontent.com/27387830/171488936-2cea3ee9-4ef7-4309-93e1-e3b697bd3be1.png)
+
+## Core features
+
+### 1. Lightweight data source management
+
+- Based on Linkis DataSource, Exchangis abstracts all the capabilities that an underlying data source needs as the Source and Sink of a synchronization job. A data source can be created with simple configuration.
+
+- A dedicated data source version publishing management function supports rolling back to historical data source versions; one-click publishing requires no reconfiguration of historical data sources.
+
+
+### 2. Highly stable, fast-response data synchronization task execution
+
+- **Near-real-time task management**
+Quickly captures information such as task logs and transmission rate, monitors and displays multi-task metrics including CPU usage, memory usage and data synchronization records, and supports closing tasks in real time.
+
+- **Highly concurrent task transmission**
+Multiple tasks run concurrently and sub-tasks can be replicated, with the status of each task shown in real time; the multi-tenant execution function effectively prevents tasks from affecting each other during execution.
+
+- **Task status self-check**
+Monitors long-running tasks and tasks in abnormal states, terminates them and releases the occupied resources in time.
+
+
+### 3. Integrated with DSS workflows, a one-stop big data development portal
+
+- Implements DSS AppConn's three-level specifications, including the first-level SSO specification, the second-level organizational structure specification and the third-level development process specification.
+
+- As the data exchange node of a DSS workflow, it is the gateway step of the whole workflow chain, providing a solid data foundation for the subsequent workflow nodes.
+
+## Overall Design
+
+### Architecture Design
+
+![Architecture Design](https://user-images.githubusercontent.com/27387830/173026793-f1475803-9f85-4478-b566-1ad1d002cd8a.png)
+
+
+## Documents
+[Deployment documents](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/zh_CN/ch1/exchangis_deploy_cn.md)
+[User manual](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/zh_CN/ch1/exchangis_user_manual_cn.md)
+
+## Communication and contribution
+
+If you want to get the fastest response, please raise an issue to us, or scan the QR code below to join our group:
+
+![communication](images/zh_CN/ch1/communication.png)
+
+## License
+
+Exchangis is under the Apache 2.0 License. See the [License](LICENSE) file for details.
diff --git a/README.md b/README.md
index d48f79fa6..2c8975105 100644
--- a/README.md
+++ b/README.md
@@ -1,62 +1,67 @@
[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
-English | [中文](docs/zh_CN/ch1/README.md)
+English | [中文](README-ZH.md)
## Introduction
-Exchangis is a lightweight,highly extensible data exchange platform that supports data transmission between structured and unstructured heterogeneous data sources. On the application layer, it has business features such as data permission management and control, high availability of node services and multi-tenant resource isolation. On the data layer, it also has architectural characteristics such as diversified transmission architecture, module plug-in and low coupling of components.
-Exchnagis's transmission and exchange capabilities depend on its underlying aggregated transmission engines. It defines a unified parameter model for various data sources on the top layer. It maps and configures the parameter model for each transmission engine, and then converts it into the engine's input model. Each type of engine will add Exchangis features, and the enhancement of certain engine features will improve the Exchangis features. Exchangis's default engine aggregated and enhanced is Alibaba's DataX transmission engine.
+Exchangis 1.0.0 is a new data exchange tool jointly developed by WeDataSphere, the big data platform of WeBank, and community users. It supports structured and unstructured data transmission and synchronization between heterogeneous data sources.
-## Features
-- **Data Source Management**
-Share your own data source in a bound project;
-Set the external authority of the data source to control the inflow and outflow of data。
+Exchangis abstracts a unified set of data source and synchronization job definition plugins, allowing users to quickly integrate new data sources and use them on the page with only simple configuration in the database.
-- **Muti-transport Engine Support**
-Transmission engine scales horizontally;
-The current version fully aggregates the offline batch engine DataX and partially aggregates the big data batch derivative engine SQOOP
+Based on its plugin framework design and the computing middleware [Linkis](https://github.com/apache/incubator-linkis), Exchangis can quickly integrate the data synchronization engines available in Linkis and convert Exchangis synchronization jobs into Linkis data synchronization jobs.
-- **Near Real-time Task Control**
-Quickly capture the transmission task log, transmission rate and other information, close the task in real time;
-Dynamically limit transmission rate based on bandwidth
+With the connection, reuse and simplification capabilities of the [Linkis](https://github.com/apache/incubator-linkis) computing middleware, Exchangis is inherently equipped with financial-grade data synchronization capabilities: high concurrency, high availability, multi-tenant isolation and resource control.
-- **Support Unstructured Transmission**
-Transform the DataX framework and build a binary stream fast channel separately, suitable for pure data synchronization scenarios without data conversion。
+### Interface preview
-- **Task Status Self-check**
-Monitor long-running tasks and tasks with abnormal status, release occupied resources in time and issue alarms。
+![image](https://user-images.githubusercontent.com/27387830/171488936-2cea3ee9-4ef7-4309-93e1-e3b697bd3be1.png)
-## Comparison With Existing Systems
-Comparison of some existing data exchange tools and platforms:
+## Core characteristics
-| Function module | Description | Exchangis | DataX | Sqoop | DataLink | DBus |
-| :----: | :----: |-------|-------|-------|-------|-------|
-| UI | Integrated the convenient management interface and monitoring window | Integrated | None | None | Integrated |Integrated |
-| Installation and deployment | Ease of deployment and third-party dependencies | One-click deployment, no dependencies | No dependencies | Rely on Hadoop environment | Rely on Zookeeper | Rely on a large number of third-party components |
-| Data authority management | Multi-tenant permission configuration and data source permission control | Support | Not support | Not support | Not support | Support |
-| |Dynamic limit transmission | Support | Partially supported, unable to adjust dynamically | Partially supported, unable to adjust dynamically | Support | Support,with Kafka |
-| Data transmission| Unstructured data binary transmission | Support, fast channel | Not support | Not support | Not support,only transport record | Not support,need to be converted to a unified message format|
-| | Embed processing code | Support,dynamic compilation | Not support | Not support | Not support | Partial support |
-| | Transmission breakpoint recovery | Support(Not open source) | Not support | Not support | Support | Support |
-| High availability | Mutiple services, failure does not affect the use | Application high availability, transmission single point(Distributed architecture planning) | Single point service(Open source version) | Multipoint transmission | Application、transmission high availability | Application、transmission high availability |
-| System Management | Nodes、resources management | Support | Not support | Not support | Support | Support |
+### 1. Lightweight data source management
-## Overall Design
+- Based on Linkis DataSource, Exchangis abstracts all the necessary capabilities of the underlying data source as the Source and Sink of a synchronization job. A data source can be created with simple configuration.
-### Architecture
+- A dedicated data source version publishing management function supports rolling back to historical data source versions, and one-click publishing requires no reconfiguration of historical data sources.
+
+
+### 2. High-stability and fast-response data synchronization task execution
+
+- **Near-real-time task management**
+ Quickly captures information such as task logs and transmission rate, monitors and displays multi-task metrics including CPU usage, memory usage and data synchronization records, and supports closing tasks in real time.
+
+- **Highly concurrent task transmission**
+ Multiple tasks run concurrently and sub-tasks can be replicated, with the status of each task shown in real time; the multi-tenant execution function effectively prevents tasks from affecting each other during execution.
+
+- **Self-check of task status**
+ Monitor long-running tasks and abnormal tasks, stop tasks and release occupied resources in time.
+
+
+### 3. Integrated with DSS workflows, a one-stop big data development portal
+
+- Implements DSS AppConn's three-level specifications, including the first-level SSO specification, the second-level organizational structure specification and the third-level development process specification.
+
+- As the data exchange node of a DSS workflow, it is the gateway step of the whole workflow chain, providing a solid data foundation for the subsequent workflow nodes.
+
+## Overall Design
+
+### Architecture Design
+
+![Architecture Design](images/en_US/ch1/architecture.png)
-![Architecture](images/en_US/ch1/architecture.png)
## Documents
-[Quick Deploy](docs/zh_CN/ch1/exchangis_deploy_cn.md)
-[User Manual](docs/zh_CN/ch1/exchangis_user_manual_cn.md)
-## Communication
+[Quick Deploy](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/zh_CN/ch1/exchangis_deploy_cn.md)
+[User Manual](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/zh_CN/ch1/exchangis_user_manual_cn.md)
+
+## Communication and contribution
-If you desire immediate response, please kindly raise issues to us or scan the below QR code by WeChat and QQ to join our group:
+If you want to get the fastest response, please raise an issue to us, or scan the QR code below to join our group:
-![Communication](images/communication.png)
+![communication](images/en_US/ch1/communication.png)
## License
-Exchangis is under the Apache 2.0 License. See the [License](LICENSE) file for details.
\ No newline at end of file
+Exchangis is under the Apache 2.0 License. See the [License](LICENSE) file for details.
+
diff --git a/docs/en_US/ch1/exchangis_deploy.md b/assembly-package/config/application-eureka.yml
similarity index 100%
rename from docs/en_US/ch1/exchangis_deploy.md
rename to assembly-package/config/application-eureka.yml
diff --git a/assembly-package/config/application-exchangis.yml b/assembly-package/config/application-exchangis.yml
new file mode 100644
index 000000000..f7247d4aa
--- /dev/null
+++ b/assembly-package/config/application-exchangis.yml
@@ -0,0 +1,20 @@
+server:
+ port: 9321
+spring:
+ application:
+ name: exchangis-server
+eureka:
+ client:
+ serviceUrl:
+ defaultZone: http://127.0.0.1:8761/eureka/
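+ # NOTE: point defaultZone at your Eureka server (install.sh rewrites this value from EUREKA_URL in config.sh)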
+ instance:
+ metadata-map:
+ test: wedatasphere
+
+management:
+ endpoints:
+ web:
+ exposure:
+ include: refresh,info
+logging:
+ config: classpath:log4j2.xml
diff --git a/assembly-package/config/config.sh b/assembly-package/config/config.sh
new file mode 100644
index 000000000..e65884fa2
--- /dev/null
+++ b/assembly-package/config/config.sh
@@ -0,0 +1,4 @@
+LINKIS_GATEWAY_HOST=
+LINKIS_GATEWAY_PORT=
+EXCHANGIS_PORT=
+EUREKA_URL=
\ No newline at end of file
diff --git a/assembly-package/config/db.sh b/assembly-package/config/db.sh
new file mode 100644
index 000000000..b86d3361d
--- /dev/null
+++ b/assembly-package/config/db.sh
@@ -0,0 +1,9 @@
+# Database connection settings:
+# host, port, username, password and database name
+MYSQL_HOST=
+MYSQL_PORT=
+MYSQL_USERNAME=
+MYSQL_PASSWORD=
+DATABASE=
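+
+# Illustrative values only (placeholders, adjust to your environment):
+#   MYSQL_HOST=127.0.0.1
+#   MYSQL_PORT=3306
+#   MYSQL_USERNAME=exchangis
+#   MYSQL_PASSWORD=******
+#   DATABASE=exchangis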
+
+
diff --git a/assembly-package/config/exchangis-server.properties b/assembly-package/config/exchangis-server.properties
new file mode 100644
index 000000000..dc55c3f9b
--- /dev/null
+++ b/assembly-package/config/exchangis-server.properties
@@ -0,0 +1,69 @@
+#
+# Copyright 2019 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+#
+
+#wds.linkis.test.mode=true
+wds.linkis.test.mode=false
+
+wds.linkis.server.mybatis.datasource.url=jdbc:mysql://127.0.0.1:3306/exchangis?useSSL=false&characterEncoding=UTF-8&allowMultiQueries=true
+
+wds.linkis.server.mybatis.datasource.username=username
+
+wds.linkis.server.mybatis.datasource.password=password
+
+wds.linkis.gateway.ip=127.0.0.1
+wds.linkis.gateway.port=9001
+wds.linkis.gateway.url=http://127.0.0.1:9001/
+
+wds.linkis.log.clear=true
+
+wds.linkis.server.version=v1
+
+## datasource client
+wds.exchangis.datasource.client.serverurl=http://127.0.0.1:9001
+wds.exchangis.datasource.client.authtoken.key=EXCHANGIS-AUTH
+wds.exchangis.datasource.client.authtoken.value=EXCHANGIS-AUTH
+wds.exchangis.datasource.client.dws.version=v1
+
+# launcher client
+wds.exchangis.client.linkis.server-url=http://127.0.0.1:9001
+wds.exchangis.client.linkis.token.value=EXCHANGIS-AUTH
+
+wds.exchangis.datasource.extension.dir=exchangis-extds
+
+##restful
+wds.linkis.server.restful.scan.packages=com.webank.wedatasphere.exchangis.datasource.server.restful.api,\
+ com.webank.wedatasphere.exchangis.project.server.restful,\
+ com.webank.wedatasphere.exchangis.job.server.restful
+wds.linkis.server.mybatis.mapperLocations=classpath*:com/webank/wedatasphere/dss/framework/appconn/dao/impl/*.xml,classpath*:com/webank/wedatasphere/dss/workflow/dao/impl/*.xml,\
+classpath*:com/webank/wedatasphere/exchangis/job/server/mapper/impl/*.xml,\
+classpath*:com/webank/wedatasphere/exchangis/project/server/mapper/impl/*.xml
+
+wds.linkis.server.mybatis.BasePackage=com.webank.wedatasphere.exchangis.dao,\
+ com.webank.wedatasphere.exchangis.project.server.mapper,\
+ com.webank.wedatasphere.linkis.configuration.dao,\
+ com.webank.wedatasphere.dss.framework.appconn.dao,\
+ com.webank.wedatasphere.dss.workflow.dao,\
+ com.webank.wedatasphere.linkis.metadata.dao,\
+ com.webank.wedatasphere.exchangis.job.server.mapper,\
+ com.webank.wedatasphere.exchangis.job.server.dao
+
+wds.exchangis.job.task.scheduler.load-balancer.flexible.segments.min-occupy=0.25
+wds.exchangis.job.task.scheduler.load-balancer.flexible.segments.max-occupy=0.5
+#wds.exchangis.job.scheduler.group.max.running-jobs=4
+
+wds.linkis.session.ticket.key=bdp-user-ticket-id
+
diff --git a/assembly-package/config/exchangis.properties b/assembly-package/config/exchangis.properties
new file mode 100644
index 000000000..e69de29bb
diff --git a/assembly-package/config/log4j2.xml b/assembly-package/config/log4j2.xml
new file mode 100644
index 000000000..70da2f238
--- /dev/null
+++ b/assembly-package/config/log4j2.xml
@@ -0,0 +1,44 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/assembly-package/pom.xml b/assembly-package/pom.xml
new file mode 100644
index 000000000..15aa169d8
--- /dev/null
+++ b/assembly-package/pom.xml
@@ -0,0 +1,75 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>exchangis</artifactId>
+        <groupId>com.webank.wedatasphere.exchangis</groupId>
+        <version>1.0.0-RC1</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+    <artifactId>assembly-package</artifactId>
+    <packaging>pom</packaging>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-install-plugin</artifactId>
+                <configuration>
+                    <skip>true</skip>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-antrun-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>run</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-assembly-plugin</artifactId>
+                <version>3.1.0</version>
+                <executions>
+                    <execution>
+                        <id>dist</id>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>single</goal>
+                        </goals>
+                        <configuration>
+                            <skipAssembly>false</skipAssembly>
+                            <finalName>wedatasphere-exchangis-${exchangis.version}</finalName>
+                            <appendAssemblyId>false</appendAssemblyId>
+                            <attach>false</attach>
+                            <descriptors>
+                                <descriptor>src/main/assembly/assembly.xml</descriptor>
+                            </descriptors>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+</project>
diff --git a/assembly-package/sbin/common.sh b/assembly-package/sbin/common.sh
new file mode 100644
index 000000000..03d4e4666
--- /dev/null
+++ b/assembly-package/sbin/common.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+declare -A MODULE_MAIN_CLASS
+MODULE_MAIN_CLASS["exchangis-server"]="com.webank.wedatasphere.exchangis.server.boot.ExchangisServerApplication"
diff --git a/assembly-package/sbin/configure.sh b/assembly-package/sbin/configure.sh
new file mode 100644
index 000000000..e61c428da
--- /dev/null
+++ b/assembly-package/sbin/configure.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# configure modules
+
+configure_main(){
+ # placeholder: overall configuration entry, not implemented yet (empty bash functions are a syntax error)
+ :
+}
+
+configure_server(){
+ # placeholder: server-specific configuration, not implemented yet
+ :
+}
\ No newline at end of file
diff --git a/assembly-package/sbin/daemon.sh b/assembly-package/sbin/daemon.sh
new file mode 100644
index 000000000..40f64a78a
--- /dev/null
+++ b/assembly-package/sbin/daemon.sh
@@ -0,0 +1,68 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+if [[ "x"${EXCHANGIS_HOME} != "x" ]]; then
+ source ${EXCHANGIS_HOME}/sbin/launcher.sh
+ source ${EXCHANGIS_HOME}/sbin/common.sh
+else
+ source ./launcher.sh
+ source ./common.sh
+fi
+
+MODULE_NAME=""
+usage(){
+ echo "Usage is [start|stop|restart {service}]"
+}
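+
+# Illustrative invocations ("server" is expanded to "exchangis-server" via MODULE_DEFAULT_PREFIX):
+#   ./daemon.sh start server
+#   ./daemon.sh restart server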
+
+start(){
+ # call launcher
+ launcher_start $1 $2
+}
+
+stop(){
+ # call launcher
+ launcher_stop $1 $2
+}
+
+restart(){
+ launcher_stop $1 $2
+ if [[ $? -eq 0 ]]; then
+ sleep 2
+ launcher_start $1 $2
+ fi
+}
+
+COMMAND=$1
+case $COMMAND in
+ start|stop|restart)
+ if [[ ! -z $2 ]]; then
+ MAIN_CLASS=${MODULE_MAIN_CLASS[${MODULE_DEFAULT_PREFIX}$2]}
+ if [[ "x"${MAIN_CLASS} != "x" ]]; then
+ $COMMAND ${MODULE_DEFAULT_PREFIX}$2 ${MAIN_CLASS}
+ else
+ LOG ERROR "Cannot find the main class for [ ${MODULE_DEFAULT_PREFIX}$2 ]"
+ fi
+ else
+ usage
+ exit 1
+ fi
+ ;;
+ *)
+ usage
+ exit 1
+ ;;
+esac
\ No newline at end of file
diff --git a/assembly-package/sbin/env.properties b/assembly-package/sbin/env.properties
new file mode 100644
index 000000000..e69de29bb
diff --git a/assembly-package/sbin/install.sh b/assembly-package/sbin/install.sh
new file mode 100644
index 000000000..16f453870
--- /dev/null
+++ b/assembly-package/sbin/install.sh
@@ -0,0 +1,247 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+source ~/.bashrc
+shellDir=`dirname $0`
+workDir=`cd ${shellDir}/..;pwd`
+
+SOURCE_ROOT=${workDir}
+#load config
+source ${SOURCE_ROOT}/config/config.sh
+source ${SOURCE_ROOT}/config/db.sh
+DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
+SHELL_LOG="${DIR}/console.out"
+export SQL_SOURCE_PATH="${DIR}/../db/exchangis_ddl.sql"
+PACKAGE_DIR="${DIR}/../packages"
+# Home Path
+EXCHANGIS_HOME_PATH="${DIR}/../"
+
+CONF_FILE_PATH="bin/configure.sh"
+FORCE_INSTALL=false
+SKIP_PACKAGE=false
+USER=`whoami`
+SUDO_USER=false
+
+CONF_PATH=${DIR}/../config
+
+usage(){
+ printf "\033[1m Install project, run directly\n\033[0m"
+}
+
+function LOG(){
+ currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
+ echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG} # tee -a 输出是追加到文件里面
+}
+
+abs_path(){
+ SOURCE="${BASH_SOURCE[0]}"
+ while [ -h "${SOURCE}" ]; do
+ DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
+ SOURCE="$(readlink "${SOURCE}")"
+ [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
+ done
+ echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
+}
+
+BIN=`abs_path`
+
+
+is_sudo_user(){
+ # "sudo -v" refreshes cached sudo credentials: it prompts for a password on first use or after
+ # the (default 5 minute) timeout. ">/dev/null 2>&1" discards all output, so only the exit code matters.
+ sudo -v >/dev/null 2>&1
+}
+
+uncompress_packages(){
+ LOG INFO "\033[1m package dir is: [${PACKAGE_DIR}]\033[0m"
+ local list=`ls ${PACKAGE_DIR}`
+ LOG INFO "\033[1m package list is: [${list}]\033[0m"
+ for pack in ${list}
+ do
+ local uncompress=true
+ if [ ${#PACKAGE_NAMES[@]} -gt 0 ]; then
+ uncompress=false
+ for server in ${PACKAGE_NAMES[@]}
+ do
+ if [ ${server} == ${pack%%.tar.gz*} ] || [ ${server} == ${pack%%.zip*} ]; then
+ uncompress=true
+ break
+ fi
+ done
+ fi
+ if [ ${uncompress} == true ]; then
+ if [[ ${pack} =~ tar\.gz$ ]]; then
+ local do_uncompress=0
+ #if [ ${FORCE_INSTALL} == false ]; then
+ # interact_echo "Do you want to decompress this package: [${pack}]?"
+ # do_uncompress=$?
+ #fi
+ if [ ${do_uncompress} == 0 ]; then
+ LOG INFO "\033[1m Uncompress package: [${pack}] to modules directory\033[0m"
+ tar --skip-old-files -zxf ${PACKAGE_DIR}/${pack} -C ../
+ fi
+ elif [[ ${pack} =~ zip$ ]]; then
+ local do_uncompress=0
+ #if [ ${FORCE_INSTALL} == false ]; then
+ # interact_echo "Do you want to decompress this package: [${pack}]?"
+ # do_uncompress=$?
+ #fi
+ if [ ${do_uncompress} == 0 ]; then
+ LOG INFO "\033[1m Uncompress package: [${pack}] to modules directory\033[0m"
+ unzip -nq ${PACKAGE_DIR}/${pack} -d ../ # -n: never overwrite existing files when extracting
+ fi
+ fi
+ # skip other packages
+ fi
+ done
+}
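+
+# Note: PACKAGE_NAMES acts as a whitelist above; when it is non-empty, only archives whose base
+# name matches an entry are extracted, otherwise every .tar.gz/.zip under packages/ is unpacked.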
+
+interact_echo(){
+ while [ 1 ]; do
+ read -p "$1 (Y/N)" yn
+ if [ "${yn}x" == "Yx" ] || [ "${yn}x" == "yx" ]; then
+ return 0
+ elif [ "${yn}x" == "Nx" ] || [ "${yn}x" == "nx" ]; then
+ return 1
+ else
+ echo "Unknown choise: [$yn], please choose again."
+ fi
+ done
+}
+
+init_database(){
+BOOTSTRAP_PROP_FILE="${CONF_PATH}/exchangis-server.properties"
+# Start to initialize the database
+if [ "x${SQL_SOURCE_PATH}" != "x" ] && [ -f "${SQL_SOURCE_PATH}" ]; then
+ mysql --version >/dev/null 2>&1
+ interact_echo "Do you want to initialize the database with sql?"
+ if [ $? == 0 ]; then
+ LOG INFO "\033[1m Found the mysql command, so begin to initialize the database\033[0m"
+ #interact_echo "Do you want to initialize the database with sql: [${SQL_SOURCE_PATH}]?"
+ #if [ $? == 0 ]; then
+ DATASOURCE_URL="jdbc:mysql:\/\/${MYSQL_HOST}:${MYSQL_PORT}\/${DATABASE}\?useSSL=false\&characterEncoding=UTF-8\&allowMultiQueries=true"
+ mysql -h ${MYSQL_HOST} -P ${MYSQL_PORT} -u ${MYSQL_USERNAME} -p${MYSQL_PASSWORD} --default-character-set=utf8 -e \
+ "CREATE DATABASE IF NOT EXISTS ${DATABASE}; USE ${DATABASE}; source ${SQL_SOURCE_PATH};"
+ #sed -ri "s![#]?(DB_HOST=)\S*!\1${HOST}!g" ${BOOTSTRAP_PROP_FILE}
+ #sed -ri "s![#]?(DB_PORT=)\S*!\1${PORT}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.linkis.server.mybatis.datasource.username=)\S*!\1${MYSQL_USERNAME}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.linkis.server.mybatis.datasource.password=)\S*!\1${MYSQL_PASSWORD}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.linkis.server.mybatis.datasource.url=)\S*!\1${DATASOURCE_URL}!g" ${BOOTSTRAP_PROP_FILE}
+ #fi
+ fi
+fi
+}
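+
+# For reference, the sed calls above rewrite existing keys in place, e.g. (hypothetical value):
+#   wds.linkis.server.mybatis.datasource.username=username  ->  wds.linkis.server.mybatis.datasource.username=root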
+
+init_properties(){
+BOOTSTRAP_PROP_FILE="${CONF_PATH}/exchangis-server.properties"
+APPLICATION_YML="${CONF_PATH}/application-exchangis.yml"
+# Start to initialize the properties
+ #interact_echo "Do you want to initialize exchangis-server.properties?"
+ #if [ $? == 0 ]; then
+
+ LINKIS_GATEWAY_URL="http:\/\/${LINKIS_GATEWAY_HOST}:${LINKIS_GATEWAY_PORT}\/"
+
+ if [ "x${LINKIS_SERVER_URL}" == "x" ]; then
+ LINKIS_SERVER_URL="http://127.0.0.1:3306"
+ fi
+ if [ "x${LINKIS_SERVER_URL}" == "x" ]; then
+ LINKIS_SERVER_URL="http://127.0.0.1:3306"
+ fi
+
+ sed -ri "s![#]?(wds.linkis.gateway.ip=)\S*!\1${LINKIS_GATEWAY_HOST}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.linkis.gateway.port=)\S*!\1${LINKIS_GATEWAY_PORT}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.linkis.gateway.url=)\S*!\1${LINKIS_GATEWAY_URL}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.exchangis.datasource.client.serverurl=)\S*!\1${LINKIS_GATEWAY_URL}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.exchangis.client.linkis.server-url=)\S*!\1${LINKIS_GATEWAY_URL}!g" ${BOOTSTRAP_PROP_FILE}
+ #sed -ri "s![#]?(wds.exchangis.datasource.client.authtoken.key=)\S*!\1${LINKIS_TOKEN}!g" ${BOOTSTRAP_PROP_FILE}
+ #sed -ri "s![#]?(wds.exchangis.datasource.client.authtoken.value=)\S*!\1${LINKIS_TOKEN}!g" ${BOOTSTRAP_PROP_FILE}
+ #sed -ri "s![#]?(wds.exchangis.client.linkis.token.value=)\S*!\1${LINKIS_TOKEN}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.linkis.gateway.port=)\S*!\1${LINKIS_GATEWAY_PORT}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(port: )\S*!\1${EXCHANGIS_PORT}!g" ${APPLICATION_YML}
+ sed -ri "s![#]?(defaultZone: )\S*!\1${EUREKA_URL}!g" ${APPLICATION_YML}
+ #fi
+}
+
+install_modules(){
+ LOG INFO "\033[1m ####### Start To Install project ######\033[0m"
+ echo ""
+ if [ ${FORCE_INSTALL} == false ]; then
+ LOG INFO "\033[1m Install project ......\033[0m"
+ init_database
+ init_properties
+ else
+ LOG INFO "\033[1m Install project ......\033[0m"
+ init_database
+ fi
+ LOG INFO "\033[1m ####### Finish To Install Project ######\033[0m"
+}
+
+
+while [ 1 ]; do
+ case ${!OPTIND} in
+ -h|--help)
+ usage
+ exit 0
+ ;;
+ "")
+ break
+ ;;
+ *)
+ echo "Argument error! " 1>&2
+ exit 1
+ ;;
+ esac
+done
+
+is_sudo_user
+if [ $? == 0 ]; then
+ SUDO_USER=true
+fi
+
+MODULE_LIST_RESOLVED=()
+c=0
+RESOLVED_DIR=${PACKAGE_DIR}
+
+server="exchangis-server"
+LOG INFO "\033[1m ####### server is [${server}] ######\033[0m"
+server_list=`ls ${RESOLVED_DIR} | grep -E "^(${server}|${server}_[0-9]+\\.[0-9]+\\.[0-9]+)" | grep -E "(\\.tar\\.gz|\\.zip|)$"`
+LOG INFO "\033[1m ####### server_list is [${server_list}] ######\033[0m"
+for _server in ${server_list}
+ do
+ # Strip the archive suffix to get the bare module name
+ _server=${_server%%.tar.gz*}
+ _server=${_server%%.zip*}
+ MODULE_LIST_RESOLVED[$c]=${_server}
+ c=$(($c + 1))
+ done
+if [ ${SKIP_PACKAGE} == true ]; then
+ MODULE_LIST=("${MODULE_LIST_RESOLVED[@]}")
+else
+ PACKAGE_NAMES=("${MODULE_LIST_RESOLVED[@]}")
+fi
+
+
+LOG INFO "\033[1m ####### Start To Uncompress Packages ######\033[0m"
+LOG INFO "Uncompressing...."
+uncompress_packages
+LOG INFO "\033[1m ####### Finish To Umcompress Packages ######\033[0m"
+
+ install_modules
+
+
+exit 0
+
diff --git a/assembly-package/sbin/launcher.sh b/assembly-package/sbin/launcher.sh
new file mode 100644
index 000000000..ba9456329
--- /dev/null
+++ b/assembly-package/sbin/launcher.sh
@@ -0,0 +1,253 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Launcher for modules, provided start/stop functions
+
+DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
+ENV_FILE="${DIR}/env.properties"
+SHELL_LOG="${DIR}/command.log"
+USER_DIR="${DIR}/../"
+EXCHANGIS_CONF_PATH="${DIR}/../config"
+EXCHANGIS_LIB_PATH="${DIR}/../lib"
+EXCHANGIS_LOG_PATH="${DIR}/../logs"
+EXCHANGIS_PID_PATH="${DIR}/../runtime"
+MODULE_DEFAULT_PREFIX="exchangis-"
+# Default
+MAIN_CLASS=""
+DEBUG_MODE=False
+DEBUG_PORT="7006"
+SPRING_PROFILE="exchangis"
+SLEEP_TIMEREVAL_S=2
+
+function LOG(){
+ currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
+ echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
+}
+
+abs_path(){
+ SOURCE="${BASH_SOURCE[0]}"
+ while [ -h "${SOURCE}" ]; do
+ DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
+ SOURCE="$(readlink "${SOURCE}")"
+ [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
+ done
+ echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
+}
+
+verify_java_env(){
+ if [[ "x${JAVA_HOME}" != "x" ]]; then
+ ${JAVA_HOME}/bin/java -version >/dev/null 2>&1
+ else
+ java -version >/dev/null 2>&1
+ fi
+ if [[ $? -ne 0 ]]; then
+ cat 1>&2 <<EOF
+Error: Java environment is not ready, please check JAVA_HOME or the java command.
+EOF
+ return 1
+ fi
+ return 0
+}
+
+# Check whether the module is running. Input: $1:module_name, $2:main class
+status_class(){
+ local p=""
+ if [[ "x"${EXCHANGIS_PID_PATH} != "x" ]]; then
+ local pid_file_path=${EXCHANGIS_PID_PATH}/$1.pid
+ if [ -f ${pid_file_path} ]; then
+ local pid_in_file=`cat ${pid_file_path} 2>/dev/null`
+ if [ "x"${pid_in_file} != "x" ]; then
+ p=`${JPS} -q | grep ${pid_in_file} | awk '{print $1}'`
+ fi
+ fi
+ else
+ p=`${JPS} -l | grep "$2" | awk '{print $1}'`
+ fi
+ if [ -n "$p" ]; then
+ # echo "$1 ($2) is still running with pid $p"
+ return 0
+ else
+ # echo "$1 ($2) does not appear in the java process table"
+ return 1
+ fi
+}
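+
+# How the lookup above works (a sketch): with a pid file, `jps -q` prints bare pids and we grep for
+# the recorded pid; without one, `jps -l` prints "<pid> <main-class>" and we grep by the main class.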
+
+wait_for_startup(){
+ local now_s=`date '+%s'`
+ local stop_s=$((${now_s} + $1))
+ while [ ${now_s} -le ${stop_s} ];do
+ status_class $2 $3
+ if [ $? -eq 0 ]; then
+ return 0
+ fi
+ sleep ${SLEEP_TIMEREVAL_S}
+ now_s=`date '+%s'` #计算当前时间时间戳
+ done
+ return 1
+}
+
+wait_for_stop(){
+ local now_s=`date '+%s'`
+ local stop_s=$((${now_s} + $1))
+ while [ ${now_s} -le ${stop_s} ];do
+ status_class $2 $3
+ if [ $? -eq 1 ]; then
+ return 0
+ fi
+ sleep ${SLEEP_TIMEREVAL_S}
+ now_s=`date '+%s'`
+ done
+ return 1
+}
+
+# Input: $1:module_name, $2:main class
+launcher_start(){
+ load_env_definitions ${ENV_FILE}
+ LOG INFO "Launcher: launch to start server [ $1 ]"
+ status_class $1 $2
+ if [[ $? -eq 0 ]]; then
+ LOG INFO "Launcher: [ $1 ] has been started in process"
+ return 0
+ fi
+ construct_java_command $1 $2
+ # Execute
+ LOG INFO ${EXEC_JAVA}
+ nohup ${EXEC_JAVA} >/dev/null 2>&1 &
+ LOG INFO "Launcher: waiting [ $1 ] to start complete ..."
+ wait_for_startup 20 $1 $2
+ if [[ $? -eq 0 ]]; then
+ LOG INFO "Launcher: [ $1 ] start success"
+ LOG INFO "Please check exchangis server in EUREKA_ADDRESS: ${EUREKA_URL} "
+ else
+ LOG ERROR "Launcher: [ $1 ] start fail over 20 seconds, please retry it"
+ fi
+}
+
+# Input: $1:module_name, $2:main class
+launcher_stop(){
+ load_env_definitions ${ENV_FILE}
+ LOG INFO "Launcher: stop the server [ $1 ]"
+ local p=""
+ local pid_file_path=${EXCHANGIS_PID_PATH}/$1.pid
+ if [ "x"${pid_file_path} != "x" ]; then
+ if [ -f ${pid_file_path} ]; then
+ local pid_in_file=`cat ${pid_file_path} 2>/dev/null`
+ if [ "x"${pid_in_file} != "x" ]; then
+ p=`${JPS} -q | grep ${pid_in_file} | awk '{print $1}'`
+ fi
+ fi
+ elif [[ "x"$2 != "x" ]]; then
+ p=`${JPS} -l | grep "$2" | awk '{print $1}'`
+ fi
+ if [[ -z ${p} ]]; then
+ LOG INFO "Launcher: [ $1 ] didn't start successfully, not found in the java process table"
+ return 0
+ fi
+ case "`uname`" in
+ CYGWIN*) taskkill /PID "${p}" ;;
+ *) kill -SIGTERM "${p}" ;;
+ esac
+ LOG INFO "Launcher: waiting [ $1 ] to stop complete ..."
+ wait_for_stop 20
+ if [[ $? -eq 0 ]]; then
+ LOG INFO "Launcher: [ $1 ] stop success"
+ else
+ LOG ERROR "Launcher: [ $1 ] stop exceeded over 20s " >&2
+ return 1
+ fi
+}
\ No newline at end of file
diff --git a/assembly-package/sbin/start-server.sh b/assembly-package/sbin/start-server.sh
new file mode 100644
index 000000000..5889993c8
--- /dev/null
+++ b/assembly-package/sbin/start-server.sh
@@ -0,0 +1,54 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Start exchangis-server module
+MODULE_NAME="exchangis-server"
+
+function LOG(){
+ currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
+ echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
+}
+
+abs_path(){
+ SOURCE="${BASH_SOURCE[0]}"
+ while [ -h "${SOURCE}" ]; do
+ DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
+ SOURCE="$(readlink "${SOURCE}")"
+ [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
+ done
+ echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
+}
+
+BIN=`abs_path`
+SHELL_LOG="${BIN}/console.out"
+
+interact_echo(){
+ while [ 1 ]; do
+ read -p "$1 (Y/N)" yn
+ if [ "${yn}x" == "Yx" ] || [ "${yn}x" == "yx" ]; then
+ return 0
+ elif [ "${yn}x" == "Nx" ] || [ "${yn}x" == "nx" ]; then
+ return 1
+ else
+ echo "Unknown choise: [$yn], please choose again."
+ fi
+ done
+}
+
+start_main(){
+ # TODO: start the ${MODULE_NAME} module here, e.g. by delegating to daemon.sh (not implemented yet)
+ :
+}
+
+start_main
+exit $?
diff --git a/assembly-package/src/main/assembly/assembly.xml b/assembly-package/src/main/assembly/assembly.xml
new file mode 100644
index 000000000..e873afe23
--- /dev/null
+++ b/assembly-package/src/main/assembly/assembly.xml
@@ -0,0 +1,77 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<assembly>
+    <id>exchangis</id>
+    <formats>
+        <format>tar.gz</format>
+    </formats>
+    <includeBaseDirectory>false</includeBaseDirectory>
+    <fileSets>
+        <fileSet>
+            <directory>${basedir}/sbin</directory>
+            <includes>
+                <include>*</include>
+            </includes>
+            <fileMode>0777</fileMode>
+            <outputDirectory>sbin</outputDirectory>
+            <lineEnding>unix</lineEnding>
+        </fileSet>
+        <fileSet>
+            <directory>${basedir}/bin</directory>
+            <includes>
+                <include>*</include>
+            </includes>
+            <fileMode>0777</fileMode>
+            <outputDirectory>bin</outputDirectory>
+            <lineEnding>unix</lineEnding>
+        </fileSet>
+        <fileSet>
+            <directory>${basedir}/config</directory>
+            <includes>
+                <include>*</include>
+            </includes>
+            <fileMode>0777</fileMode>
+            <outputDirectory>config</outputDirectory>
+            <lineEnding>unix</lineEnding>
+        </fileSet>
+        <fileSet>
+            <directory>${basedir}/../db</directory>
+            <includes>
+                <include>*</include>
+            </includes>
+            <fileMode>0777</fileMode>
+            <outputDirectory>db</outputDirectory>
+            <lineEnding>unix</lineEnding>
+        </fileSet>
+        <fileSet>
+            <directory>${basedir}/../exchangis-server/target/packages</directory>
+            <includes>
+                <include>*.tar.gz</include>
+                <include>*.zip</include>
+            </includes>
+            <fileMode>0755</fileMode>
+            <outputDirectory>packages</outputDirectory>
+        </fileSet>
+    </fileSets>
+</assembly>
\ No newline at end of file
diff --git a/assembly/package.xml b/assembly/package.xml
deleted file mode 100644
index cef49a33c..000000000
--- a/assembly/package.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-<assembly>
-    <id>main</id>
-    <formats>
-        <format>tar.gz</format>
-    </formats>
-    <includeBaseDirectory>true</includeBaseDirectory>
-    <fileSets>
-        <fileSet>
-            <directory>../packages</directory>
-            <includes>
-                <include>exchangis*</include>
-            </includes>
-            <outputDirectory>packages</outputDirectory>
-        </fileSet>
-        <fileSet>
-            <lineEnding>unix</lineEnding>
-            <directory>../bin</directory>
-            <outputDirectory>bin</outputDirectory>
-            <fileMode>0755</fileMode>
-        </fileSet>
-        <fileSet>
-            <directory>../docs</directory>
-            <outputDirectory>docs</outputDirectory>
-        </fileSet>
-        <fileSet>
-            <directory>../images</directory>
-            <outputDirectory>images</outputDirectory>
-        </fileSet>
-        <fileSet>
-            <directory>../</directory>
-            <lineEnding>unix</lineEnding>
-            <includes>
-                <include>README.md</include>
-                <include>LICENSE</include>
-            </includes>
-            <outputDirectory>/</outputDirectory>
-        </fileSet>
-    </fileSets>
-</assembly>
\ No newline at end of file
diff --git a/assembly/pom.xml b/assembly/pom.xml
deleted file mode 100644
index 9a8450e67..000000000
--- a/assembly/pom.xml
+++ /dev/null
@@ -1,60 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-
-    <parent>
-        <groupId>com.webank.wedatasphere.exchangis</groupId>
-        <artifactId>exchangis</artifactId>
-        <version>0.5.0.RELEASE</version>
-        <relativePath>../pom.xml</relativePath>
-    </parent>
-    <artifactId>exchangis-assembly</artifactId>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-assembly-plugin</artifactId>
-                <version>2.2.1</version>
-                <executions>
-                    <execution>
-                        <id>assemble</id>
-                        <goals>
-                            <goal>single</goal>
-                        </goals>
-                        <phase>install</phase>
-                    </execution>
-                </executions>
-                <configuration>
-                    <appendAssemblyId>false</appendAssemblyId>
-                    <attach>false</attach>
-                    <descriptors>
-                        <descriptor>${basedir}/package.xml</descriptor>
-                    </descriptors>
-                    <finalName>wedatasphere-${project.parent.artifactId}-${project.parent.version}</finalName>
-                    <outputDirectory>../build</outputDirectory>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-</project>
\ No newline at end of file
diff --git a/bin/exchangis-init.sql b/bin/exchangis-init.sql
deleted file mode 100644
index 0829c029a..000000000
--- a/bin/exchangis-init.sql
+++ /dev/null
@@ -1,510 +0,0 @@
-
-CREATE TABLE IF NOT EXISTS `exchangis_data_source` (
- `id` bigint(13) NOT NULL AUTO_INCREMENT,
- `source_name` varchar(100) NOT NULL COMMENT 'Data Source Name',
- `source_type` varchar(50) DEFAULT NULL COMMENT 'Data Source Type',
- `source_desc` varchar(200) DEFAULT NULL,
- `owner` varchar(50) DEFAULT 'Exchangis' COMMENT 'Data Source Owner',
- `create_user` varchar(50) DEFAULT NULL COMMENT 'Create User',
- `parameter` text COMMENT 'Parameters',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- `modify_user` varchar(50) DEFAULT NULL COMMENT 'Modify User',
- `modify_time` datetime DEFAULT NULL COMMENT 'Modify Time',
- `model_id` int(11) DEFAULT NULL,
- `auth_entity` varchar(200) DEFAULT NULL COMMENT 'Auth Entity',
- `auth_creden` varchar(200) DEFAULT NULL COMMENT 'Auth Credential',
- `project_id` bigint(13) DEFAULT '0',
- PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_data_source_model` (
- `id` bigint(13) NOT NULL AUTO_INCREMENT,
- `model_name` varchar(100) NOT NULL COMMENT 'Model Name',
- `source_type` varchar(50) DEFAULT NULL COMMENT 'Data Source Type',
- `model_desc` varchar(200) DEFAULT NULL COMMENT 'Model Description',
- `create_owner` varchar(50) DEFAULT '' COMMENT 'Create Owner',
- `create_user` varchar(50) DEFAULT NULL COMMENT 'Create User',
- `parameter` text NOT NULL COMMENT 'Parameters',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- `modify_user` varchar(50) DEFAULT NULL COMMENT 'Modify User',
- `modify_time` datetime DEFAULT NULL COMMENT 'Modify Time',
- PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_data_source_owner` (
- `id` int(11) NOT NULL AUTO_INCREMENT,
- `owner_name` varchar(50) NOT NULL COMMENT 'Owner Name',
- `owner_desc` varchar(200) DEFAULT NULL COMMENT 'Owner Description',
- `create_user` varchar(20) DEFAULT NULL COMMENT 'Create User',
- `create_time` datetime DEFAULT NULL COMMENT 'Create Time',
- `modify_user` varchar(20) DEFAULT NULL COMMENT 'Modify User',
- `modify_time` datetime DEFAULT NULL COMMENT 'Modify Time',
- PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_data_source_permissions` (
- `data_source_id` bigint(13) NOT NULL,
- `access_readable` tinyint(1) DEFAULT '0',
- `access_writable` tinyint(1) DEFAULT '0',
- `access_editable` tinyint(1) DEFAULT '0',
- `access_executable` tinyint(1) DEFAULT '0',
- `modify_time` datetime DEFAULT NULL COMMENT 'Modify Time',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Create Time',
- PRIMARY KEY (`data_source_id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_executor_node` (
- `id` int(11) NOT NULL AUTO_INCREMENT,
- `address` varchar(20) NOT NULL COMMENT 'Address',
- `heartbeat_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
- `status` int(11) NOT NULL DEFAULT '0' COMMENT 'Status:0-Up,1-Down',
- `mem_rate` float DEFAULT '0' COMMENT 'Memory Usage',
- `cpu_rate` float DEFAULT '0' COMMENT 'CPU Usage',
- `default_node` tinyint(2) DEFAULT NULL COMMENT 'Default Node',
- PRIMARY KEY (`id`),
- KEY `addres_index` (`address`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_executor_node_tab` (
- `exec_node_id` int(11) NOT NULL COMMENT 'Excutor Node ID',
- `tab_id` int(11) NOT NULL COMMENT 'Tab ID',
- `tab_name` varchar(200) NOT NULL COMMENT 'Tab Name',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Create Time',
- PRIMARY KEY (`exec_node_id`,`tab_id`),
- KEY `tab_name` (`tab_name`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_executor_node_user` (
- `exec_node_id` int(11) NOT NULL COMMENT 'Excutor Node ID',
- `exec_user` varchar(50) NOT NULL COMMENT 'Executive User',
- `user_type` varchar(50) DEFAULT '' COMMENT 'User Type',
- `relation_state` int(2) NOT NULL DEFAULT '0' COMMENT 'Relation State : 0-UnRelated, 1-Relate Success, 2-Relate Fail',
- `uid` int(4) DEFAULT NULL COMMENT 'Machine User ID',
- `gid` int(4) DEFAULT NULL COMMENT 'Machine Group ID',
- `mark_del` tinyint(4) DEFAULT '0' COMMENT 'Mark Delete',
- `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Update Time',
- PRIMARY KEY (`exec_node_id`,`exec_user`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_executor_user` (
- `id` int(4) NOT NULL AUTO_INCREMENT COMMENT 'ID',
- `exec_user` varchar(50) NOT NULL COMMENT 'Executive User',
- `description` varchar(200) DEFAULT '' COMMENT 'Description',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY (`id`),
- UNIQUE KEY `exec_user` (`exec_user`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_group` (
- `id` bigint(13) NOT NULL AUTO_INCREMENT,
- `group_name` varchar(50) NOT NULL COMMENT 'Group Name',
- `group_desc` varchar(100) DEFAULT NULL,
- `ref_project_id` bigint(13) DEFAULT '0',
- `create_user` varchar(50) DEFAULT NULL,
- `create_time` datetime NOT NULL,
- `modify_time` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
- PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_exec` (
- `job_id` bigint(20) NOT NULL COMMENT 'Job ID',
- `exec_id` int(11) NOT NULL COMMENT 'Executor Node ID',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY (`job_id`,`exec_id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_info` (
- `id` bigint(13) NOT NULL AUTO_INCREMENT,
- `job_name` varchar(100) NOT NULL COMMENT 'Job Name',
- `job_cron` varchar(32) DEFAULT NULL COMMENT 'Corn Expression',
- `job_desc` varchar(255) DEFAULT NULL COMMENT 'Desc',
- `job_type` int(11) DEFAULT '1' COMMENT 'Job Type',
- `create_user` varchar(50) DEFAULT NULL COMMENT 'Create User',
- `alarm_user` varchar(255) DEFAULT NULL COMMENT 'Alarm User',
- `alarm_level` int(11) DEFAULT '5' COMMENT 'Alarm Level',
- `fail_retery_count` int(11) DEFAULT '0',
- `project_id` bigint(13) DEFAULT NULL,
- `data_src_id` bigint(13) DEFAULT NULL,
- `data_dst_id` bigint(13) DEFAULT NULL,
- `data_src_type` varchar(50) DEFAULT NULL,
- `data_dst_type` varchar(50) DEFAULT NULL,
- `data_src_owner` varchar(50) DEFAULT NULL ,
- `data_dest_owner` varchar(50) DEFAULT NULL,
- `job_config` text NOT NULL COMMENT 'Job Conf',
- `timeout` int(11) DEFAULT '0',
- `exec_user` varchar(50) DEFAULT '',
- `sync` varchar(45) DEFAULT NULL,
- `modify_user` varchar(50) DEFAULT NULL COMMENT 'Modify User',
- `create_time` datetime DEFAULT NULL,
- `last_trigger_time` datetime DEFAULT NULL,
- `engine_type` varchar(45) DEFAULT '',
- `disposable` tinyint(2) DEFAULT '0',
- `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
- PRIMARY KEY (`id`),
- KEY `index_user` (`create_user`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_info_params` (
- `job_id` bigint(13) NOT NULL,
- `param_name` varchar(100) NOT NULL,
- `param_val` varchar(200) DEFAULT '',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY (`job_id`,`param_name`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_proc` (
- `job_id` bigint(13) NOT NULL,
- `proc_src_code` text,
- `language` varchar(20) NOT NULL DEFAULT 'java',
- PRIMARY KEY (`job_id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_report` (
- `id` bigint(13) NOT NULL,
- `job_id` bigint(13) DEFAULT NULL,
- `total_costs` double DEFAULT NULL COMMENT 'Cost Time',
- `byte_speed_per_second` bigint(20) DEFAULT NULL ,
- `record_speed_per_second` bigint(20) DEFAULT NULL ,
- `total_read_records` bigint(20) DEFAULT NULL ,
- `total_read_bytes` bigint(20) DEFAULT NULL,
- `total_error_records` bigint(20) DEFAULT NULL ,
- `transformer_total_records` bigint(20) DEFAULT NULL,
- `transformer_failed_records` bigint(20) DEFAULT NULL,
- `transformer_filter_records` bigint(20) DEFAULT NULL,
- `create_time` datetime DEFAULT NULL,
- PRIMARY KEY (`id`),
- KEY `job_id_index` (`job_id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_task` (
- `id` bigint(13) NOT NULL,
- `executer_address` varchar(100) DEFAULT NULL COMMENT 'Executor Address',
- `job_id` bigint(13) DEFAULT NULL,
- `job_name` varchar(100) DEFAULT NULL,
- `job_create_user` varchar(50) DEFAULT NULL,
- `job_alarm_user` varchar(255) DEFAULT NULL,
- `trigger_type` varchar(20) DEFAULT NULL COMMENT 'Trigger Type',
- `trigger_time` datetime NOT NULL COMMENT 'Trigger Time',
- `trigger_status` varchar(20) DEFAULT NULL COMMENT 'Trigger Status',
- `trigger_msg` varchar(255) DEFAULT NULL COMMENT 'Trigger Log',
- `operater` varchar(50) DEFAULT NULL COMMENT 'Operator',
- `status` varchar(50) DEFAULT NULL COMMENT 'Status ,such as: kill,sucess,failed',
- `run_times` int(11) DEFAULT NULL COMMENT 'Run Times',
- `execute_msg` varchar(1000) DEFAULT NULL COMMENT 'Execute Msg',
- `complete_time` datetime DEFAULT NULL COMMENT 'Complete Time',
- `disposable` tinyint(2) DEFAULT '0',
- `version` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
- `exec_user` varchar(50) DEFAULT '',
- `project_id` bigint(13) DEFAULT '0' COMMENT 'Project Related',
- `state_speed` bigint(20) DEFAULT NULL,
- `speed_limit_mb` int(12) DEFAULT '0',
- PRIMARY KEY (`id`),
- KEY `job_id` (`job_id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_task_params` (
- `task_id` bigint(11) NOT NULL,
- `param_name` varchar(100) NOT NULL,
- `param_val` varchar(100) DEFAULT '',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY (`task_id`,`param_name`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_project` (
- `id` bigint(13) NOT NULL AUTO_INCREMENT,
- `project_name` varchar(100) NOT NULL COMMENT 'Project Name',
- `project_desc` varchar(200) DEFAULT NULL COMMENT 'Desc',
- `parent_id` bigint(13) DEFAULT NULL,
- `create_user` varchar(20) DEFAULT NULL COMMENT 'Create User',
- `create_time` datetime DEFAULT NULL COMMENT 'Create Time',
- `modify_user` varchar(20) DEFAULT NULL COMMENT 'Modify User',
- `modify_time` datetime DEFAULT NULL COMMENT 'Modify Time',
- UNIQUE KEY `project_name_create_user` (`project_name`,`create_user`),
- PRIMARY KEY (`id`),
- KEY `index_user` (`create_user`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_queue_elements` (
- `sid` bigint(11) NOT NULL COMMENT 'Seq ID',
- `qid` int(11) NOT NULL COMMENT 'Queue ID',
- `status` int(11) NOT NULL COMMENT 'Element Status',
- `enq_time` datetime DEFAULT NULL COMMENT 'Enque Time',
- `poll_time` datetime DEFAULT NULL COMMENT 'Poll Time',
- `enq_count` int(1) DEFAULT '1',
- `delay_time` datetime DEFAULT NULL,
- `delay_count` int(1) DEFAULT '0',
- `version` int(2) DEFAULT '0' COMMENT 'Version',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY (`sid`),
- KEY `enq_time` (`enq_time`),
- KEY `poll_time` (`poll_time`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_queue_info` (
- `id` int(11) NOT NULL AUTO_INCREMENT,
- `qname` varchar(50) NOT NULL COMMENT 'Queue Name',
- `description` varchar(200) DEFAULT '',
- `priority` int(11) NOT NULL DEFAULT '-1',
- `is_lock` tinyint(1) DEFAULT '0',
- `lock_host` varchar(50) DEFAULT NULL,
- `lock_time` datetime DEFAULT NULL,
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_tab` (
- `id` int(11) NOT NULL AUTO_INCREMENT,
- `name` varchar(200) NOT NULL COMMENT 'Tab Name',
- `description` varchar(200) DEFAULT '' COMMENT 'Desc',
- `type` int(4) DEFAULT NULL COMMENT 'Tab Type',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Create Time',
- PRIMARY KEY (`id`),
- UNIQUE KEY `name` (`name`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_user_exec` (
- `app_user` varchar(50) NOT NULL COMMENT 'APP User',
- `exec_user` varchar(50) NOT NULL COMMENT 'Executive User',
- PRIMARY KEY (`app_user`,`exec_user`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_user_exec_node` (
- `app_user` varchar(50) NOT NULL COMMENT 'APP User',
- `exec_node_id` int(11) NOT NULL COMMENT 'Executor Node ID',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY (`app_user`,`exec_node_id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_user_group` (
- `user_name` varchar(50) NOT NULL COMMENT 'User Name',
- `group_id` int(11) NOT NULL COMMENT 'Group ID',
- `join_role` int(4) DEFAULT '0' COMMENT 'Join Role',
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Create Time',
- PRIMARY KEY (`user_name`,`group_id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_user_info` (
- `id` bigint(11) NOT NULL AUTO_INCREMENT,
- `username` varchar(50) NOT NULL,
- `password` varchar(200) DEFAULT '',
- `user_type` int(11) DEFAULT '0',
- `org_code` varchar(50) DEFAULT '',
- `dept_code` varchar(50) DEFAULT '',
- `update_time` datetime NOT NULL,
- `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY (`id`),
- UNIQUE KEY `username` (`username`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_func`(
- `id` INT(11) PRIMARY KEY NOT NULL AUTO_INCREMENT,
- `func_type` VARCHAR(50) NOT NULL,
- `func_name` VARCHAR(100) NOT NULL,
- `tab_name` VARCHAR(50) NOT NULL COMMENT 'Tab',
- `name_dispaly` VARCHAR(100),
- `param_num` INT(11) DEFAULT 0,
- `ref_name` VARCHAR(100) DEFAULT NULL,
- `description` VARCHAR(200),
- `modify_time` DATETIME DEFAULT NULL,
- `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
- UNIQUE INDEX `job_func_tab_name_idx`(`tab_name`, `func_name`)
-)Engine=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `exchangis_job_func_params`(
- `func_id` INT(11) NOT NULL,
- `param_name` VARCHAR(100) NOT NULL,
- `order` INT(11) DEFAULT 0,
- `name_display` VARCHAR(100),
- `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
- PRIMARY KEY(`func_id`, `param_name`)
-)Engine=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_JOB_DETAILS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `JOB_NAME` varchar(190) NOT NULL,
- `JOB_GROUP` varchar(190) NOT NULL,
- `DESCRIPTION` varchar(250) DEFAULT NULL,
- `JOB_CLASS_NAME` varchar(250) NOT NULL,
- `IS_DURABLE` varchar(1) NOT NULL,
- `IS_NONCONCURRENT` varchar(1) NOT NULL,
- `IS_UPDATE_DATA` varchar(1) NOT NULL,
- `REQUESTS_RECOVERY` varchar(1) NOT NULL,
- `JOB_DATA` blob,
- PRIMARY KEY (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
- KEY `IDX_QRTZ_J_REQ_RECOVERY` (`SCHED_NAME`,`REQUESTS_RECOVERY`),
- KEY `IDX_QRTZ_J_GRP` (`SCHED_NAME`,`JOB_GROUP`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_TRIGGERS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `TRIGGER_NAME` varchar(190) NOT NULL,
- `TRIGGER_GROUP` varchar(190) NOT NULL,
- `JOB_NAME` varchar(190) NOT NULL,
- `JOB_GROUP` varchar(190) NOT NULL,
- `DESCRIPTION` varchar(250) DEFAULT NULL,
- `NEXT_FIRE_TIME` bigint(13) DEFAULT NULL,
- `PREV_FIRE_TIME` bigint(13) DEFAULT NULL,
- `PRIORITY` int(11) DEFAULT NULL,
- `TRIGGER_STATE` varchar(16) NOT NULL,
- `TRIGGER_TYPE` varchar(8) NOT NULL,
- `START_TIME` bigint(13) NOT NULL,
- `END_TIME` bigint(13) DEFAULT NULL,
- `CALENDAR_NAME` varchar(190) DEFAULT NULL,
- `MISFIRE_INSTR` smallint(2) DEFAULT NULL,
- `JOB_DATA` blob,
- PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
- KEY `IDX_QRTZ_T_J` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
- KEY `IDX_QRTZ_T_JG` (`SCHED_NAME`,`JOB_GROUP`),
- KEY `IDX_QRTZ_T_C` (`SCHED_NAME`,`CALENDAR_NAME`),
- KEY `IDX_QRTZ_T_G` (`SCHED_NAME`,`TRIGGER_GROUP`),
- KEY `IDX_QRTZ_T_STATE` (`SCHED_NAME`,`TRIGGER_STATE`),
- KEY `IDX_QRTZ_T_N_STATE` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
- KEY `IDX_QRTZ_T_N_G_STATE` (`SCHED_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
- KEY `IDX_QRTZ_T_NEXT_FIRE_TIME` (`SCHED_NAME`,`NEXT_FIRE_TIME`),
- KEY `IDX_QRTZ_T_NFT_ST` (`SCHED_NAME`,`TRIGGER_STATE`,`NEXT_FIRE_TIME`),
- KEY `IDX_QRTZ_T_NFT_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`),
- KEY `IDX_QRTZ_T_NFT_ST_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_STATE`),
- KEY `IDX_QRTZ_T_NFT_ST_MISFIRE_GRP` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
- CONSTRAINT `EXCHANGIS_QRTZ_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`) REFERENCES `EXCHANGIS_QRTZ_JOB_DETAILS` (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_SIMPLE_TRIGGERS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `TRIGGER_NAME` varchar(190) NOT NULL,
- `TRIGGER_GROUP` varchar(190) NOT NULL,
- `REPEAT_COUNT` bigint(7) NOT NULL,
- `REPEAT_INTERVAL` bigint(12) NOT NULL,
- `TIMES_TRIGGERED` bigint(10) NOT NULL,
- PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
- CONSTRAINT `EXCHANGIS_QRTZ_SIMPLE_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `EXCHANGIS_QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_CRON_TRIGGERS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `TRIGGER_NAME` varchar(190) NOT NULL,
- `TRIGGER_GROUP` varchar(190) NOT NULL,
- `CRON_EXPRESSION` varchar(120) NOT NULL,
- `TIME_ZONE_ID` varchar(80) DEFAULT NULL,
- PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
- CONSTRAINT `EXCHANGIS_QRTZ_CRON_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `EXCHANGIS_QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_SIMPROP_TRIGGERS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `TRIGGER_NAME` varchar(190) NOT NULL,
- `TRIGGER_GROUP` varchar(190) NOT NULL,
- `STR_PROP_1` varchar(512) DEFAULT NULL,
- `STR_PROP_2` varchar(512) DEFAULT NULL,
- `STR_PROP_3` varchar(512) DEFAULT NULL,
- `INT_PROP_1` int(11) DEFAULT NULL,
- `INT_PROP_2` int(11) DEFAULT NULL,
- `LONG_PROP_1` bigint(20) DEFAULT NULL,
- `LONG_PROP_2` bigint(20) DEFAULT NULL,
- `DEC_PROP_1` decimal(13,4) DEFAULT NULL,
- `DEC_PROP_2` decimal(13,4) DEFAULT NULL,
- `BOOL_PROP_1` varchar(1) DEFAULT NULL,
- `BOOL_PROP_2` varchar(1) DEFAULT NULL,
- PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
- CONSTRAINT `EXCHANGIS_QRTZ_SIMPROP_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `EXCHANGIS_QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_BLOB_TRIGGERS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `TRIGGER_NAME` varchar(190) NOT NULL,
- `TRIGGER_GROUP` varchar(190) NOT NULL,
- `BLOB_DATA` blob,
- PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
- KEY `SCHED_NAME` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
- CONSTRAINT `EXCHANGIS_QRTZ_BLOB_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `EXCHANGIS_QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_CALENDARS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `CALENDAR_NAME` varchar(190) NOT NULL,
- `CALENDAR` blob NOT NULL,
- PRIMARY KEY (`SCHED_NAME`,`CALENDAR_NAME`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_PAUSED_TRIGGER_GRPS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `TRIGGER_GROUP` varchar(190) NOT NULL,
- PRIMARY KEY (`SCHED_NAME`,`TRIGGER_GROUP`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_FIRED_TRIGGERS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `ENTRY_ID` varchar(95) NOT NULL,
- `TRIGGER_NAME` varchar(190) NOT NULL,
- `TRIGGER_GROUP` varchar(190) NOT NULL,
- `INSTANCE_NAME` varchar(190) NOT NULL,
- `FIRED_TIME` bigint(13) NOT NULL,
- `SCHED_TIME` bigint(13) NOT NULL,
- `PRIORITY` int(11) NOT NULL,
- `STATE` varchar(16) NOT NULL,
- `JOB_NAME` varchar(190) DEFAULT NULL,
- `JOB_GROUP` varchar(190) DEFAULT NULL,
- `IS_NONCONCURRENT` varchar(1) DEFAULT NULL,
- `REQUESTS_RECOVERY` varchar(1) DEFAULT NULL,
- PRIMARY KEY (`SCHED_NAME`,`ENTRY_ID`),
- KEY `IDX_QRTZ_FT_TRIG_INST_NAME` (`SCHED_NAME`,`INSTANCE_NAME`),
- KEY `IDX_QRTZ_FT_INST_JOB_REQ_RCVRY` (`SCHED_NAME`,`INSTANCE_NAME`,`REQUESTS_RECOVERY`),
- KEY `IDX_QRTZ_FT_J_G` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
- KEY `IDX_QRTZ_FT_JG` (`SCHED_NAME`,`JOB_GROUP`),
- KEY `IDX_QRTZ_FT_T_G` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
- KEY `IDX_QRTZ_FT_TG` (`SCHED_NAME`,`TRIGGER_GROUP`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_SCHEDULER_STATE` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `INSTANCE_NAME` varchar(190) NOT NULL,
- `LAST_CHECKIN_TIME` bigint(13) NOT NULL,
- `CHECKIN_INTERVAL` bigint(13) NOT NULL,
- PRIMARY KEY (`SCHED_NAME`,`INSTANCE_NAME`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-CREATE TABLE IF NOT EXISTS `EXCHANGIS_QRTZ_LOCKS` (
- `SCHED_NAME` varchar(120) NOT NULL,
- `LOCK_NAME` varchar(40) NOT NULL,
- PRIMARY KEY (`SCHED_NAME`,`LOCK_NAME`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
--- Init Tab
-INSERT INTO `exchangis_tab`(`name`, `description`, `type`) VALUES ('DATAX', 'Alibaba DataX Engine', 0) ON DUPLICATE KEY UPDATE `type` = 0;
-INSERT INTO `exchangis_tab`(`name`, `description`, `type`) VALUES ('SQOOP', 'Apache Sqoop Engine', 0) ON DUPLICATE KEY UPDATE `type` = 0;
-
--- Init Queue
-INSERT INTO `exchangis_queue_info`(`id`, `qname`, `description`) VALUES(1, 'public-queue-01','none') ON DUPLICATE KEY UPDATE `description` = 'none';
-
--- Add Data Source Owner
-INSERT INTO `exchangis_data_source_owner`(`id`, `owner_name`, `owner_desc`) VALUES(1, 'Exchangis', 'WeDataSphere Exchangis') ON DUPLICATE KEY UPDATE `owner_name` = 'Exchangis';
-
--- Add Admin User
-INSERT INTO `exchangis_user_info`(`username`, `password`, `user_type`, `update_time`) VALUES('admin', '3ef7164d1f6167cb9f2658c07d3c2f0a', 2, now()) ON DUPLICATE KEY UPDATE `user_type` = 2;
-
--- Add Job Function
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`) VALUES(1, 'TRANSFORM', 'DATAX', 'dx_substr', 2) ON DUPLICATE KEY UPDATE `func_type` = 'TRANSFROM';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`) VALUES(2, 'TRANSFORM', 'DATAX', 'dx_pad', 3) ON DUPLICATE KEY UPDATE `func_type` = 'TRANSFROM';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`) VALUES(3, 'TRANSFORM', 'DATAX', 'dx_replace', 3) ON DUPLICATE KEY UPDATE `func_type` = 'TRANSFROM';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`, `ref_name`) VALUES(4, 'VERIFY', 'DATAX', 'like', 1, 'dx_filter') ON DUPLICATE KEY UPDATE `func_type` = 'VERIFY';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`, `ref_name`) VALUES(5, 'VERIFY', 'DATAX', 'not like', 1, 'dx_filter') ON DUPLICATE KEY UPDATE `func_type` = 'VERIFY';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`, `ref_name`) VALUES(6, 'VERIFY', 'DATAX', '>', 1, 'dx_filter') ON DUPLICATE KEY UPDATE `func_type` = 'VERIFY';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`, `ref_name`) VALUES(7, 'VERIFY', 'DATAX', '<', 1, 'dx_filter') ON DUPLICATE KEY UPDATE `func_type` = 'VERIFY';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`, `ref_name`) VALUES(8, 'VERIFY', 'DATAX', '=', 1, 'dx_filter') ON DUPLICATE KEY UPDATE `func_type` = 'VERIFY';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`, `ref_name`) VALUES(9, 'VERIFY', 'DATAX', '!=', 1, 'dx_filter') ON DUPLICATE KEY UPDATE `func_type` = 'VERIFY';
-INSERT INTO `exchangis_job_func`(`id`,`func_type`, `tab_name`, `func_name`, `param_num`, `ref_name`) VALUES(10, 'VERIFY', 'DATAX', '>=', 1, 'dx_filter') ON DUPLICATE KEY UPDATE `func_type` = 'VERIFY';
-
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(1, 'startIndex', 'startIndex', 0) ON DUPLICATE KEY UPDATE `name_display` = 'startIndex';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(1, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'padType', 'padType(r or l)', 0) ON DUPLICATE KEY UPDATE `name_display` = 'padType(r or l)';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'padString', 'padString', 2) ON DUPLICATE KEY UPDATE `name_display` = 'padString';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'startIndex', 'startIndex', 0) ON DUPLICATE KEY UPDATE `name_display` = 'startIndex';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'replaceString', 'replaceString', 2) ON DUPLICATE KEY UPDATE `name_display` = 'replaceString';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(4, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(5, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(6, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(7, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(8, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(9, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
-INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(10, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
\ No newline at end of file
diff --git a/bin/install.sh b/bin/install.sh
deleted file mode 100644
index 324954009..000000000
--- a/bin/install.sh
+++ /dev/null
@@ -1,283 +0,0 @@
-#!/bin/bash
-#
-# Copyright 2020 WeBank
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-export BASE_LOG_DIR=""
-export BASE_CONF_DIR=""
-export BASE_DATA_DIR=""
-DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
-SHELL_LOG="${DIR}/console.out"
-export SQL_SOURCE_PATH="${DIR}/exchangis-init.sql"
-MODULES_DIR="${DIR}/../modules"
-PACKAGE_DIR="${DIR}/../packages"
-MODULE_LIST=()
-CONF_FILE_PATH="bin/configure.sh"
-FORCE_INSTALL=false
-SKIP_PACKAGE=false
-SAFE_MODE=true
-UNSAFE_COMMAND=""
-USER=`whoami`
-SUDO_USER=false
-
-usage(){
- printf "\033[1m Install usage:\n\033[0m"
- printf "\t%-15s %-15s %-2s \n" "-m|--modules" "modules to install" "Define the modules to install"
- printf "\t%-15s %-15s %-2s \n" "-f|--force" "force install" "Force program to install modules"
- printf "\t%-15s %-15s %-2s \n" "--skip-pack" "do not decompress" "Skip the phrase of decompressing packages"
- printf "\t%-15s %-15s %-2s \n" "--unsafe" "unsafe mode" "Will clean the module directory existed"
- printf "\t%-15s %-15s %-2s \n" "--safe" "safe mode" "Will not modify the module directory existed (Default)"
- printf "\t%-15s %-15s %-2s \n" "-h|--help" "usage" "View command list"
-}
-
-function LOG(){
- currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
- echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
-}
-
-is_sudo_user(){
- sudo -v >/dev/null 2>&1
-}
-
-clean_modules(){
- if [ ${#MODULE_LIST[@]} -gt 0 ]; then
- for server in ${MODULE_LIST[@]}
- do
- rm -rf ${MODULES_DIR}/${server}
- done
- else
- rm -rf ${MODULES_DIR}/*
- fi
-}
-
-uncompress_packages(){
- local list=`ls ${PACKAGE_DIR}`
- for pack in ${list}
- do
- local uncompress=true
- if [ ${#PACKAGE_NAMES[@]} -gt 0 ]; then
- uncompress=false
- for server in ${PACKAGE_NAMES[@]}
- do
- if [ ${server} == ${pack%%.tar.gz*} ] || [ ${server} == ${pack%%.zip*} ]; then
- uncompress=true
- break
- fi
- done
- fi
- if [ ${uncompress} == true ]; then
- if [[ ${pack} =~ tar\.gz$ ]]; then
- local do_uncompress=0
- if [ ${FORCE_INSTALL} == false ]; then
- interact_echo "Do you want to decompress this package: [${pack}]?"
- do_uncompress=$?
- fi
- if [ ${do_uncompress} == 0 ]; then
- LOG INFO "\033[1m Uncompress package: [${pack}] to modules directory\033[0m"
- tar --skip-old-files -zxf ${PACKAGE_DIR}/${pack} -C ${MODULES_DIR}
- fi
- elif [[ ${pack} =~ zip$ ]]; then
- local do_uncompress=0
- if [ ${FORCE_INSTALL} == false ]; then
- interact_echo "Do you want to decompress this package: [${pack}]?"
- do_uncompress=$?
- fi
- if [ ${do_uncompress} == 0 ]; then
- LOG INFO "\033[1m Uncompress package: [${pack}] to modules directory\033[0m"
- unzip -nq ${PACKAGE_DIR}/${pack} -d ${MODULES_DIR}
- fi
- fi
- # skip other packages
- fi
- done
-}
-
-interact_echo(){
- while [ 1 ]; do
- read -p "$1 (Y/N)" yn
- if [ "${yn}x" == "Yx" ] || [ "${yn}x" == "yx" ]; then
- return 0
- elif [ "${yn}x" == "Nx" ] || [ "${yn}x" == "nx" ]; then
- return 1
- else
- echo "Unknown choise: [$yn], please choose again."
- fi
- done
-}
-
-install_modules(){
- LOG INFO "\033[1m ####### Start To Install Modules ######\033[0m"
- LOG INFO "Module servers could be installed:"
- for server in ${MODULE_LIST[@]}
- do
- printf "\\033[1m [${server}] \033[0m"
- done
- echo ""
- for server in ${MODULE_LIST[@]}
- do
- if [ ${FORCE_INSTALL} == false ]; then
- interact_echo "Do you want to confiugre and install [${server}]?"
- if [ $? == 0 ]; then
- LOG INFO "\033[1m Install module server: [${server}]\033[0m"
- # Call configure.sh
- ${MODULES_DIR}/${server}/${CONF_FILE_PATH} ${UNSAFE_COMMAND}
- fi
- else
- LOG INFO "\033[1m Install module server: [${server}]\033[0m"
- # Call configure.sh
- ${MODULES_DIR}/${server}/${CONF_FILE_PATH} ${UNSAFE_COMMAND}
- fi
- done
- LOG INFO "\033[1m ####### Finish To Install Modules ######\033[0m"
-}
-
-scan_to_install_modules(){
- echo "Scan modules directory: [$1] to find server under exchangis"
- let c=0
- ls_out=`ls $1`
- for dir in ${ls_out}
- do
- if test -e "$1/${dir}/${CONF_FILE_PATH}"; then
- MODULE_LIST[$c]=${dir}
- ((c++))
- fi
- done
- install_modules
-}
-
-while [ 1 ]; do
- case ${!OPTIND} in
- -h|--help)
- usage
- exit 0
- ;;
- -m|--modules)
- i=1
- if [ -z $2 ]; then
- echo "Empty modules"
- exit 1
- fi
- while [ 1 ]; do
- split=`echo $2|cut -d "," -f${i}`
- if [ "$split" != "" ];
- then
- c=$(($i - 1))
- MODULE_LIST[$c]=${split}
- i=$(($i + 1))
- else
- break
- fi
- if [ "`echo $2 |grep ","`" == "" ]; then
- break
- fi
- done
- shift 2
- ;;
- -f|--force)
- FORCE_INSTALL=true
- shift 1
- ;;
- --skip-pack)
- SKIP_PACKAGE=true
- shift 1
- ;;
- --safe)
- SAFE_MODE=true
- UNSAFE_COMMAND=""
- shift 1
- ;;
- --unsafe)
- SAFE_MODE=false
- UNSAFE_COMMAND="--unsafe"
- shift 1
- ;;
- "")
- break
- ;;
- *)
- echo "Argument error! " 1>&2
- exit 1
- ;;
- esac
-done
-
-is_sudo_user
-if [ $? == 0 ]; then
- SUDO_USER=true
-fi
-MODULE_LIST_RESOLVED=()
-if [ ${#MODULE_LIST[@]} -gt 0 ]; then
- c=0
- RESOLVED_DIR=${PACKAGE_DIR}
- if [ ${SKIP_PACKAGE} == true ]; then
- RESOLVED_DIR=${MODULES_DIR}
- fi
- for server in ${MODULE_LIST[@]}
- do
- server_list=`ls ${RESOLVED_DIR} | grep -E "^(${server}|${server}_[0-9]+\\.[0-9]+\\.[0-9]+\\.RELEASE_[0-9]+)(\\.tar\\.gz|\\.zip|)$"`
- for _server in ${server_list}
- do
- # More better method to cut string?
- _server=${_server%%.tar.gz*}
- _server=${_server%%zip*}
- MODULE_LIST_RESOLVED[$c]=${_server}
- c=$(($c + 1))
- done
- done
- if [ ${SKIP_PACKAGE} == true ]; then
- MODULE_LIST=${MODULE_LIST_RESOLVED}
- else
- PACKAGE_NAMES=${MODULE_LIST_RESOLVED}
- fi
-fi
-
-if [ ! -d ${MODULES_DIR} ]; then
- LOG INFO "Creating directory: ["${MODULES_DIR}"]."
- mkdir -p ${MODULES_DIR}
-fi
-
-if [ ${SAFE_MODE} == false ]; then
- LOG INFO "\033[1m ####### Start To Clean Modules Directory ######\033[0m"
- LOG INFO "Cleanning...."
- if [ ${MODULES_DIR} == "" ] || [ ${MODULES_DIR} == "/" ]; then
- LOG INFO "Illegal modules directory: ${MODULES_DIR}" 1>&2
- exit 1
- fi
- clean_modules
- LOG INFO "\033[1m ####### Finish To Clean Modules Directory ######\033[0m"
-fi
-
-if [ ${SKIP_PACKAGE} == false ]; then
- LOG INFO "\033[1m ####### Start To Uncompress Packages ######\033[0m"
- LOG INFO "Uncompressing...."
- uncompress_packages
- LOG INFO "\033[1m ####### Finish To Umcompress Packages ######\033[0m"
-fi
-
-if [ ${#MODULE_LIST[@]} -gt 0 ]; then
- for server in ${MODULE_LIST}
- do
- if [ ! -f ${MODULES_DIR}/${server}/${CONF_FILE_PATH} ]; then
- LOG INFO "Module [${server}] defined doesn't have configure.sh shell" 1>&2
- exit 1
- fi
- done
- install_modules
-else
- # Scan modules directory
- scan_to_install_modules ${MODULES_DIR}
-fi
-
-exit 0
-
diff --git a/bin/start-all.sh b/bin/start-all.sh
deleted file mode 100644
index 8e2abd368..000000000
--- a/bin/start-all.sh
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/bin/bash
-#
-# Copyright 2020 WeBank
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-START_MODULES=("exchangis-eureka" "exchangis-gateway" "exchangis-service" "exchangis-executor")
-
-function LOG(){
- currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
- echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
-}
-
-abs_path(){
- SOURCE="${BASH_SOURCE[0]}"
- while [ -h "${SOURCE}" ]; do
- DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
- SOURCE="$(readlink "${SOURCE}")"
- [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
- done
- echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
-}
-
-BIN=`abs_path`
-SHELL_LOG="${BIN}/console.out"
-
-LOG INFO "\033[1m Try To Start Modules In Order \033[0m"
-for module in ${START_MODULES[@]}
-do
- ${BIN}/start.sh -m ${module}
- if [ $? != 0 ]; then
- LOG ERROR "\033[1m Start Modules [${module}] Failed! \033[0m"
- exit 1
- fi
-done
\ No newline at end of file
diff --git a/bin/start.sh b/bin/start.sh
deleted file mode 100644
index 8456d3a90..000000000
--- a/bin/start.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-#
-# Copyright 2020 WeBank
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-MODULE_NAME=""
-MODULE_DEFAULT_PREFIX="exchangis-"
-usage(){
- echo "Usage is [-m module will be started]"
-}
-
-function LOG(){
- currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
- echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
-}
-
-abs_path(){
- SOURCE="${BASH_SOURCE[0]}"
- while [ -h "${SOURCE}" ]; do
- DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
- SOURCE="$(readlink "${SOURCE}")"
- [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
- done
- echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
-}
-
-BIN=`abs_path`
-MODULE_DIR=${BIN}/../modules
-SHELL_LOG="${BIN}/console.out"
-
-interact_echo(){
- while [ 1 ]; do
- read -p "$1 (Y/N)" yn
- if [ "${yn}x" == "Yx" ] || [ "${yn}x" == "yx" ]; then
- return 0
- elif [ "${yn}x" == "Nx" ] || [ "${yn}x" == "nx" ]; then
- return 1
- else
- echo "Unknown choise: [$yn], please choose again."
- fi
- done
-}
-
-start_single_module(){
- LOG INFO "\033[1m ####### Begin To Start Module: [$1] ######\033[0m"
- if [ -f "${MODULE_DIR}/$1/bin/$1.sh" ]; then
- ${MODULE_DIR}/$1/bin/$1.sh start
- elif [[ $1 != ${MODULE_DEFAULT_PREFIX}* ]] && [ -f "${MODULE_DIR}/${MODULE_DEFAULT_PREFIX}$1/bin/${MODULE_DEFAULT_PREFIX}$1.sh" ]; then
- interact_echo "Do you mean [${MODULE_DEFAULT_PREFIX}$1] ?"
- if [ $? == 0 ]; then
- ${MODULE_DIR}/${MODULE_DEFAULT_PREFIX}$1/bin/${MODULE_DEFAULT_PREFIX}$1.sh start
- fi
- else
- LOG ERROR "Cannot find the startup script for module: [$1], please check your installation"
- exit 1
- fi
-}
-
-while [ 1 ]; do
- case ${!OPTIND} in
- -m|--modules)
- if [ -z $2 ]; then
- LOG ERROR "No module provided"
- exit 1
- fi
- MODULE_NAME=$2
- shift 2
- ;;
- "")
- break
- ;;
- *)
- usage
- exit 1
- ;;
- esac
-done
-
-if [ "x${MODULE_NAME}" == "x" ]; then
- usage
- exit 1
-fi
-
-start_single_module ${MODULE_NAME}
-exit $?
diff --git a/bin/stop-all.sh b/bin/stop-all.sh
deleted file mode 100644
index 357b01d29..000000000
--- a/bin/stop-all.sh
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/bin/bash
-#
-# Copyright 2020 WeBank
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-STOP_MODULES=("exchangis-executor" "exchangis-service" "exchangis-gateway" "exchangis-eureka")
-
-function LOG(){
- currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
- echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
-}
-
-abs_path(){
- SOURCE="${BASH_SOURCE[0]}"
- while [ -h "${SOURCE}" ]; do
- DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
- SOURCE="$(readlink "${SOURCE}")"
- [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
- done
- echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
-}
-
-BIN=`abs_path`
-SHELL_LOG="${BIN}/console.out"
-
-LOG INFO "\033[1m Try to Stop Modules In Order \033[0m"
-for module in ${STOP_MODULES[@]}
-do
- ${BIN}/stop.sh -m ${module}
- if [ $? != 0 ]; then
- LOG ERROR "\033[1m Stop Modules [${module}] Failed! \033[0m"
- exit 1
- fi
-done
\ No newline at end of file
diff --git a/bin/stop.sh b/bin/stop.sh
deleted file mode 100644
index d28c38cac..000000000
--- a/bin/stop.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-#
-# Copyright 2020 WeBank
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-MODULE_NAME=""
-MODULE_DEFAULT_PREFIX="exchangis-"
-usage(){
- echo "Usage is [-m module will be stoped]"
-}
-
-function LOG(){
- currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
- echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
-}
-
-abs_path(){
- SOURCE="${BASH_SOURCE[0]}"
- while [ -h "${SOURCE}" ]; do
- DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
- SOURCE="$(readlink "${SOURCE}")"
- [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
- done
- echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
-}
-
-BIN=`abs_path`
-MODULE_DIR=${BIN}/../modules
-SHELL_LOG="${BIN}/console.out"
-
-interact_echo(){
- while [ 1 ]; do
- read -p "$1 (Y/N)" yn
- if [ "${yn}x" == "Yx" ] || [ "${yn}x" == "yx" ]; then
- return 0
- elif [ "${yn}x" == "Nx" ] || [ "${yn}x" == "nx" ]; then
- return 1
- else
- echo "Unknown choise: [$yn], please choose again."
- fi
- done
-}
-
-stop_single_module(){
- LOG INFO "\033[1m ####### Begin To Stop Module: [$1] ######\033[0m"
- if [ -f "${MODULE_DIR}/$1/bin/$1.sh" ]; then
- ${MODULE_DIR}/$1/bin/$1.sh stop
- elif [[ $1 != ${MODULE_DEFAULT_PREFIX}* ]] && [ -f "${MODULE_DIR}/${MODULE_DEFAULT_PREFIX}$1/bin/${MODULE_DEFAULT_PREFIX}$1.sh" ]; then
- interact_echo "Do you mean [${MODULE_DEFAULT_PREFIX}$1] ?"
- if [ $? == 0 ]; then
- ${MODULE_DIR}/${MODULE_DEFAULT_PREFIX}$1/bin/${MODULE_DEFAULT_PREFIX}$1.sh stop
- fi
- else
- LOG ERROR "Cannot find the stop script for module: [$1], please check your installation"
- exit 1
- fi
-}
-
-while [ 1 ]; do
- case ${!OPTIND} in
- -m|--modules)
- if [ -z $2 ]; then
- LOG ERROR "No module provided"
- exit 1
- fi
- MODULE_NAME=$2
- shift 2
- ;;
- "")
- break
- ;;
- *)
- usage
- exit 1
- ;;
- esac
-done
-
-if [ "x${MODULE_NAME}" == "x" ]; then
- usage
- exit 1
-fi
-
-stop_single_module ${MODULE_NAME}
-exit $?
\ No newline at end of file
diff --git a/db/exchangis_ddl.sql b/db/exchangis_ddl.sql
new file mode 100644
index 000000000..826c511c4
--- /dev/null
+++ b/db/exchangis_ddl.sql
@@ -0,0 +1,170 @@
+-- exchangis_v4.exchangis_job_ds_bind definition
+
+CREATE TABLE `exchangis_job_ds_bind` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `job_id` bigint(20) NOT NULL,
+ `task_index` int(11) NOT NULL,
+ `source_ds_id` bigint(20) NOT NULL,
+ `sink_ds_id` bigint(20) NOT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=59575 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+
+-- exchangis_v4.exchangis_job_entity definition
+
+CREATE TABLE `exchangis_job_entity` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `name` varchar(100) NOT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `engine_type` varchar(45) DEFAULT '',
+ `job_labels` varchar(255) DEFAULT NULL,
+ `create_user` varchar(100) DEFAULT NULL,
+ `job_content` mediumtext,
+ `execute_user` varchar(100) DEFAULT '',
+ `job_params` text,
+ `job_desc` varchar(255) DEFAULT NULL,
+ `job_type` varchar(50) DEFAULT NULL,
+ `project_id` bigint(13) DEFAULT NULL,
+ `source` text,
+ `modify_user` varchar(50) DEFAULT NULL COMMENT 'Modify User',
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=5793 DEFAULT CHARSET=utf8;
+
+
+-- exchangis_v4.exchangis_job_param_config definition
+
+CREATE TABLE `exchangis_job_param_config` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `config_key` varchar(64) NOT NULL,
+ `config_name` varchar(64) NOT NULL,
+ `config_direction` varchar(16) DEFAULT NULL,
+ `type` varchar(32) NOT NULL,
+ `ui_type` varchar(32) DEFAULT NULL,
+ `ui_field` varchar(64) DEFAULT NULL,
+ `ui_label` varchar(32) DEFAULT NULL,
+ `unit` varchar(32) DEFAULT NULL,
+ `required` bit(1) DEFAULT b'0',
+ `value_type` varchar(32) DEFAULT NULL,
+ `value_range` varchar(255) DEFAULT NULL,
+ `default_value` varchar(255) DEFAULT NULL,
+ `validate_type` varchar(64) DEFAULT NULL,
+ `validate_range` varchar(64) DEFAULT NULL,
+ `validate_msg` varchar(255) DEFAULT NULL,
+ `is_hidden` bit(1) DEFAULT NULL,
+ `is_advanced` bit(1) DEFAULT NULL,
+ `source` varchar(255) DEFAULT NULL,
+ `level` tinyint(4) DEFAULT NULL,
+ `treename` varchar(32) DEFAULT NULL,
+ `sort` int(11) DEFAULT NULL,
+ `description` varchar(255) DEFAULT NULL,
+ `status` tinyint(4) DEFAULT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=utf8;
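+
+-- Illustrative usage sketch (an assumption added for clarity, not part of the
+-- original schema script): the front end can load the setting form for one
+-- engine type with a query like the following.
+SELECT config_key, config_name, ui_type, ui_label, default_value, validate_range
+FROM exchangis_job_param_config
+WHERE `type` = 'DATAX' AND status = 1
+ORDER BY sort;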
+
+-- exchangis_v4.exchangis_project_info definition
+
+CREATE TABLE `exchangis_project_info` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `name` varchar(64) NOT NULL,
+ `description` varchar(255) DEFAULT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `create_user` varchar(64) DEFAULT NULL,
+ `last_update_user` varchar(64) DEFAULT NULL,
+ `project_labels` varchar(255) DEFAULT NULL,
+ `domain` varchar(32) DEFAULT NULL,
+ `exec_users` varchar(255) DEFAULT NULL,
+ `view_users` varchar(255) DEFAULT NULL,
+ `edit_users` varchar(255) DEFAULT NULL,
+ `source` text,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=1497870871035973934 DEFAULT CHARSET=utf8;
+
+-- exchangis_v4.exchangis_launchable_task definition
+
+CREATE TABLE `exchangis_launchable_task` (
+ `id` bigint(13) NOT NULL,
+ `name` varchar(100) NOT NULL,
+ `job_execution_id` varchar(64) DEFAULT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `engine_type` varchar(45) DEFAULT '',
+ `execute_user` varchar(50) DEFAULT '',
+ `linkis_job_name` varchar(100) NOT NULL,
+ `linkis_job_content` text NOT NULL,
+ `linkis_params` varchar(255) DEFAULT NULL,
+ `linkis_source` varchar(64) DEFAULT NULL,
+ `labels` varchar(64) DEFAULT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_v4.exchangis_launched_job_entity definition
+
+CREATE TABLE `exchangis_launched_job_entity` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `name` varchar(100) NOT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `job_id` bigint(20) DEFAULT NULL,
+ `launchable_task_num` int(20) DEFAULT '0',
+ `engine_type` varchar(100) DEFAULT NULL,
+ `execute_user` varchar(100) DEFAULT NULL,
+ `job_name` varchar(100) DEFAULT NULL,
+ `status` varchar(100) DEFAULT NULL,
+ `progress` varchar(100) DEFAULT NULL,
+ `error_code` varchar(64) DEFAULT NULL,
+ `error_msg` varchar(255) DEFAULT NULL,
+ `retry_num` bigint(10) DEFAULT NULL,
+ `job_execution_id` varchar(255) DEFAULT NULL,
+ `log_path` varchar(255) DEFAULT NULL,
+ `create_user` varchar(100) DEFAULT NULL,
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `job_execution_id_UNIQUE` (`job_execution_id`)
+) ENGINE=InnoDB AUTO_INCREMENT=8380 DEFAULT CHARSET=utf8;
+
+-- exchangis_v4.exchangis_launched_task_entity definition
+
+CREATE TABLE `exchangis_launched_task_entity` (
+ `id` bigint(20) NOT NULL,
+ `name` varchar(100) NOT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `job_id` bigint(20) DEFAULT NULL,
+ `engine_type` varchar(100) DEFAULT NULL,
+ `execute_user` varchar(100) DEFAULT NULL,
+ `job_name` varchar(100) DEFAULT NULL,
+ `progress` varchar(64) DEFAULT NULL,
+ `error_code` varchar(64) DEFAULT NULL,
+ `error_msg` varchar(255) DEFAULT NULL,
+ `retry_num` bigint(10) DEFAULT NULL,
+ `task_id` varchar(64) DEFAULT NULL,
+ `linkis_job_id` varchar(200) DEFAULT NULL,
+ `linkis_job_info` varchar(1000) DEFAULT NULL,
+ `job_execution_id` varchar(100) DEFAULT NULL,
+ `launch_time` datetime DEFAULT NULL,
+ `running_time` datetime DEFAULT NULL,
+ `metrics` text,
+ `status` varchar(64) DEFAULT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+INSERT INTO exchangis_job_param_config (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status) VALUES
+('setting.speed.bytes','作业速率限制','','DATAX','INPUT','setting.speed.bytes','作业速率限制','Mb/s',1,'NUMBER','','','REGEX','^[1-9]\\d*$','作业速率限制输入错误',0,0,'',1,'',1,'',1)
+,('setting.speed.records','作业记录数限制','','DATAX','INPUT','setting.speed.records','作业记录数限制','条/s',1,'NUMBER','','','REGEX','^[1-9]\\d*$','作业记录数限制输入错误',0,0,'',1,'',2,'',1)
+,('setting.max.parallelism','作业最大并行度','','DATAX','INPUT','setting.max.parallelism','作业最大并行度','个',1,'NUMBER','','1','REGEX','^[1-9]\\d*$','作业最大并行度输入错误',0,0,'',1,'',3,'',1)
+,('setting.max.memory','作业最大使用内存','','DATAX','INPUT','setting.max.memory','作业最大使用内存','Mb',1,'NUMBER','','1024','REGEX','^[1-9]\\d*$','作业最大使用内存输入错误',0,0,'',1,'',4,'',1)
+,('setting.errorlimit.record','最多错误记录数','','DATAX','INPUT','setting.errorlimit.record','最多错误记录数','条',1,'NUMBER','','','REGEX','^[1-9]\\d*$','最多错误记录数输入错误',0,0,'',1,'',5,'',1)
+,('setting.max.parallelism','作业最大并行数','','SQOOP','INPUT','setting.max.parallelism','作业最大并行数','个',1,'NUMBER','','1','REGEX','^[1-9]\\d*$','作业最大并行数输入错误',0,0,'',1,'',1,'',1)
+,('setting.max.memory','作业最大内存','','SQOOP','INPUT','setting.max.memory','作业最大内存','Mb',1,'NUMBER','','1024','REGEX','^[1-9]\\d*$','作业最大内存输入错误',0,0,'',1,'',2,'',1)
+,('where','WHERE条件','SOURCE','MYSQL','INPUT','where','WHERE条件','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,500}$','WHERE条件输入过长',0,0,'',1,'',2,'',1)
+,('writeMode','写入方式','SQOOP-SINK','HIVE','OPTION','writeMode','写入方式(OVERWRITE只对TEXT类型表生效)','',1,'OPTION','["OVERWRITE","APPEND"]','OVERWRITE','','','写入方式输入错误',0,0,'',1,'',1,'',1)
+,('partition','分区信息','SINK','HIVE','MAP','partition','分区信息(文本)','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,50}$','分区信息过长',0,0,'/api/rest_j/v1/dss/exchangis/main/datasources/render/partition/element/map',1,'',2,'',1)
+;
+INSERT INTO exchangis_job_param_config (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status) VALUES
+('partition','分区信息','SOURCE','HIVE','MAP','partition','分区信息(文本)',NULL,0,'VARCHAR',NULL,NULL,'REGEX','^[\\s\\S]{0,50}$','分区信息过长',0,0,'/api/rest_j/v1/dss/exchangis/main/datasources/render/partition/element/map',1,NULL,1,NULL,1)
+,('writeMode','写入方式','SQOOP-SINK','MYSQL','OPTION','writeMode','写入方式',NULL,1,'OPTION','["INSERT","UPDATE"]','INSERT',NULL,NULL,'写入方式输入错误',0,0,NULL,1,NULL,1,NULL,1)
+;
\ No newline at end of file
diff --git a/db/exchangis_dml.sql b/db/exchangis_dml.sql
new file mode 100644
index 000000000..6ea326fb9
--- /dev/null
+++ b/db/exchangis_dml.sql
@@ -0,0 +1,17 @@
+-- Insert job_param_config records
+INSERT INTO exchangis_job_param_config (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status) VALUES
+('setting.speed.bytes','作业速率限制','','DATAX','INPUT','setting.speed.bytes','作业速率限制','Mb/s',1,'NUMBER','','','REGEX','^[1-9]\\d*$','作业速率限制输入错误',0,0,'',1,'',1,'',1)
+,('setting.speed.records','作业记录数限制','','DATAX','INPUT','setting.speed.records','作业记录数限制','条/s',1,'NUMBER','','','REGEX','^[1-9]\\d*$','作业记录数限制输入错误',0,0,'',1,'',2,'',1)
+,('setting.max.parallelism','作业最大并行度','','DATAX','INPUT','setting.max.parallelism','作业最大并行度','个',1,'NUMBER','','1','REGEX','^[1-9]\\d*$','作业最大并行度输入错误',0,0,'',1,'',3,'',1)
+,('setting.max.memory','作业最大使用内存','','DATAX','INPUT','setting.max.memory','作业最大使用内存','Mb',1,'NUMBER','','1024','REGEX','^[1-9]\\d*$','作业最大使用内存输入错误',0,0,'',1,'',4,'',1)
+,('setting.errorlimit.record','最多错误记录数','','DATAX','INPUT','setting.errorlimit.record','最多错误记录数','条',1,'NUMBER','','','REGEX','^[1-9]\\d*$','最多错误记录数输入错误',0,0,'',1,'',5,'',1)
+,('setting.max.parallelism','作业最大并行数','','SQOOP','INPUT','setting.max.parallelism','作业最大并行数','个',1,'NUMBER','','1','REGEX','^[1-9]\\d*$','作业最大并行数输入错误',0,0,'',1,'',1,'',1)
+,('setting.max.memory','作业最大内存','','SQOOP','INPUT','setting.max.memory','作业最大内存','Mb',1,'NUMBER','','1024','REGEX','^[1-9]\\d*$','作业最大内存输入错误',0,0,'',1,'',2,'',1)
+,('where','WHERE条件','SOURCE','MYSQL','INPUT','where','WHERE条件','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,500}$','WHERE条件输入过长',0,0,'',1,'',2,'',1)
+,('writeMode','写入方式','SQOOP-SINK','HIVE','OPTION','writeMode','写入方式(OVERWRITE只对TEXT类型表生效)','',1,'OPTION','["OVERWRITE","APPEND"]','OVERWRITE','','','写入方式输入错误',0,0,'',1,'',1,'',1)
+,('partition','分区信息','SINK','HIVE','MAP','partition','分区信息(文本)','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,50}$','分区信息过长',0,0,'/api/rest_j/v1/dss/exchangis/main/datasources/render/partition/element/map',1,'',2,'',1)
+;
+INSERT INTO exchangis_job_param_config (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status) VALUES
+('partition','分区信息','SOURCE','HIVE','MAP','partition','分区信息(文本)',NULL,0,'VARCHAR',NULL,NULL,'REGEX','^[\\s\\S]{0,50}$','分区信息过长',0,0,'/api/rest_j/v1/dss/exchangis/main/datasources/render/partition/element/map',1,NULL,1,NULL,1)
+,('writeMode','写入方式','SQOOP-SINK','MYSQL','OPTION','writeMode','写入方式',NULL,1,'OPTION','["INSERT","UPDATE"]','INSERT',NULL,NULL,'写入方式输入错误',0,0,NULL,1,NULL,1,NULL,1)
+;
\ No newline at end of file
diff --git a/db/job_content_example.json b/db/job_content_example.json
new file mode 100644
index 000000000..5046a8ed4
--- /dev/null
+++ b/db/job_content_example.json
@@ -0,0 +1,76 @@
+{
+ "dataSources": {
+ "source_id": "HIVE.10001.test_db.test_table",
+ "sink_id": "MYSQL.10002.mask_db.mask_table"
+ },
+ "params": {
+ "sources": [
+ {
+ "config_key": "exchangis.job.ds.params.hive.transform_type",
+ "config_name": "传输方式",
+ "config_value": "二进制",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.partitioned_by",
+ "config_name": "分区信息",
+ "config_value": "2021-07-30",
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.empty_string",
+ "config_name": "空值字符",
+ "config_value": "",
+ "sort": 3
+ }
+ ],
+ "sinks": [
+ {
+ "config_key": "exchangis.job.ds.params.mysql.write_type",
+ "config_name": "写入方式",
+ "config_value": "insert",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.mysql.batch_size",
+ "config_name": "批量大小",
+ "config_value": 1000,
+ "sort": 2
+ }
+ ]
+ },
+ "transforms": [
+ {
+ "source_field_name": "name",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "c_name",
+ "sink_field_type": "VARCHAR"
+ },
+ {
+ "source_field_name": "year",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "d_year",
+ "sink_field_type": "VARCHAR"
+ }
+ ],
+ "settings": [
+ {
+ "config_key": "rate_limit",
+ "config_name": "作业速率限制",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "record_limit",
+ "config_name": "作业记录数限制",
+ "config_value": 10000,
+ "sort": 2
+ },
+ {
+ "config_key": "max_errors",
+ "config_name": "最多错误记录数",
+ "config_value": 100,
+ "sort": 3
+ }
+ ]
+}
\ No newline at end of file
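
The DDL above defines `exchangis_job_entity` with a `job_content` column; as a hedged sketch (the values are illustrative, not taken from a real deployment), a job content document like the one in this file could be attached to a job row as follows:

```sql
-- Assumed usage: store the job content JSON on the job row.
INSERT INTO exchangis_job_entity
  (name, create_time, engine_type, create_user, job_content, job_type)
VALUES
  ('demo_job', NOW(), 'DATAX', 'admin',
   '{"dataSources":{"source_id":"HIVE.10001.test_db.test_table","sink_id":"MYSQL.10002.mask_db.mask_table"}}',
   'OFFLINE');  -- the job_type value is an assumption for illustration
```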
diff --git a/db/job_content_example_batch.json b/db/job_content_example_batch.json
new file mode 100644
index 000000000..864bf89f0
--- /dev/null
+++ b/db/job_content_example_batch.json
@@ -0,0 +1,153 @@
+[{
+ "subJobName": "job0001",
+ "dataSources": {
+ "source_id": "HIVE.10001.test_db.test_table",
+ "sink_id": "MYSQL.10002.mask_db.mask_table"
+ },
+ "params": {
+ "sources": [
+ {
+ "config_key": "exchangis.job.ds.params.hive.transform_type",
+ "config_name": "传输方式",
+ "config_value": "二进制",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.partitioned_by",
+ "config_name": "分区信息",
+ "config_value": "2021-07-30",
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.empty_string",
+ "config_name": "空值字符",
+ "config_value": "",
+ "sort": 3
+ }
+ ],
+ "sinks": [
+ {
+ "config_key": "exchangis.job.ds.params.mysql.write_type",
+ "config_name": "写入方式",
+ "config_value": "insert",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.mysql.batch_size",
+ "config_name": "批量大小",
+ "config_value": 1000,
+ "sort": 2
+ }
+ ]
+ },
+ "transforms": {
+ "type": "MAPPING",
+ "mapping": [
+ {
+ "source_field_name": "name",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "c_name",
+ "sink_field_type": "VARCHAR"
+ },
+ {
+ "source_field_name": "year",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "d_year",
+ "sink_field_type": "VARCHAR"
+ }
+ ]
+ },
+ "settings": [
+ {
+ "config_key": "exchangis.datax.setting.speed.byte",
+ "config_name": "传输速率",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.record",
+ "config_name": "脏数据最大记录数",
+ "config_value": 10000,
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.percentage",
+ "config_name": "脏数据占比阈值",
+ "config_value": 100,
+ "sort": 3
+ }
+ ]
+}, {
+ "subJobName": "job0002",
+ "dataSources": {
+ "source_id": "HIVE.10001.superman2_db.funny2_table",
+ "sink_id": "MYSQL.10002.ducky2_db.chicken2_table"
+ },
+ "params": {
+ "sources": [
+ {
+ "config_key": "exchangis.job.ds.params.hive.transform_type",
+ "config_name": "传输方式",
+ "config_value": "二进制",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.partitioned_by",
+ "config_name": "分区信息",
+ "config_value": "2021-07-30",
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.empty_string",
+ "config_name": "空值字符",
+ "config_value": "",
+ "sort": 3
+ }
+ ],
+ "sinks": [
+ {
+ "config_key": "exchangis.job.ds.params.mysql.write_type",
+ "config_name": "写入方式",
+ "config_value": "insert",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.mysql.batch_size",
+ "config_name": "批量大小",
+ "config_value": 1000,
+ "sort": 2
+ }
+ ]
+ },
+ "transforms": {
+ "type": "MAPPING",
+ "mapping": [
+ {
+ "source_field_name": "mid",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "c_mid",
+ "sink_field_type": "VARCHAR"
+ },
+ {
+ "source_field_name": "maxcount",
+ "source_field_type": "INT",
+ "sink_field_name": "c_maxcount",
+ "sink_field_type": "INT"
+ }
+ ]
+ },
+ "settings": [
+ {
+ "config_key": "exchangis.datax.setting.speed.byte",
+ "config_name": "传输速率",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.record",
+ "config_name": "脏数据最大记录数",
+ "config_value": 100,
+ "sort": 2
+ }
+ ]
+}]
\ No newline at end of file
diff --git a/db/job_content_example_stream.json b/db/job_content_example_stream.json
new file mode 100644
index 000000000..264147849
--- /dev/null
+++ b/db/job_content_example_stream.json
@@ -0,0 +1,57 @@
+[{
+ "subJobName": "streamjob0001",
+ "dataSources": {
+ "source_id": "HIVE.10001.test_db.test_table",
+ "sink_id": "MYSQL.10002.mask_db.mask_table"
+ },
+ "params": {},
+ "transforms": {
+ "type": "SQL",
+ "sql": "select * from aaa"
+ },
+ "settings": [
+ {
+ "config_key": "exchangis.datax.setting.speed.byte",
+ "config_name": "传输速率",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.record",
+ "config_name": "脏数据最大记录数",
+ "config_value": 100,
+ "sort": 2
+ }
+ ]
+}, {
+ "subJobName": "streamjob0002",
+ "dataSources": {
+ "source_id": "HIVE.10001.test_db.test_table",
+ "sink_id": "MYSQL.10002.mask_db.mask_table"
+ },
+ "params": {},
+ "transforms": {
+ "type": "SQL",
+ "sql": "insert into xxx"
+ },
+ "settings": [
+ {
+ "config_key": "exchangis.datax.setting.speed.byte",
+ "config_name": "传输速率",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.record",
+ "config_name": "脏数据最大记录数",
+ "config_value": 10000,
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.percentage",
+ "config_name": "脏数据占比阈值",
+ "config_value": 100,
+ "sort": 3
+ }
+ ]
+}]
\ No newline at end of file
diff --git a/docs/en_US/ch1/exchangis_appconn_deploy_en.md b/docs/en_US/ch1/exchangis_appconn_deploy_en.md
new file mode 100644
index 000000000..dc7ed0d83
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_appconn_deploy_en.md
@@ -0,0 +1,90 @@
+# ExchangisAppConn installation documentation
+
+This document describes the deployment, configuration, and installation of ExchangisAppConn in DSS (DataSphere Studio) 1.0.1.
+
+### 1. Preparations for deploying ExchangisAppConn
+Before deploying ExchangisAppConn, please follow the [Exchangis 1.0.0 deployment document](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/zh_CN/ch1/exchangis_deploy_cn.md) to complete the installation of Exchangis 1.0.0 and the other related components, and make sure that the basic functions of the project are available.
+
+### 2. Downloading and compiling the ExchangisAppConn plugin
+#### 1) Download the binary package
+We provide a pre-built ExchangisAppConn package that you can download and use directly. [Click to go to the Release page](https://github.com/WeBankFinTech/Exchangis/releases)
+#### 2) Compile and package
+
+If you want to develop and compile ExchangisAppConn yourself, the steps are as follows:
+1. Clone the Exchangis source code.
+2. In the exchangis-plugins module, find exchangis-appconn and compile it separately:
+
+```
+cd {EXCHANGIS_CODE_HOME}/exchangis-plugins/exchangis-appconn
+mvn clean install
+```
+The exchangis-appconn.zip installation package can then be found at the following path:
+```
+{EXCHANGIS_CODE_HOME}/exchangis-plugins/exchangis-appconn/target/exchangis-appconn.zip
+```
+
+### 3. Overall steps to deploy and configure ExchangisAppConn
+1. Get the packaged exchangis-appconn.zip material package.
+
+2. Place it in the following directory and unzip it:
+
+```
+cd {DSS_Install_HOME}/dss/dss-appconns
+unzip exchangis-appconn.zip
+```
+ The extracted directory structure is:
+```
+conf
+db
+lib
+appconn.properties
+```
+
+3. Execute the script for automated installation:
+
+```shell
+cd {DSS_INSTALL_HOME}/dss/bin
+./install-appconn.sh
+# The script is interactive: you need to enter the string "exchangis" and the IP and port of the Exchangis service to complete the installation.
+# The Exchangis port here refers to the front-end port configured in nginx, not the back-end service port.
+```
+
+### 4. After ExchangisAppConn is installed, restart the DSS services to complete the plugin update.
+
+#### 4.1) Make the deployed AppConn take effect
+Make the AppConn take effect with the DSS start/stop scripts located in {DSS_INSTALL_HOME}/sbin, executing the following commands in turn:
+```
+sh /sbin/dss-stop-all.sh
+sh /sbin/dss-start-all.sh
+```
+If startup fails or gets stuck partway, you can exit and run the scripts again.
+
+#### 4.2) Verify that exchangis-appconn has taken effect
+After exchangis-appconn is installed and deployed, the following steps provide a preliminary check that the installation succeeded.
+1. Create a new project in the DSS workspace
+![image](https://user-images.githubusercontent.com/27387830/169782142-b2fc2633-e605-4553-9433-67756135a6f1.png)
+
+2. Check whether the project is created synchronously on the Exchangis side; if it is, the AppConn was installed successfully
+![image](https://user-images.githubusercontent.com/27387830/169782337-678f2df0-080a-495a-b59f-a98c5a427cf8.png)
+
+For more operations, please refer to the [Exchangis 1.0 User Manual](https://user-images.githubusercontent.com/27387830/169782142-b2fc2633-e605-4553-9433-67756135a6f1.png)
+
+### 5. Exchangis AppConn installation principle
+
+The configuration information of Exchangis is inserted into the following tables. By configuring these tables, you can complete the usage configuration of Exchangis. When Exchangis AppConn is installed, the installation script processes the init.sql shipped with the AppConn and inserts its contents into these tables; a hypothetical sketch of such a statement follows the table below. (Note: if you only need to install the AppConn quickly, you don't need to pay much attention to these fields, since most of the provided init.sql is configured with sensible defaults. Focus on the operations above.)
+
+| Table name | Table function | Remark |
+| :----: | :----: |-------|
+| dss_application | Application table, mainly used to insert the basic information of the Exchangis application | Required |
+| dss_menu | Menu table, which stores the displayed contents, such as icons and names | Required |
+| dss_onestop_menu_application | Associates menus with applications, used for joint lookup | Required |
+| dss_appconn | Basic information of the AppConn, used to load the AppConn | Required |
+| dss_appconn_instance | Information about an AppConn instance, including its own URL information | Required |
+| dss_workflow_node | Information that Exchangis needs to insert as a workflow node | Required |
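+
+As a rough illustration of what such an init.sql contains, the statement below is a hypothetical sketch: the column names are assumptions made for illustration, not the actual DSS schema, which varies by DSS version.
+
+```sql
+-- Hypothetical sketch only; check the init.sql shipped with the AppConn for
+-- the real statements and column names.
+INSERT INTO dss_appconn (appconn_name, reference, appconn_class_path)
+VALUES ('exchangis', NULL, '/appcom/Install/dss/dss-appconns/exchangis-appconn/lib');
+```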
+
+Exchangis implements the first-level specification and the second-level specification of the DSS AppConn integration. The DSS micro-services in the following table interact with the Exchangis AppConn.
+
+| Micro-service name | Function | Remark |
+| :----: | :----: |-------|
+| dss-framework-project-server | Uses exchangis-appconn to create projects and keep the project organization unified | Required |
+| dss-workflow-server | Uses the scheduling AppConn to achieve workflow publishing and status acquisition | Required |
diff --git a/docs/en_US/ch1/exchangis_datasource_en.md b/docs/en_US/ch1/exchangis_datasource_en.md
new file mode 100644
index 000000000..7dabbef01
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_datasource_en.md
@@ -0,0 +1,304 @@
+# DataSource1.0
+
+## 1、Background
+
+Earlier versions of **Exchangis 0.x** and **Linkis 0.x** already had integrated data source modules. Taking **linkis-datasource** as the blueprint (please refer to the related documents), the data source module has been reconstructed.
+
+## 2、Overall architecture design
+
+In order to build a common data source module, the data source module is divided into two parts: **datasource-client** and **datasource-server**. The server part is placed in the **linkis-datasource** module of **Linkis 1.0** and contains the core service logic; the client part is placed under the **exchangis-datasource** module of **Exchangis 1.0** and contains the client-side calling logic. The overall architecture is as follows:
+
+![linkis_datasource_structure](../../../images/zh_CN/ch1/datasource_structure.png)
+
+
+Figure 2-1 Overall Architecture Design
+
+
+## 3、Detailed explanation of modules
+
+### 3.1 datasource-server
+
+**datasource-server**: as the name implies, the module that hosts the core services. It follows the original architecture of **linkis-datasource** (split into **datasourcemanager** and **metadatamanager**)
+
+### 3.2 linkis-datasource
+
+Schematic diagram of the current architecture:
+
+![linkis_datasource_structure](../../../images/zh_CN/ch1/linkis_datasource_structure.png)
+
+
+Figure 3-1 Schematic diagram of current architecture
+
+
+As shown in the figure above, **linkis-datasource** decouples the related functions of data sources: the basic information part is managed by **datasourcemanager**, and the metadata part is managed by **metadatamanager**. The two sub-modules call each other through RPC requests, and each provides a Restful entrance to the outside. External service requests are uniformly forwarded by **linkis-gateway** before reaching the corresponding services. Furthermore, **metadatamanager** connects to the **service** sub-modules of the different data sources in order to plug in third-party metadata management platforms. Each sub-module has its own implementation of the metadata acquisition interface, such as **service/hive, service/elasticsearch and service/mysql**
+
+#### 3.2.1 New requirements
+
+##### Front-end interface requirements
+
+The original **linkis-datasource** did not include a front-end interface, so the original data source interface design of **Exchangis 1.0** is now merged in. See the **UI document** and the **front-end interaction document** for details. The requirements involved are described below:
+
+- Type of datasource-list acquisition [data source management]
+
+Description:
+
+Get all data source types accessed and show them
+
+- Datasource environment-list acquisition [data source management]
+
+Description:
+
+Get the preset data source environment parameters in the background and display them as a list
+
+- Add/Modify Datasource-Label Settings [Data Source Management]
+
+Description:
+
+Set the label information of the datasource
+
+- Connectivity detection [datasource management]
+
+Description:
+
+Check the connectivity of an integrated data source; triggered by clicking the connectivity detection button in the data source list
+
+- Add/Modify Datasource-Configure and Load [Datasource Management]
+
+Description:
+
+In order to facilitate the introduction of new data sources or the attribute expansion of existing data sources, the form configuration of new/modified data sources is planned to adopt the method of background storage+front-end loading. The background will save the type, default value, loading address and simple cascading relationship of each attribute field, and the front-end will generate abstract data structures according to these, and then convert them into DOM operations.
+
+Process design:
+
+1. The user selects the datasource type, and the front end requests the background for the attribute configuration list of the data source with the datasource type as the parameter;
+
+2. When the front end gets the configuration list, it first judges the type, selects the corresponding control, then sets the default value and refreshes the interface DOM;
+
+3. After the basic configuration information is loaded and rendered, the values are preloaded and the cascading relationship is established;
+
+4. The configuration is completed, waiting for the user to fill it.
+
+ Associated UI:
+
+![datasource_ui](../../../images/zh_CN/ch1/datasource_ui.png)
+
+
+
+- Batch Processing-Batch Import/Export [Datasource Management]
+
+Description:
+
+Batch import and export of datasource configuration.
+
+##### Back-end requirements
+
+At present, the back end of **linkis-datasource** has integrated the CRUD operation logic for data sources; label- and version-related content is now added:
+
+- datasource permission setting [datasource management]
+
+Description:
+
+The back end needs to integrate with the labeling function of Linkis 1.0 and associate datasources with labels.
+
+Process design:
+
+1. Users are allowed to set labels on datasources when they create and modify them;
+
+2. When saving, the tag information is sent to the back end as a character list, and the back end converts the tag characters into tag entities, and inserts and updates the tag;
+
+3. Save the datasource and establish the connection between the datasource and the label.
+
+- datasource version function [datasource management]
+
+Description:
+
+Add the concept of versions to datasources, which supports publishing and updating. When updating, a new version is added by default; when publishing, the datasource information of the version to be published overwrites the latest version and is marked as published.
+
+#### 3.2.2 Detailed design
+
+Make some modifications and extensions to the entity objects contained in **linkis-datasource**, as follows:
+
+| Class Name | Role |
+| -------------------------------- | ------------------------------------------------------------ |
+| DataSourceType | Indicates the type of data source |
+| DataSourceParamKeyDefinition | Declare data source attribute configuration definition |
+| DataSourceScope[Add] | There are usually three fields for marking the scope of datasource attributes: datasource, data source environment and default (all) |
+| DataSource | Datasource entity class, including label and attribute configuration definitions |
+| DataSourceEnv | Datasource environment entity class, which also contains attribute configuration definitions |
+| DataSourcePermissonLabel[Delete] | |
+| DataSourceLabelRelation[Add] | Represents the relationship between datasources and permission labels |
+| VersionInfo[Add] | Version information, including datasource version number information |
+
+2.1 Among them, **DataSourceParamKeyDefinition** keeps its original structure and adds some attributes to support interface rendering. The detailed structure is as follows:
+
+| **Field name** | **Field type** | **Remark** |
+| -------------- | -------------- | ------------------------------------------------------------ |
+| id | string | persistent ID |
+| key | string | attribute name keyword |
+| description | string | description |
+| name | string | attribute display name |
+| defaultValue | string | attribute default value |
+| valueType | string | attribute value type |
+| require | boolean | is it a required attribute |
+| refId | string | another attribute ID of the cascade |
+| dataSrcTypId | string | the associated data source type ID |
+| refMap[Add] | string | cascading relation table, format should be as follows: value1=refValue1, value2=refValue2 |
+| loadUrl[Add] | string | upload URL, which is empty by default |
+
+2.2 The **DataSource** structure is similar, but it contains label information
+
+| **Field name** | **Field type** | **Remark** |
+| ---------------- | -------------- | ------------------------------------------------------------ |
+| serId | string | persistent ID |
+| id | string | system ID |
+| versions[Add] | list-obj | The associated version VersionInfo list |
+| srcVersion[Add] | string | Source version, indicating from which version the data source was created |
+| datSourceName | string | Data source name |
+| dataSourceDesc | string | Description of data source |
+| dataSourceTypeId | integer | Data source type ID |
+| connectParams | map | Connection parameter dictionary |
+| parameter | string | Connection attribute parameters |
+| createSystem | string | The creating system, usually empty or "exchangis" |
+| dataSourceEnvId | integer | The associated data source environment ID |
+| keyDefinitions | list-object | List of associated attribute configuration definitions |
+| labels[Add] | map | Label strings |
+| readOnly[Add] | boolean | Whether it is a read-only data source |
+| expire[Add] | boolean | Whether it is expired |
+| isPub[Add] | boolean | Whether it is published |
+
+2.3 **VersionInfo** version information. Different versions of data sources mainly have different connection parameters. The structure is as follows:
+
+| **Field name** | **Field type** | **Remark** |
+| -------------- | -------------- | ----------------------------- |
+| version | string | version number |
+| source | long | The associated data source ID |
+| connectParams | map | Version parameter dictionary |
+| parameter | string | Version parameter string |
+
+2.4 **DataSourceType** and **DataSourceEnv** are roughly the same as the original classes, except that **DataSourceType** adds a **classifier** field to classify different datasource types; the rest will not be described.
+
+The main service processing classes of **datasource-server** are as follows:
+
+| **Interface name** | **Interface role** | **Single implementation** |
+| ------------------------------- | ------------------------------------------------------------ | ---------------------- |
+| DataSourceRelateService | The operation of declaring datasource association information includes enumerating all datasource types and attribute definition information under different types | Yes |
+| DataSourceInfoService | Declare the basic operation of datasource/datasource environment | Yes |
+| MetadataOperateService | Declare the operation of datasource metadatasource, which is generally used for connection test | Yes |
+| BmlAppService | Declare the remote call to BML module to upload/download the key file of datasource | Yes |
+| DataSourceVersionSupportService | Declare the operations supported by multiple versions of the datasource | Yes |
+| MetadataAppService[Old] | Declare operations on metadata information | Yes |
+| DataSourceBatchOpService[Add] | Declare batch operations on datasources | Yes |
+| MetadataDatabaseService[Add] | Declare operations on metadata information of database classes | Yes |
+| MetadataPropertiesService[Add] | Operation of declaring metadata information of attribute class | Yes |
+
+### 3.3 datasource-client
+
+**datasource-client**: contains the client-side calling logic, which can operate the data source and obtain relevant metadata in the client-side way.
+
+#### 3.3.1 Related demand
+
+##### Back-end requirements
+
+As the requesting client, **datasource-client** has no front-end interface requirements, and its back-end requirements are relatively simple: build a stable, retryable and traceable client that directly interfaces with all interfaces supported by the server and supports as many access modes as possible.
+
+#### 3.3.2 Detailed design
+
+Its organizational structure is generally designed as follows:
+
+![datasource_client_scructure](../../../images/zh_CN/ch1/datasource_client_scructure.png)
+
+
+Figure 3-4 Detailed Design of datasource-client
+
+
+The class/interface information involved is as follows:
+
+| Class/interface name | Class/interface role | Single implementation |
+| ----------------------------- | ------------------------------------------------------------ | ------------------ |
+| RemoteClient | The top-level interface of the Client declares the common interface methods of initialization, release and basic permission verification | No |
+| RemoteClientBuilder | Builder class for the Client, constructing it according to different implementation classes of RemoteClient | Yes |
+| AbstractRemoteClient | Abstract implementation of RemoteClient, involving logic such as retry, statistics and caching | Yes |
+| DataSourceRemoteClient | Declare all operation portals of the data source client | No |
+| MetaDataRemoteClient | Declare all operation portals of metadata client | No |
+| LinkisDataSourceRemoteClient | Datasource client implementation of linkis-datasource | Yes |
+| LinkisMetaDataRemoteClient | Metadata client implementation of linkis-datasource | Yes |
+| MetadataRemoteAccessService | Declare the interface of the bottom layer to access the remote third-party metadata service. | Yes |
+| DataSourceRemoteAccessService | Declare the interface of the bottom layer to access the remote third-party datasource service | Yes |
+
+The class relationship group diagram is as follows:
+
+![datasource_client_class_relation](../../../images/zh_CN/ch1/datasource_client_class_relation.png)
+
+
+Figure 3-5 datasource-client Class Relationship Group Diagram
+
+
+##### Process sequence diagram:
+
+Next, combining all modules, the calling relationships between interfaces/classes in the business process are described in detail:
+
+- Create datasource
+
+Focus:
+
+1. Before creating a datasource, you need to pull the datasource type list and the attribute configuration definition list of the corresponding type; in some cases, you also need to pull the datasource environment list;
+
+2. There are two scenarios for creating datasources, one is created through the interface of **linkis-datasource**, and the other is created through the datasource-client of **exchangis**;
+
+3. Datasource types, attribute configuration definitions and datasource environments can be added manually in the back-end database; currently there is no dynamic configuration interface (to be provided).
+
+Now look at the timing diagram of creating a data source:
+
+![datasource_client_create](../../../images/zh_CN/ch1/datasource_client_create.png)
+
+
+Figure 3-6 Sequence diagram of creating a datasource
+
+
+Continue to look at creating data source interface through **datasource-client**:
+
+![datasource_client_create2](../../../images/zh_CN/ch1/datasource_client_create2.png)
+
+
+Figure 3-7 Sequence diagram of creating a datasource through datasource-client
+
+
+Some auxiliary methods, such as client connection authentication, request recording and life-cycle monitoring, are omitted in the figure above to keep the overall calling process simple.
+
+- Update datasource
+
+Focus:
+
+1. There are two ways to update: version update and ordinary update. Version update will generate a new version of datasource (which can be deleted or published), while ordinary update will overwrite the current datasource and will not generate a new version;
+
+2. Only the creator of the datasource and administrator users can update and publish the datasource.
+
+![datasource_client_update](../../../images/zh_CN/ch1/datasource_client_update.png)
+
+
+
+- Query datasource
+
+Focus:
+
+1. When you get the datasource list through datasource-client, you need to attach the operating-user information for permission filtering of the datasources.
+
+Database design:
+
+![datasource_client_query](../../../images/zh_CN/ch1/datasource_client_query.png)
+
+
+
+Interface design: (refer to the existing interfaces of linkis-datasource; to be supplemented)
\ No newline at end of file
diff --git a/docs/en_US/ch1/exchangis_deploy_en.md b/docs/en_US/ch1/exchangis_deploy_en.md
new file mode 100644
index 000000000..42c3593f5
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_deploy_en.md
@@ -0,0 +1,280 @@
+## Foreword
+
+Exchangis installation is mainly divided into the following four steps:
+
+1. Preparation of the environment Exchangis depends on
+2. Exchangis installation and deployment
+3. DSS ExchangisAppConn installation and deployment
+4. Linkis Sqoop engine installation and deployment
+
+## 1. Preparation of the environment Exchangis depends on
+
+#### 1.1 Basic software installation
+
+| Dependent component | Required | Installation reference |
+| -------------- | ------ | --------------- |
+| MySQL (5.5+) | yes | [How to install mysql](https://www.runoob.com/mysql/mysql-install.html) |
+| JDK (1.8.0_141) | yes | [How to install JDK](https://www.runoob.com/java/java-environment-setup.html) |
+| Hadoop(2.7.2,Other versions of Hadoop need to compile Linkis by themselves.) | yes | [Hadoop stand-alone deployment](https://linkis.apache.org/zh-CN/docs/latest/deployment/quick_deploy) ;[Hadoop distributed deployment](https://linkis.apache.org/zh-CN/docs/latest/deployment/quick_deploy) |
+| Hive(2.3.3,Other versions of Hive need to compile Linkis by themselves.) | yes | [Hive quick installation](https://linkis.apache.org/zh-CN/docs/latest/deployment/quick_deploy) |
+| SQOOP (1.4.6) | yes | [How to install Sqoop](https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html) |
+| DSS1.0.1 | yes | [How to install DSS](https://github.com/WeBankFinTech/DataSphereStudio-Doc/blob/main/zh_CN/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/DSS%E5%8D%95%E6%9C%BA%E9%83%A8%E7%BD%B2%E6%96%87%E6%A1%A3.md) |
+| Linkis1.1.0 | yes | [How to install Linkis](https://linkis.apache.org/zh-CN/docs/latest/deployment/quick_deploy) |
+| Nginx | yes | [How to install Nginx](http://nginx.org/en/linux_packages.html) |
+
+Underlying component check
+
+$\color{#FF0000}{Note: be sure to reinstall DSS 1.0.1, and the Linkis version must be greater than 1.1.1. Please recompile Linkis and use the package released on June 15th}$
+
+[linkis1.1.1 code address ](https://github.com/apache/incubator-linkis/tree/release-1.1.1)
+
+[DSS1.0.1 code address ](https://github.com/WeBankFinTech/DataSphereStudio/tree/master)
+
+Enable the datasource services
+
+By default, the two datasource-related services (ps-data-source-manager, ps-metadatamanager) are not started by the Linkis startup script. To use the datasource services, enable them by setting `export ENABLE_METADATA_MANAGER=true` in `$LINKIS_CONF_DIR/linkis-env.sh`. The datasource services will then be started and stopped together with linkis-start-all.sh / linkis-stop-all.sh. For more details about data sources, please refer to [Data Source Function Usage](https://linkis.apache.org/zh-CN/docs/1.1.0/deployment/start_metadatasource)
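+
+A minimal sketch of the change (script locations assume a standard Linkis layout; adjust to your installation):
+
+```shell script
+# Enable the datasource services in the Linkis environment file
+# (set in $LINKIS_CONF_DIR/linkis-env.sh):
+#   export ENABLE_METADATA_MANAGER=true
+
+# Then restart Linkis so ps-data-source-manager and ps-metadatamanager start too
+sh $LINKIS_HOME/sbin/linkis-stop-all.sh
+sh $LINKIS_HOME/sbin/linkis-start-all.sh
+```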
+
+#### 1.2 Create Linux users
+
+Please keep the deployment user of Exchangis consistent with that of Linkis, for example, the deployment user is hadoop account.
+
+#### 1.3 Add a dedicated token for Exchangis in Linkis
+
+Execute the following SQL statement in the Linkis database to add the token:
+
+```sql
+INSERT INTO `linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`) VALUES ('EXCHANGIS-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
+```
+
+Insert the hive data source environment configuration by executing the following SQL statements in the Linkis database. Note that `${HIVE_METADATA_IP}` and `${HIVE_METADATA_PORT}` need to be replaced before execution, for example `${HIVE_METADATA_IP}=127.0.0.1`, `${HIVE_METADATA_PORT}=3306`:
+
+```sql
+INSERT INTO `linkis_ps_dm_datasource_env` (`env_name`, `env_desc`, `datasource_type_id`, `parameter`, `create_time`, `create_user`, `modify_time`, `modify_user`) VALUES ('开发环境SIT', '开发环境SIT', 4, '{"uris":"thrift://${HIVE_METADATA_IP}:${HIVE_METADATA_PORT}", "hadoopConf":{"hive.metastore.execute.setugi":"true"}}', now(), NULL, now(), NULL);
+INSERT INTO `linkis_ps_dm_datasource_env` (`env_name`, `env_desc`, `datasource_type_id`, `parameter`, `create_time`, `create_user`, `modify_time`, `modify_user`) VALUES ('开发环境UAT', '开发环境UAT', 4, '{"uris":"thrift://${HIVE_METADATA_IP}:${HIVE_METADATA_PORT}", "hadoopConf":{"hive.metastore.execute.setugi":"true"}}', now(), NULL, now(), NULL);
+```
+
+#### 1.4 Underlying component checking
+
+Please ensure that DSS 1.0.1 and Linkis 1.1.0 are basically available: HiveQL scripts can be executed in the DSS front-end interface, and DSS workflows can be created and executed normally.
+
+## 2. Exchangis installation and deployment
+
+### 2.1 Prepare installation package
+
+#### 2.1.1 Download binary package
+
+Download the latest installation package from the Exchangis releases page: [click to jump to the release interface](https://github.com/WeBankFinTech/Exchangis/releases).
+
+#### 2.1.2 Compile and package
+
+ Execute the following command in the root directory of the project:
+
+```shell script
+ mvn clean install
+```
+
+ After successful compilation, the installation package will be generated in the `assembly-package/target` directory of the project.
+
+### 2.2 Unzip the installation package
+
+ Execute the following command to decompress:
+
+```shell script
+ tar -zxvf wedatasphere-exchangis-{VERSION}.tar.gz
+```
+
+ The directory structure after decompression is as follows:
+
+```html
+|-- config:One-click installation deployment parameter configuration directory
+|-- db:Database initialization SQL directory
+|-- exchangis-extds
+|-- packages:Exchangis installation package directory
+|-- sbin:Script storage directory
+```
+
+### 2.3 Modify configuration parameters
+
+```shell script
+ vim config/config.sh
+```
+
+```shell script
+#IP of LINKIS_GATEWAY service address, which is used to find linkis-mg-gateway service.
+LINKIS_GATEWAY_HOST=
+
+#The LINKIS_GATEWAY service address port is used to find linkis-mg-gateway service.
+LINKIS_GATEWAY_PORT=
+
+#The URL of LINKIS_GATEWAY service address is composed of the above two parts.
+LINKIS_SERVER_URL=
+
+#Token used for request verification with the Linkis service, which can be obtained from ${LINKIS_INSTALL_HOME}/conf/token.properties in the Linkis installation directory.
+LINKIS_TOKEN=
+
+#Eureka service port
+EUREKA_PORT=
+
+#Eureka service URL
+DEFAULT_ZONE=
+```
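+
+For reference, a hedged set of example values (assuming the Linkis gateway on 127.0.0.1:9001 and Eureka on port 20303, common Linkis defaults, and the EXCHANGIS-AUTH token inserted in section 1.3; replace with your actual values):
+
+```shell script
+LINKIS_GATEWAY_HOST=127.0.0.1
+LINKIS_GATEWAY_PORT=9001
+LINKIS_SERVER_URL=http://127.0.0.1:9001
+LINKIS_TOKEN=EXCHANGIS-AUTH
+EUREKA_PORT=20303
+DEFAULT_ZONE=http://127.0.0.1:20303/eureka/
+```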
+
+### 2.4 Modify database configuration
+
+```shell script
+ vim config/db.sh
+```
+
+```shell script
+# Set the connection information of the database.
+# Include IP address, port, user name, password and database name.
+MYSQL_HOST=
+MYSQL_PORT=
+MYSQL_USERNAME=
+MYSQL_PASSWORD=
+DATABASE=
+```
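+
+For example (placeholder values; use your real MySQL connection information and the database initialized for Exchangis):
+
+```shell script
+MYSQL_HOST=127.0.0.1
+MYSQL_PORT=3306
+MYSQL_USERNAME=exchangis
+MYSQL_PASSWORD=exchangis
+DATABASE=exchangis
+```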
+
+### 2.5 Installation and startup
+
+#### 2.5.1 Execute one-click installation script.
+
+ Execute the install.sh script to complete the one-click installation and deployment:
+
+```shell script
+ sh sbin/install.sh
+```
+
+#### 2.5.2 Installation step
+
+ This script is an interactive installation. After executing the install.sh script, the installation steps are divided into the following steps:
+
+1. Initialize database tables.
+
+   When the prompt appears: Do you want to configure and install project?
+
+   Enter `y` to start installing the Exchangis service, or `n` to skip it.
+
+#### 2.5.3 Start service
+
+Execute the following command to start Exchangis Server:
+
+```shell script
+ sh sbin/daemon.sh start server
+```
+
+ You can also use the following command to restart Exchangis Server:
+
+```shell script
+./sbin/daemon.sh restart server
+```
+
+After executing the startup script, the following prompt will appear, and the Eureka address will also be printed to the console when the service starts:
+
+![企业微信截图_16532930262583](https://user-images.githubusercontent.com/27387830/169773764-1c5ed6fb-35e9-48cb-bac8-6fa7f738368a.png)
+
+### 2.6 Check whether the service started successfully.
+
+You can check whether the services started successfully on the Eureka page, as follows:
+
+Open http://${EUREKA_INSTALL_IP}:${EUREKA_PORT} in a browser (Chrome is recommended) and check whether the services registered successfully.
+
+As shown in the figure below:
+
+![补充Eureka截图](../../../images/zh_CN/ch1/eureka_exchangis.png)
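+
+Alternatively, a quick command-line check against Eureka's REST endpoint (a sketch; /eureka/apps is the standard Eureka REST API):
+
+```shell script
+# List registered applications and look for the Exchangis server
+curl -s http://${EUREKA_INSTALL_IP}:${EUREKA_PORT}/eureka/apps | grep -i exchangis
+```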
+
+### 2.7 Front-end installation and deployment
+
+#### 2.7.1 Get the front-end installation package
+
+Exchangis provides a compiled front-end installation package by default, which can be downloaded and used directly: [Click to jump to the Release interface](https://github.com/WeBankFinTech/Exchangis/releases)
+
+You can also compile the front end yourself by executing the following commands in the Exchangis root directory:
+
+```shell script
+ cd web
+ npm i
+ npm run build
+```
+
+Get the compiled dist.zip front-end package from the `web/` path.
+
+The front-end package can be placed anywhere on the server; it is recommended to keep it in the same directory as the back-end installation, place it there and unzip it.
+
+#### 2.7.2 Front-end installation deployment
+
+1. Decompress front-end installation package
+
+   If you plan to deploy the Exchangis front-end package to the directory `/appcom/Install/exchangis/web`, please copy `dist.zip` to that directory and extract it:
+
+```shell script
+ # Please copy the Exchangis front-end package to the `/appcom/Install/exchangis/web` directory first.
+ cd /appcom/Install/exchangis/web
+ unzip dist.zip
+```
+
+ Execute the following command:
+
+```shell script
+ vim /etc/nginx/conf.d/exchangis.conf
+```
+
+```
+ server {
+ listen 8098; # Access port If this port is occupied, it needs to be modified.
+ server_name localhost;
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+ location /dist {
+ root /appcom/Install/exchangis/web; # Exchangis front-end deployment directory
+ autoindex on;
+ }
+
+ location /api {
+ proxy_pass http://127.0.0.1:9020; # The address of the back-end Linkis gateway; modify it as needed.
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header x_real_ipP $remote_addr;
+ proxy_set_header remote_addr $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_http_version 1.1;
+ proxy_connect_timeout 4s;
+ proxy_read_timeout 600s;
+ proxy_send_timeout 12s;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection upgrade;
+ }
+
+ #error_page 404 /404.html;
+ # redirect server error pages to the static page /50x.html
+ #
+ error_page 500 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+ }
+```
+
+#### 2.7.3 Start nginx and visit the front page
+
+ After the configuration is complete, use the following command to refresh the nginx configuration again:
+
+```shell script
+ nginx -s reload
+```
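+
+Optionally, you can validate the configuration file before reloading (standard nginx usage):
+
+```shell script
+ nginx -t
+```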
+
+Please visit the Exchangis front-end page at http://${EXCHANGIS_INSTALL_IP}:8098/#/projectManage. If the following interface appears, the Exchangis front end was installed successfully. To actually try Exchangis, you also need to install DSS and Linkis and log in password-free through DSS, as shown in the following figure:
+
+![image](https://user-images.githubusercontent.com/27387830/170417473-af0b4cbe-758e-4800-a58f-0972f83d87e6.png)
+
+## 3. DSS ExchangisAppConn installation and deployment
+
+If you want to use the Exchangis 1.0.0 front end, you also need to install the DSS ExchangisAppConn plugin. Please refer to: [ExchangisAppConn plugin installation documentation](exchangis_appconn_deploy_cn.md)
+
+## 4. Linkis Sqoop engine installation and deployment
+
+If you want to execute Sqoop jobs of Exchangis 1.0.0 normally, you also need to install the Linkis Sqoop engine. Please refer to: [Linkis Sqoop engine installation documentation](exchangis_sqoop_deploy_cn.md)
+
+## 5. How to log in and use Exchangis
+
+ To be supplemented !
diff --git a/docs/en_US/ch1/exchangis_job_execute_en.md b/docs/en_US/ch1/exchangis_job_execute_en.md
new file mode 100644
index 000000000..b6e62bec8
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_job_execute_en.md
@@ -0,0 +1,201 @@
+# Exchangis synchronous job execution module detailed design document
+
+## 1. Overall flow chart
+
+ ![img](../../../images/zh_CN/ch1/job_overall.png)
+
+
+Figure 1-1 General flow chart
+
+
+ Please note that:
+
+1. If the user directly submits a JSON of the synchronization task to be executed through the REST client, the JSON can be directly submitted to the TaskGenerator without step 2.
+
+2. Every time the front end or REST client submits a job, a jobExecutionId is generated and returned; the front end or REST client then obtains the execution status of the synchronization Job through this jobExecutionId (a request sketch follows this list).
+
+3. The jobExecutionId is best generated and returned at submission time, which means the TaskGenerator should run asynchronously. The TaskGenerator may take several seconds to several minutes (depending on the number of subJobs), so if you wait for it to finish before returning the jobExecutionId, the front-end request will probably time out.
+
+4. A jobExecutionId is generated for each submission precisely to support repeated submission of the same ExchangisJob. Therefore, in principle, the JobServer will not enforce that only one instance of an ExchangisJob executes at a time; instead, the Web front end should ensure that, within the same browser, only one instance of the same ExchangisJob executes at a time.
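+
+From a REST client's point of view, this interaction looks roughly as follows (endpoints as documented in the job execution interface document; gateway address, port and ids are placeholders):
+
+```shell
+# Submit ExchangisJob 55 for execution; the response carries a jobExecutionId
+curl -X POST "http://${GATEWAY_HOST}:${GATEWAY_PORT}/api/rest_j/v1/exchangis/job/55/execute"
+
+# Poll the execution status with the returned jobExecutionId
+curl "http://${GATEWAY_HOST}:${GATEWAY_PORT}/api/rest_j/v1/exchangis/job/execution/${JOB_EXECUTION_ID}/status"
+```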
+
+## 2. Front-end and back-end interaction
+
+![img](../../../images/zh_CN/ch1/job_frontend_backend.png)
+
+
+Figure 2-1 Interaction between foreground and background
+
+
+### 1. The necessity of jobExecutionId
+
+Consider the scenario where a REST client directly submits the JSON of a synchronization task to be executed; to support repeated submission of the same ExchangisJob, a jobExecutionId must be generated for each submission.
+
+The jobExecutionId is the execution voucher of an ExchangisJob and is stored in the database. All subsequent requests about this execution of the ExchangisJob must carry the jobExecutionId.
+
+**The necessity of asynchronous TaskGenerator**
+
+Consider another scenario: the client hangs up after submitting the job but before Exchangis returns the jobExecutionId. Because the jobExecutionId was never printed in the client's log, the submitting user believes the job was not submitted successfully, which may lead to data confusion. Furthermore, the TaskGenerator may take a long time to process an ExchangisJob (depending on the number of subJobs), so waiting for it to finish before returning the jobExecutionId would probably make the front-end request time out.
+
+Therefore, once the JobServer receives a job execution request, it should immediately generate a jobExecutionId, create an execution record for this ExchangisJob in the database with status Inited, and, as soon as the record is persisted successfully, asynchronously submit the TaskGenerator task and return the jobExecutionId right away.
+
+### 2. Statelessness of JobServer
+
+This section discusses whether the JobServer is stateless, i.e., whether the front end, after getting the jobExecutionId, can obtain the desired execution data no matter which JobServer instance it requests.
+
+Since no special information is kept in the JobServer's memory, and the ExchangisJob execution status, progress and Metrics information are all stored in the database, a front-end request only needs to fetch the relevant data from the database. Therefore, the JobServer is stateless.
+
+### 3. Multi-tenant function
+
+Considering multi-tenant capability, JobGenerator and JobExecution can be split: the JobGenerator receives job execution requests submitted by front-end/REST clients in a distributed manner, generates the task sets and stores them in the database, and this micro-service can be shared by all tenants; JobExecution, on the other hand, can be partitioned by tenant, so that tenants do not affect each other during execution.
+
+### 4. High availability
+
+The TaskChooseRuler of JobExecution scans all ExchangisTasks in the database. If the status of an ExchangisTask has not been updated for longer than a given period, it will be taken over by a new JobServer.
+
+How to take over?
+
+A simple takeover means that all other surviving JobServers load this ExchangisTask into their TaskScheduler at the same time. Since they only update progress, status and Metrics information, having several JobServers update them simultaneously has no impact on the task itself.
+
+A complex takeover requires adding a field to the ExchangisTask database table to identify the JobServer currently executing the ExchangisTask; multiple JobServers would then be triggered to compete for ownership of the ExchangisTask. Because this scheme is complex, it is not considered for the time being.
+
+## 3. Front-end interaction details
+
+### 1. Submit
+
+Before execution, the page is shown in the following figure:
+
+As the execution interface (with a link to the submission interface attached) requires a jobId, the job must be saved before it is actually submitted for execution; a basic check is also made before submission: if the job has no subtasks or has not been saved successfully, it cannot be submitted for execution.
+
+![img](../../../images/zh_CN/ch1/job_frontend_1.png)
+
+
+Figure 3-1 Task submission
+
+
+Click execute, as shown in the figure below:
+
+Note that the job information desk will pop up at this moment, showing the running status by default, that is, the overall progress and the progress of all subtasks.
+
+Two interfaces are used by the front end here. First, the [Execute Job] interface is called to submit the ExchangisJob for execution, and the back end returns the jobExecutionId; second, the [Get Job Progress] interface is called with the jobExecutionId to obtain the progress information of the Job and all its tasks and render the progress shown on the following pages.
+
+![img](../../../images/zh_CN/ch1/job_frontend_2.png)
+
+
+Figure 3-2 Task Execution
+
+
+### 2. Operation status of subtasks
+
+When the user clicks a running/completed sub-job, the front end calls the back-end [Get Task Metrics] interface, obtains the task Metrics information through jobExecutionId & taskId, and shows the following page:
+
+![1655260735321](../../../images/zh_CN/ch1/job_frontend_3.png)
+
+
+Figure 3-3 Operation of subtasks
+
+
+It shows the main resource usage, traffic rate and core indicators.
+
+![1655260937221](../../../images/zh_CN/ch1/job_frontend_4.png)
+
+
+Figure 3-4 Resource usage of subtasks
+
+
+### 3. Real-time log
+
+When the user clicks the "Log" button in the lower right corner as shown in the figure below, the "Real-time Log" Tab will appear at the information desk, and the real-time log of Job will be displayed by default. When you click the "Log" button of the running status, the running log of the whole Job will be displayed by default at first. At this time, the front end will call the interface of "Get Job Real-time Log" by default, and get the job log through jobExecutionId and display it, as shown in the following figure:
+
+![img](../../../images/zh_CN/ch1/job_frontend_5.png)
+
+
+Figure 3-5 Task Real-time Log
+
+
+As long as the user does not switch to another tab of the information desk, the front end keeps polling the back end for real-time logs.
+
+The user can also select the log of a specific task through the select box; the front end then calls the [Get Task Real-time Log] interface, fetches the task log through jobExecutionId & taskId, and keeps polling for the latest log.
+
+If the user switches the select box, the previous log is no longer refreshed.
+
+It should be noted here that the back end also provides a [Get Task List of this Job Execution] interface to help the front end obtain all tasks and populate the select box. If the Job itself is still in the Inited or Scheduled state and has not yet turned Running, the task list cannot be pulled; in that case, when the user opens the select box, they should be prompted: "The Job is still being scheduled. Please check the real-time logs of subtasks after the Job enters the Running state."
+
+After execution completes, if the status is successful, the tab switches back to the running-status tab; if the status is failed, based on the information returned by the [Get Job Progress] interface, the log of the failed sub-job's task is displayed by default, and when multiple tasks fail, the log of the first failed task is shown automatically.
+
+## 4. Back-end design details
+
+### 1. Table structure design
+
+![img](../../../images/zh_CN/ch1/job_backend_datasource_design.png)
+
+
+Figure 4-1 Database Table Structure Design
+
+
+### 2. Interface document
+
+Please refer to the interface document of Exchangis job execution module for details.
+
+### 3. Core module & Core class design
+
+#### 3.1 The UML class diagram of the Bean is as follows:
+
+![img](../../../images/zh_CN/ch1/job_backend_uml_1.png)
+
+
+
+
+Figure 4-2 UML class diagram of entity Bean
+
+
+Please note that all non-interface classes ending in Entity need to be stored in the database and exist as tables.
+
+#### 3.2 The UML class diagram structure of TaskGenerator is as follows:
+
+![img](../../../images/zh_CN/ch1/job_backend_uml_2.png)
+
+
+Figure 4-3 UML class diagram of TaskGenerator
+
+
+The TaskGenerator is only responsible for converting the JSON of a Job into a task set that can be submitted to Linkis for execution (that is, all subJobs under the Job are translated into an ExchangisTask set) and writing the result into the DB.
+
+It should be noted that the TaskGenerator runs asynchronously: a JobGenerationSchedulerTask is encapsulated in the Service layer and submitted asynchronously to TaskExecution for execution.
+
+#### 3.3 The UML class diagram structure of TaskExecution system is as follows:
+
+![img](../../../images/zh_CN/ch1/job_backend_uml_3.png)
+
+
+Figure 4-4 UML class diagram of Task Execution system
+
+
+1. TaskExecution is mainly composed of TaskConsumer, TaskManager, TaskScheduler and TaskSchedulerLoadBalancer.
+
+2. The TaskManager is mainly used to manage all ExchangisTasks in the Running state under this JobServer;
+
+3. The TaskConsumer consists of several thread groups with different functions, such as NewTaskConsumer and ReceiveTaskConsumer. The NewTaskConsumer fetches from the database all executable ExchangisTasks in the Inited state (possibly the ExchangisTasks of multiple subJobs across multiple Jobs) and submits them in batches to the TaskScheduler according to the TaskScheduler's actual load; before submission, the status of each task in the database is updated to Scheduled. The ReceiveTaskConsumer takes over ExchangisTasks that are already running but whose status and Metrics information have not been updated for a certain period, and puts them into the TaskManager, where StatusUpdateSchedulerTask and MetricsUpdateSchedulerTask will update their status. The TaskChooseRuler is a rule component that helps the TaskConsumer filter and select the required ExchangisTasks, e.g., rules judging whether an ExchangisTask can be taken over, priority strategies, and so on.
+
+4. The TaskScheduler is a thread pool for scheduling various types of SchedulerTasks. The SubmitSchedulerTask asynchronously submits tasks to Linkis for execution and writes the key information returned by Linkis, such as the Id and ECM information, into the DB. StatusUpdateSchedulerTask and MetricsUpdateSchedulerTask are permanent polling tasks that never stop: they constantly take the SchedulerTasks already in the Running state from the TaskManager, periodically request status and Metrics information from Linkis, and update the database.
+
+5. The TaskSchedulerLoadBalancer is a load balancer that monitors in real time the polling of Running tasks in the TaskManager and the load of the TaskScheduler and the server, and decides how many StatusUpdateSchedulerTask and MetricsUpdateSchedulerTask instances the TaskScheduler ultimately creates to poll the status and Metrics information of all running tasks.
+
+#### 3.4 The UML class diagram of TaskScheduler system is as follows:
+
+![img](../../../images/zh_CN/ch1/job_backend_uml_4.png)
+
+
+Figure 4-5 UML class diagram of Task Scheduler system
+
+
+ TaskScheduler is implemented based on linkis-scheduler module.
+
+#### 3.5 The UML class diagram of the Listener system is as follows:
+
+![img](../../../images/zh_CN/ch1/job_backend_uml_5.png)
+
+
+Figure 4-6 UML class diagram of listener system
+
+
+The Listener system is the core mechanism that ensures all information is updated to the database; the implementation classes of these listeners should all be Service classes.
\ No newline at end of file
diff --git a/docs/en_US/ch1/exchangis_job_execute_interface_en.md b/docs/en_US/ch1/exchangis_job_execute_interface_en.md
new file mode 100644
index 000000000..27be74f00
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_job_execute_interface_en.md
@@ -0,0 +1,361 @@
+# Exchangis job execution module interface document
+
+### 1、Submit the configured job for execution.
+
+Interface description: Submit the ExchangisJob; the back end returns a jobExecutionId.
+
+Request URL:/api/rest_j/v1/exchangis/job/{id}/execute
+
+Request mode:POST
+
+Request parameters:
+
+| Name | Type | Remark | If required | Default value |
+| --------------------- | ------- | ------------------------------------------------------------ | ----------- | ------------- |
+| id | Long | ID of the ExchangisJob | yes | / |
+| permitPartialFailures | boolean | Whether partial failure is allowed. If true, even if some subtasks fail, the whole Job will continue to execute. After the execution is completed, the Job status is Partial_Success. This parameter is the requestBody parameter. | no | false |
+
+Return parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ------------------------------------ | ----------- | ------------- |
+| method | String | Called method (request path) | yes | / |
+| status | int | Response status code | yes | / |
+| message | String | Information of the response | no | / |
+| data | Map | The returned data | yes | / |
+| jobExecutionId | String | The execution ID returned for this submission of the Job | yes | / |
+
+Return example:
+
+```json
+{
+ "method": "/api/rest_j/v1/exchangis/job/{id}/execute",
+ "status": 0,
+ "message": "Submitted succeed(Submit successfully)!",
+ "data": {
+ "jobExecutionId": "555node1node2node3execId1"
+}
+
+```
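+
+For example, invoking this interface with curl (gateway address, port and job id are placeholders):
+
+```shell
+# permitPartialFailures is optional and defaults to false
+curl -X POST "http://${GATEWAY_HOST}:${GATEWAY_PORT}/api/rest_j/v1/exchangis/job/55/execute" \
+     -H "Content-Type: application/json" \
+     -d '{"permitPartialFailures": false}'
+```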
+
+### 2、Get the execution status of Job
+
+Interface description: Get the status of Job according to jobExecutionId
+
+Request URL:/api/rest_j/v1/exchangis/job/execution/{jobExecutionId}/status
+
+Request mode:GET
+
+Request parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ---------------------------- | ----------- | ------------- |
+| jobExecutionId | String | Execution ID of ExchangisJob | yes | / |
+
+Return parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ------------------------------------------------------------ | ----------- | ------------- |
+| method | String | Called method (request path) | yes | / |
+| status | int | Response status code | yes | / |
+| message | String | Information of the response | no | / |
+| data | Map | The returned data | yes | / |
+| status | String | The status of the executed Job, one of: Inited, Scheduled, Running, WaitForRetry, Cancelled, Failed, Partial_Success, Success, Undefined, Timeout. Running means the Job is executing; Cancelled and the states after it are terminal | yes | / |
+
+Return example:
+
+```json
+{
+ "method": "/api/rest_j/v1/exchangis/job/execution/{id}/status",
+ "status": 0,
+ "message": "Submitted succeed(Submit successfully)!",
+ "data": {
+ "status": "Running",
+ "progress": 0.1
+}
+```
+
+### 3、Get the task list executed by this Job
+
+Interface description:Get the task list through jobExecutionId
+
+Prerequisite: the task list can only be obtained after the Job's execution status is Running, otherwise the returned task list is empty.
+
+Request URL:/api/rest_j/v1/exchangis/job/execution/{jobExecutionId}/tasklist
+
+Request mode:GET
+
+Request parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ------ | ----------- | ------------- |
+| jobExecutionId | String | Execution ID of ExchangisJob | yes | / |
+
+Return parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ------------------------------------------------------------ | ----------- | ------------- |
+| method | String | Called method (request path) | yes | / |
+| status | int | Response status code | yes | / |
+| message | String | Information of the response | no | / |
+| data | Map | The returned data | yes | / |
+| tasks | List | Task list. The Job's execution status must be Running before the task list can be obtained; otherwise the returned list is empty. Please note: a task has no Partial_Success status | yes | / |
+
+Return example:
+
+```json
+{
+ "method": "/api/rest_j/v1/exchangis/job/execution/{jobExecutionId}/tasklist",
+ "status": 0,
+ "message": "Submitted succeed(Submit successfully)!",
+ "data": {
+ "tasks": [
+ {
+ "taskId": 5,
+ "name": "test-1",
+ "status": "Inited", // There is no task Partial_Success status.
+ "createTime": "2022-01-03 09:00:00",
+ "launchTime": null,
+ "lastUpdateTime": "2022-01-03 09:00:00",
+ "engineType": "sqoop",
+ "linkisJobId": null,
+ "linkisJobInfo": null,
+ "executeUser": "enjoyyin"
+ }
+ ]
+ }
+}
+
+```
+
+### 4、Get the execution progress of Job & task
+
+Interface description: Get the execution progress through jobExecutionId
+
+Prerequisites: the execution status of the Job must be Running before you can get the progress of the task list.
+
+Request URL:/api/rest_j/v1/exchangis/job/execution/{jobExecutionId}/progress
+
+Request mode:GET
+
+Request parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ---------------------------- | ----------- | ------------- |
+| jobExecutionId | String | Execution ID of ExchangisJob | yes | / |
+
+Return parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ------------------------------------------------------------ | ----------- | ------------- |
+| method | String | Called method (request path) | yes | / |
+| status | int | Response status code | yes | / |
+| message | String | Information of the response | no | / |
+| data | Map | The returned data | yes | / |
+| job | Map | Progress of the Job and its tasks. The Job's execution status must be Running before the task progress can be obtained; otherwise the returned task list is empty | yes | / |
+
+Return example:
+
+```json
+{
+ "method": "/api/rest_j/v1/exchangis/job/execution/{jobExecutionId}/progress",
+ "status": 0,
+ "message": "Submitted succeed(Submit successfully)!",
+ "data": {
+ "job": {
+ "status": "Running",
+ "progress": 0.1,
+ "tasks": {
+ "running": [
+ {
+ "taskId": 5,
+ "name": "test-1",
+ "status": "Running",
+ "progress": 0.1
+ }
+ ],
+ "Inited": [
+ {
+ "taskId": 5,
+ "name": "test-1",
+ "status": "Inited",
+ "progress": 0.1
+ }
+ ],
+ "Scheduled": [],
+ "Success": [],
+ "Failed": [], // If there is a Failed task, the Job will fail directly.
+ "WaitForRetry": [],
+ "Cancelled": [], // If there is a Cancelled task, the Job will fail directly
+ "Undefined": [], // If there is a Undefined task, the Job will fail directly
+ "Timeout": []
+ }
+ }
+ }
+}
+
+```
+
+### 5、Get the indicator information of task runtime
+
+Interface description:Through jobExecutionId and taskId, we can get the information of various indicators when task is running.
+
+Prerequisites: before you can get the indicator information of the task, the execution status of the task must be Running; otherwise, the returned information is empty.
+
+Request URL:/api/rest_j/v1/exchangis/task/execution/{taskId}/metrics
+
+Request mode:POST
+
+Request parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | --------------------------------------------------- | ----------- | ------------- |
+| jobExecutionId | String | Execution ID of ExchangisJob,put it in requestBody | yes | / |
+| taskId | String | Execution ID of task,put it in URI | yes | / |
+
+Return parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ------------------------------------------------------------ | ----------- | ------------- |
+| method | String | Called method (request path) | yes | / |
+| status | int | Response status code | yes | / |
+| message | String | Information of the response | no | / |
+| data | Map | The returned data | yes | / |
+| task | Map | The task's metrics information. The task's execution status must be Running before its metrics can be obtained | yes | / |
+
+Return example:
+
+```json
+{
+ "method": "/api/rest_j/v1/exchangis/task/execution/{taskId}/metrics",
+ "status": 0,
+ "message": "Submitted succeed(Submit successfully)!",
+ "data": {
+ "task": {
+ "taskId": 5,
+ "name": "test-1",
+ "status": "running",
+ "metrics": {
+ "resourceUsed": {
+ "cpu": 10, // Unit:vcores
+ "memory": 20 // Unit:GB
+ },
+ "traffic": {
+ "source": "mysql",
+ "sink": "hive",
+ "flow": 100 // Unit:Records/S
+ },
+ "indicator": {
+ "exchangedRecords": 109345, // Unit:Records
+ "errorRecords": 5,
+ "ignoredRecords": 5
+ }
+ }
+ }
+ }
+}
+```
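+
+For example (gateway address, port and ids are placeholders; the jobExecutionId goes in the request body, the taskId in the URI):
+
+```shell
+curl -X POST "http://${GATEWAY_HOST}:${GATEWAY_PORT}/api/rest_j/v1/exchangis/task/execution/5/metrics" \
+     -H "Content-Type: application/json" \
+     -d '{"jobExecutionId": "555node1node2node3execId1"}'
+```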
+
+### 6、Get the real-time log of Job
+
+Interface description:Get the real-time log of Job through jobExecutionId.
+
+Request URL:/api/rest_j/v1/exchangis/job/execution/{jobExecutionId}/log?fromLine=&pageSize=&ignoreKeywords=&onlyKeywords=&lastRows=
+
+Request mode:GET
+
+Request parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ------------------------------------------------------------ | ----------- | ------------- |
+| jobExecutionId | String | Execution ID of ExchangisJob | yes | / |
+| fromLine | int | Read the starting line of | no | 0 |
+| pageSize | int | Read the number of log lines this time | no | 100 |
+| ignoreKeywords | String | Ignore the lines containing any of these keywords; multiple keywords are separated by English commas | no | / |
+| onlyKeywords | String | Select only the lines containing any of these keywords; multiple keywords are separated by English commas | no | / |
+| lastRows | int | Read only the last few lines of the log, which is equivalent to tail -f log. When this parameter is greater than 0, all the above parameters will be invalid | no | / |
+
+Return parameters:
+
+| Name | Type | Remark | If required | Default value |
+| ------- | ------- | ------------------------------------------------------------ | ----------- | ------------- |
+| method | String | Called method (request path) | yes | / |
+| status | int | Response status code | yes | / |
+| message | String | Information of the response | no | / |
+| data | Map | The returned data | yes | / |
+| endLine | int | The end line number of this reading, you can continue reading the log from endLine+1 next time | yes | / |
+| isEnd | boolean | Have all the logs been read | no | / |
+| logs | List | Returns the execution log of the Job | yes | / |
+
+Return example:
+
+```json
+{
+ "method": "/api/rest_j/v1/exchangis/job/execution/{jobExecutionId}/log",
+ "status": 0,
+ "message": "Submitted succeed(Submit successfully)!",
+ "data": {
+ "endLine": 99, // The end line number of this reading, you can continue reading the log from endLine+1 next time
+ "isEnd": false, // Have all the logs been read
+ "logs": [
+ "all": "",
+ "error": "",
+ "warn": "",
+ "info": ""
+ ]
+ }
+}
+```
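+
+For example, tailing the job log with curl (gateway address, port and id are placeholders; lastRows overrides the paging parameters):
+
+```shell
+curl "http://${GATEWAY_HOST}:${GATEWAY_PORT}/api/rest_j/v1/exchangis/job/execution/${JOB_EXECUTION_ID}/log?lastRows=100"
+```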
+
+### 7、Get the real-time log of task
+
+Interface description: Get the real-time log of task through jobExecutionId and taskId.
+
+Request URL:/api/rest_j/v1/exchangis/task/execution/{taskId}/log?jobExecutionId=&fromLine=&pageSize=&ignoreKeywords=&onlyKeywords=&lastRows=
+
+Request mode:GET
+
+Request parameters:
+
+| Name | Type | Remark | If required | Default value |
+| -------------- | ------ | ------------------------------------------------------------ | ----------- | ------------- |
+| taskId | String | Execution ID of task | yes | / |
+| jobExecutionId | String | Execution ID of ExchangisJob | yes | / |
+| fromLine | int | Read the starting line of | no | 0 |
+| pageSize | int | Read the number of log lines this time | no | 100 |
+| ignoreKeywords | String | Ignore the lines containing any of these keywords; multiple keywords are separated by English commas | no | / |
+| onlyKeywords | String | Select only the lines containing any of these keywords; multiple keywords are separated by English commas | no | / |
+| lastRows | int | Read only the last few lines of the log, which is equivalent to tail -f log. When this parameter is greater than 0, all the above parameters will be invalid | no | / |
+
+Return parameters:
+
+| Name | Type | Remark | If required | Default value |
+| ------- | ------- | ------------------------------------------------------------ | ----------- | ------------- |
+| method | String | Called method (request path) | yes | / |
+| status | int | Response status code | yes | / |
+| message | String | Information of the response | no | / |
+| data | Map | The returned data | yes | / |
+| endLine | int | The end line number of this reading, you can continue reading the log from endLine+1 next time. | yes | / |
+| isEnd | boolean | Have all the logs been read | yes | / |
+| logs | List | Returns the execution log of the task | yes | / |
+
+Return example:
+
+```json
+{
+ "method": "/api/rest_j/v1/exchangis/job/execution/{taskId}/log",
+ "status": 0,
+ "message": "Submitted succeed(Submit successfully)!",
+ "data": {
+ "endLine": 99, // The end line number of this reading, you can continue reading the log from endLine+1 next time.
+ "isEnd": false, // Have all the logs been read
+ "logs": [
+ "all": "",
+ "error": "",
+ "warn": "",
+ "info": ""
+ ]
+ }
+}
+```
+
diff --git a/docs/en_US/ch1/exchangis_sqoop_deploy_en.md b/docs/en_US/ch1/exchangis_sqoop_deploy_en.md
new file mode 100644
index 000000000..d0bd79f35
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_sqoop_deploy_en.md
@@ -0,0 +1,77 @@
+# Sqoop engine usage documentation
+### Prepare the environment
+The Sqoop engine is an indispensable component for executing Exchangis data synchronization tasks; only after it is installed and deployed can data synchronization tasks run successfully. Also make sure Sqoop itself is installed on the deployment machine.
+
+Before installing and deploying the Sqoop engine, please complete the installation of Exchangis 1.0.0 by following the [Exchangis deployment documentation](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/zh_CN/ch1/exchangis_deploy_cn.md).
+
+Sqoop engine mainly depends on Hadoop basic environment. If this node needs to deploy Sqoop engine, it needs to deploy Hadoop client environment.
+
+Before executing Sqoop tasks, it is strongly recommended to first run a test task with native Sqoop on this node to check that the node's environment is normal.
+
+| Environment variable name | Environment variable content | Remark |
+| :----: | :----: |-------|
+| JAVA_HOME | JDK installation path | Required |
+| HADOOP_HOME | Hadoop installation path | Required |
+| HADOOP_CONF_DIR | Hadoop config path | Required |
+| SQOOP_HOME | Sqoop installation path | Not Required |
+| SQOOP_CONF_DIR | Sqoop config path | Not Required |
+| HCAT_HOME | HCAT config path | Not Required |
+| HBASE_HOME | HBASE config path | Not Required |
+
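+A sketch of the environment variables expected on the engine node (the paths below are placeholder assumptions; point them at your actual installations):
+
+```
+export JAVA_HOME=/usr/java/jdk1.8.0_141
+export HADOOP_HOME=/appcom/Install/hadoop
+export HADOOP_CONF_DIR=/appcom/config/hadoop-config
+export SQOOP_HOME=/appcom/Install/sqoop      # optional
+export SQOOP_CONF_DIR=${SQOOP_HOME}/conf     # optional
+```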
+
+| Linkis system params | Params | Remark |
+| --------------------------- | -------------------------------------------------------- | ------------------------------------------------------------ |
+| wds.linkis.hadoop.site.xml | Set the location where sqoop loads hadoop parameter file | Required, please refer to the example:"/etc/hadoop/conf/core-site.xml;/etc/hadoop/conf/hdfs-site.xml;/etc/hadoop/conf/yarn-site.xml;/etc/hadoop/conf/mapred-site.xml" |
+| sqoop.fetch.status.interval | Set the interval for getting sqoop execution status | Not Required, the default value is 5s. |
+### Prepare installation package
+#### 1) Download the binary package
+
+Exchangis1.0.0 and Linkis 1.1.0 support the mainstream Sqoop versions 1.4.6 and 1.4.7; later versions may require modifying some code and recompiling.
+
+[Click to jump to Release interface](https://github.com/WeBankFinTech/Exchangis/releases)
+
+#### 2) Compile and package
+If you want to develop and compile the sqoop engine yourself, the specific compilation steps are as follows:
+
+1. Clone the Exchangis source code
+
+2. Under the exchangis-plugins module, find the sqoop engine and compile it separately, as follows:
+
+```
+cd {EXCHANGIS_CODE_HOME}/exchangis-plugins/engine/sqoop
+mvn clean install
+```
+The sqoop engine installation package will then be found at the following path:
+```
+{EXCHANGIS_CODE_HOME}/exchangis-plugins/sqoop/target/out/sqoop
+```
+
+
+### Start deployment
+#### 1) Sqoop engine installation
+1. Get the packaged sqoop.zip package
+
+2. Place it in the following directory and unzip it
+
+```
+cd {LINKIS_HOME}/linkis/lib/linkis-engineconn-plugins
+unzip sqoop.zip
+```
+The extracted directory structure is:
+```
+dist
+plugin
+```
+(Note: check which user owns the sqoop engine directory and has permissions on it; it is not necessarily root.)
+
+
+#### 2) Restart the linkis-engineplugin service to make the sqoop engine take effect
+New engines added to Linkis do not take effect until the linkis-engineplugin service is restarted. The restart script is ./linkis-daemon.sh in the Linkis installation directory. The specific steps are as follows:
+```
+cd {LINKIS_INSTALL_HOME}/linkis/sbin/
+./linkis-daemon.sh restart cg-engineplugin
+```
+Once the service starts successfully, the installation and deployment of the Sqoop engine is complete.
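+
+As a quick sanity check (assuming your Linkis version's linkis-daemon.sh supports the status subcommand, which this document does not confirm):
+
+```shell
+cd {LINKIS_INSTALL_HOME}/linkis/sbin/
+./linkis-daemon.sh status cg-engineplugin   # hypothetical verification step
+```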
+
+For a more detailed introduction to engineplugin, please refer to the following article:
+https://linkis.apache.org/zh-CN/docs/latest/deployment/engine_conn_plugin_installation
\ No newline at end of file
diff --git a/docs/en_US/ch1/exchangis_user_manual_en.md b/docs/en_US/ch1/exchangis_user_manual_en.md
index e69de29bb..b7a19158b 100644
--- a/docs/en_US/ch1/exchangis_user_manual_en.md
+++ b/docs/en_US/ch1/exchangis_user_manual_en.md
@@ -0,0 +1,264 @@
+# Exchangis1.0 User Manual
+
+## 1. Product introduction
+
+ This article is a quick-start guide for Exchangis 1.0 that covers its basic usage flow. Exchangis is a lightweight data exchange service platform that supports data synchronization between different types of data sources. The platform decomposes the data exchange process and abstracts the concepts of data source, data exchange task, and task scheduling, so that the data synchronization process can be managed visually. During actual data transmission, the characteristics of multiple transmission components can be combined to extend functionality horizontally.
+
+## 2. Log in to Exchangis1.0
+
+ Exchangis1.0 is currently part of the DSS **data exchange component**, and it can be accessed from the component list after logging in to DSS. Therefore, before using Exchangis1.0, please complete the basic deployment of DSS, Exchangis1.0, Linkis, and the other related components to ensure that their functions are available. This article does not cover the deployment details; see [exchangis_deploy_en](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/en_US/ch1/exchangis_deploy_en.md) and [exchangis-appconn_deploy_en](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/en_US/ch1/exchangis_appconn_deploy_en.md).
+
+### 1. Log in to DSS
+
+ By default, DSS logs in with the Linux deployment user of Linkis; if the hadoop user deployed Linkis and DSS, you can log in directly with the account and password hadoop/hadoop. First open the DSS front-end address in your browser, then enter the account and password hadoop/hadoop to enter DSS.
+
+### 2. Enter Exchangis
+
+ Exchangis is accessed through DSS. Click **Home -> DSS Application Components -> Data Exchange -> Enter Exchangis**.
+
+![exchangis1.0_entrance](../../../images/zh_CN/ch1/exchangis1.0_entrance.png)
+
+Pic2-1 Exchangis1.0 entrance
+
+
+## 3. Datasource management
+
+ This module configures and manages data sources. As the first step of data synchronization, Exchangis1.0 currently supports direct data transfer between MySQL and Hive.
+The main functions of the data source module are as follows:
+
+1. Create, edit, and delete data sources;
+2. Search data sources by type and name, with quick data source lookup;
+3. Test data source connections;
+4. Publish data sources and record historical data source versions.
+
+![datasource_list](../../../images/zh_CN/ch1/datasource_list.png)
+
+
+Pic3-1 Datasource management list
+
+
+
+### 1. Create datasource
+
+ Click **Create Data Source** and select the data source you want to create. Currently, MySQL and Hive data sources can be created.
+
+![datasource_type](../../../images/zh_CN/ch1/datasource_type.png)
+
+
+Pic3-2 Datasource type
+
+
+
+ Select the MySQL data source and fill in the configuration parameters; fields marked with an asterisk are required. Make sure the host, port, username, and password used to connect to the MySQL database are correct. The **connection parameters** field is in JSON format and is used to set MySQL configuration options. After filling everything in, you can **test the connection**; an illustrative sketch of the connection parameters follows the figure below.
+
+![MySQL_datasource_config](../../../images/zh_CN/ch1/MySQL_datasource_config.png)
+
+
+Pic3-3 MySQL datasource config
+
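+ As an illustration only, the **connection parameters** JSON might carry common JDBC options; the keys below are examples commonly seen with MySQL JDBC, not values confirmed by this document.
+
+```shell
+# Hypothetical MySQL "connection parameters" value; jq is used here only to
+# validate and pretty-print the JSON before pasting it into the form.
+echo '{"characterEncoding": "UTF-8", "useSSL": "false"}' | jq .
+```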
+
+
+ The configuration of a Hive data source differs from MySQL. For now, the interface does not provide a way for users to configure cluster parameters; the cluster environment is configured uniformly on the back end. Users only need to select the required cluster environment and click OK to save.
+
+![Hive_datasource_config](../../../images/zh_CN/ch1/Hive_datasource_config.png)
+
+
+Pic3-4 Hive datasource config
+
+
+
+### 2. Datasource functions
+
+ The data source management module provides a **publish** function for versions of a configured data source. Only a published data source can be used when configuring data synchronization tasks; otherwise it is flagged as unavailable. Whenever a data source is edited again, it is treated as a new version, with the latest version in the first row. You can **view** the configuration of every historical version in the version list and refer to it whenever you need to roll back.
+
+![datasource_func](../../../images/zh_CN/ch1/datasource_func.png)
+
+
+Pic3-5 Datasource release
+
+
+ The **expire** function in data source management indicates that a data source is being phased out. Please update any task configurations that use this data source in good time, to avoid execution failures that deleting the data source directly would cause.
+
+![datasource_timelimit](../../../images/zh_CN/ch1/datasource_timelimit.png)
+
+
+Pic3-6 Datasource expiration
+
+
+## 4. Project management
+
+### 1. Project list
+
+ This module manages projects. In practice, a project can contain multiple data synchronization tasks, and different projects do not affect each other. Ordinary users can only operate on the projects they created. On the project management homepage you can **create**, **modify**, **delete**, and **query** projects. Modification and deletion are only possible for projects created in Exchangis.
+
+![item_list](../../../images/zh_CN/ch1/item_list.png)
+
+Pic4-1 Project list
+
+
+
+### 2. Task list
+
+ In the task list you can manage the created data synchronization Jobs; similar to projects, this includes **create**, **modify**, **delete**, and **search**.
+
+![job_task_list](../../../images/zh_CN/ch1/job_task_list.png)
+
+Pic4-2 Task list
+
+
+ In addition, **tasks can be copied**, which is a quick way to create the tasks you need; a copied task contains all the configuration of the original task. Click **Create Task** to select the task type and execution engine. **Currently only offline tasks and the SQOOP execution engine are supported**; streaming tasks and the DataX engine will be supported in the future.
+
+![task_type_and_engine](../../../images/zh_CN/ch1/task_type_and_engine.png)
+
+Pic4-3 Task type and engine config
+
+
+### 3. Data synchronization task configuration and execution
+
+ Data synchronization task configuration and execution is the core function of Exchangis1.0. The basic flow is: **Add subtask -> Select the source and destination database tables (add data sources in the data source management module so they can be selected) -> Field mapping (default) -> Process control (default) -> Configuration (default) -> Execute**.
+
+The main functions of task execution include:
+1. Add, copy, and delete subtask cards;
+2. Import and export data between two different types of data sources;
+3. Select the databases and tables of the source and destination data sources;
+4. Map data source fields;
+5. Configure maximum job concurrency and maximum job memory;
+6. View the execution status of data synchronization tasks;
+7. View the logs of each main task and each subtask;
+8. View task execution history;
+9. Kill a running task.
+
+### 4. Selecting and configuring data sources
+
+ Click **Add Subtask** to start creating a data synchronization task. For a newly created subtask, you must first select the data source database and table. A data source must be configured in advance in the **data source management module** before it appears in task configuration. Data source selection supports search: first the database is searched, then the table.
+
+![add_subtask](../../../images/zh_CN/ch1/add_subtask.png)
+
+Pic4-4 Add sub-task
+
+
+ When MySQL is the destination data source, two write methods are supported: **insert** and **update**; when MySQL is the source data source, **WHERE condition queries** are supported.
+
+ When Hive is the destination data source, partition information can be configured and the write methods are **append** and **overwrite**; when Hive is the source data source, **partition information can also be configured**.
+
+### 5. Datasource field mapping
+
+ Once the database table information is configured, Exchangis1.0 automatically generates the field mappings between the source and destination data sources in the **field mapping** row, and you can choose the fields you want to map. Note that when Hive is the destination data source, its mapping fields cannot be modified.
+
+### 6. Process control
+
+ Task execution provides configuration of the maximum task parallelism and the maximum job memory, which can be adjusted according to actual needs.
+
+### 7. Job execution
+
+ Exchangis1.0 supports the simultaneous execution of multiple subtasks. After the task configuration is complete, click Execute to start the data synchronization task, and a workbench pops up at the bottom of the interface. The workbench offers three functions: **running status, real-time log, and execution history**.
+ **Running status**: view the overall progress of the current data synchronization task, including the number of successful and failed tasks; click a task name to display its various runtime metrics.
+
+ **Real-time log**: displays two categories of logs. One is the log of the whole synchronization job, which outputs the status of each task, such as whether a task has been scheduled and whether it is running normally; the other is the log of each individual task, which outputs the corresponding synchronization log. In the real-time log you can filter by keywords and ignored words, and a separate function fetches the last N lines of the log; you can also filter and display Error, Warning, and Info logs by clicking the corresponding buttons.
+
+ **Execution history**: displays the historical execution information of the synchronization task and provides a preliminary overview of past executions. To inspect the detailed history, click the task name to jump to the synchronization history page.
+
+ Data synchronization tasks run as a specified execution user, which defaults to the login user; adjust it according to the configuration of the actual data source.
+
+
+## 5. Synchronization history
+
+ This module shows all data synchronization tasks that have been executed historically. Each user can only view the tasks they created; different users are isolated from each other.
+
+ The main functions are as follows:
+1. Find the required historical task information according to query conditions;
+2. For unfinished tasks, a kill function is provided to terminate them;
+3. Check the running status and real-time log of each task;
+4. View more detailed configuration information and the update time of each synchronization task.
+
+![sync_history](../../../images/zh_CN/ch1/sync_history.png)
+
+
+Pic5-1 Synchronization history
+
+
+## 6. Using the Exchangis AppConn
+
+ Exchangis1.0 currently supports integration with DSS in the form of an AppConn. **On the DSS side**, a data-exchange sqoop workflow node can be created in workflow orchestration mode through DSS **Application Development -> Project List**, where data synchronization tasks can be configured and executed. Exchangis projects and data exchange tasks created in DSS are created in Exchangis at the same time.
+
+The Exchangis AppConn mainly supports the following functions:
+
+1. **Project operations**: creating, deleting, and modifying DSS projects synchronously affects the corresponding projects in Exchangis;
+
+2. **Basic workflow node operations**: creating, deleting, and modifying sqoop workflow node tasks in the DSS orchestrator is synchronized to Exchangis;
+
+3. **Workflow execution operations**: sqoop workflow nodes can be configured to execute data synchronization tasks;
+
+4. **Workflow publishing operations**: sqoop workflow nodes can be published to WTSS for task scheduling.
+
+### 1. Project operations
+
+ This module supports creating, modifying, and deleting DSS projects, and operations on the DSS side are synchronized to the Exchangis side. Taking project creation in DSS as an example, the process is: **click Create Project -> fill in the project information -> click Confirm -> go to the Exchangis side -> click Project Management**, where you can view the synchronously created project, as shown in the following figure:
+
+![appconn_pro_create](../../../images/zh_CN/ch1/appconn_pro_create.png)
+
+
+Pic6-1 Project operation
+
+
+After creation, you will see the synchronized project on the Exchangis side.
+
+![appconn_pro_sync](../../../images/zh_CN/ch1/appconn_pro_sync.jpg)
+
+
+Pic6-2 Synchronize the project into Exchangis
+
+
+### 2. Basic workflow node operations
+
+ Workflow nodes can be created, modified, deleted, and linked by dependencies, and operations on the DSS side are synchronized to the Exchangis side. Taking the creation of a sqoop workflow node as an example, the process in the Exchangis AppConn is: **create a workflow -> drag a sqoop node from the plugin bar on the left onto the canvas -> click OK to create the sqoop node task -> go to Exchangis to view the synchronously created task**, as shown in the following figure; deleting and modifying sqoop node tasks works the same way.
+
+![appconn_pro_sqoop](../../../images/zh_CN/ch1/appconn_pro_sqoop.png)
+
+
+Pic6-3 Sqoop node function
+
+
+ You can see that the data synchronization task has also been synchronized to Exchangis.
+
+![appconn_pro_sqoop_sync](../../../images/zh_CN/ch1/appconn_pro_sqoop_sync.jpg)
+
+
+Pic6-4 Synchronize the sqoop node into Exchangis
+
+
+### 3. Workflow execution operations
+
+ Double-click the sqoop node to operate on the workflow node. Configuring and executing data synchronization tasks through sqoop workflow nodes is the core function of the Exchangis AppConn. **Each sqoop node represents one data synchronization task**, and the process is: **double-click the sqoop node -> the task configuration interface pops up -> configure the task information -> execute the task**, as shown in the following figure:
+
+![sqoop_config](../../../images/zh_CN/ch1/sqoop_config.png)
+
+
+Pic6-5 Double-click the sqoop workflow node to enter the configuration interface.
+
+
+ There are two ways to execute: one is to click the Execute button in the pop-up task configuration interface; the other is to click the **Execute** or **Selected Execute** button of the DSS orchestrator. **Execute** runs all the nodes in the workflow, while **Selected Execute** runs only the selected workflow nodes.
+
+![sqoop_execute](../../../images/zh_CN/ch1/sqoop_execute.png)
+
+
+Pic6-6 Execute job
+
+
+Note: for data synchronization tasks executed in a DSS sqoop node, the relevant information can also be viewed in Exchangis.
+
+### 4. Workflow publishing operations
+
+ The **publish** function of workflow tasks supports publishing sqoop workflow nodes to WTSS for task scheduling. Data exchange task information created and configured in the **Development Center** can be published to WTSS, and the tasks can then be scheduled in WTSS.
+
+### 5. Production center
+
+ Click the drop-down box in the namespace and switch to the **Production Center**, where you can see the workflow logs of all projects and check the scheduling status of each workflow.
+
+![production_center](../../../images/zh_CN/ch1/production_center.png)
+
diff --git a/docs/zh_CN/ch1/README.md b/docs/zh_CN/ch1/README.md
deleted file mode 100644
index cc2d8ccb0..000000000
--- a/docs/zh_CN/ch1/README.md
+++ /dev/null
@@ -1,62 +0,0 @@
-[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
-
-[English](../../../README.md) | 中文
-
-## 项目简介
-Exchangis是一个轻量级的、高扩展性的数据交换平台,支持对结构化及无结构化的异构数据源之间的数据传输,在应用层上具有数据权限管控、节点服务高可用和多租户资源隔离等业务特性,而在数据层上又具有传输架构多样化、模块插件化和组件低耦合等架构特点。
-
-Exchangis的传输交换能力依赖于其底层聚合的传输引擎,其顶层对各类数据源定义统一的参数模型,每种传输引擎对参数模型进行映射配置,转化为引擎的输入模型。每聚合一种引擎,都将增加Exchangis一类特性,对某类引擎的特性强化,都是对Exchangis特性的完善。默认聚合以及强化Alibaba的DataX传输引擎。
-
-## 核心特点
-- **数据源管理**
-以绑定项目的方式共享自己的数据源;
-设置数据源对外权限,控制数据的流入和流出。
-
-- **多传输引擎支持**
-传输引擎可横向扩展;
-当前版本完整聚合了离线批量引擎DataX、部分聚合了大数据批量导数引擎SQOOP
-
-- **近实时任务管控**
-快速抓取传输任务日志以及传输速率等信息,实时关闭任务;
-可根据带宽状况对任务进行动态限流
-
-- **支持无结构化传输**
-DataX框架改造,单独构建二进制流快速通道,适用于无数据转换的纯数据同步场景。
-
-- **任务状态自检**
-监控长时间运行的任务和状态异常任务,及时释放占用的资源并发出告警。
-
-## 与现有的系统的对比
-对现有的一些数据交换工具和平台的对比:
-
-| 功能模组 | 描述 | Exchangis | DataX | Sqoop | DataLink | DBus |
-| :----: | :----: |-------|-------|-------|-------|-------|
-| UI | 集成便捷的管理界面和监控窗口| 已集成 | 无 | 无 | 已集成 |已集成 |
-| 安装部署 | 部署难易程度和第三方依赖 | 一键部署,无依赖 | 无依赖 | 依赖Hadoop环境 | 依赖Zookeeper | 依赖大量第三方组件 |
-| 数据权限管理| 多租户权限配置和数据源权限管控 | 支持 | 不支持 | 不支持 | 不支持 | 支持 |
-| |动态限流传输 | 支持 | 部分支持,无法动态调整 | 部分支持,无法动态调整| 支持 | 支持,借助Kafka |
-| 数据传输| 无结构数据二进制传输 | 支持,快速通道 | 不支持 | 不支持 | 不支持,都是记录 | 不支持,需要转化为统一消息格式|
-| | 嵌入处理代码 | 支持,动态编译 | 不支持 | 不支持 | 不支持 | 部分支持 |
-| | 传输断点恢复 | 支持(未开源) | 不支持,只能重试 | 不支持,只能重试 | 支持 | 支持 |
-| 服务高可用 | 服务多点,故障不影响使用| 应用高可用,传输单点(分布式架构规划中) | 单点服务(开源版本) | 传输多点 | 应用、传输高可用 | 应用、传输高可用 |
-| 系统管理 | 节点、资源管理 | 支持 | 不支持 | 不支持 | 支持 | 支持 |
-
-## 整体设计
-
-### 架构设计
-
-![架构设计](../../../images/zh_CN/ch1/architecture.png)
-
-## 相关文档
-[安装部署文档](exchangis_deploy_cn.md)
-[用户手册](exchangis_user_manual_cn.md)
-
-## 交流贡献
-
-如果您想得到最快的响应,请给我们提 issue,或者扫码进群:
-
-![communication](../../../images/communication.png)
-
-## License
-
-Exchangis is under the Apache 2.0 License. See the [License](../../../LICENSE) file for details.
\ No newline at end of file
diff --git a/docs/zh_CN/ch1/exchangis_appconn_deploy_cn.md b/docs/zh_CN/ch1/exchangis_appconn_deploy_cn.md
new file mode 100644
index 000000000..aedf6db48
--- /dev/null
+++ b/docs/zh_CN/ch1/exchangis_appconn_deploy_cn.md
@@ -0,0 +1,87 @@
+# ExchangisAppConn Installation Documentation
+This article mainly describes the deployment, configuration, and installation of ExchangisAppConn in DSS (DataSphere Studio) 1.0.1.
+### 1. Preparations for deploying ExchangisAppConn
+Before deploying ExchangisAppConn, please follow the [Exchangis1.0.0 installation and deployment documentation](https://github.com/WeDataSphere/Exchangis/blob/dev-1.0.0-rc/docs/zh_CN/ch1/exchangis_deploy_cn.md) to complete the installation of Exchangis1.0.0 and the other related components, and make sure the basic project functions are available.
+
+### 2. Downloading and compiling the ExchangisAppConn plugin
+#### 1) Download the binary package
+We provide a prebuilt ExchangisAppConn package that can be downloaded and used directly. [Click to jump to the Release page](https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/Exchangis/exchangis1.0.0-rc/exchangis-appconn.zip)
+#### 2) Compile and package
+
+If you want to develop and compile ExchangisAppConn yourself, the specific compilation steps are as follows:
+1. Clone the Exchangis source code
+2. Under the exchangis-plugins module, find exchangis-appconn and compile it separately:
+```
+cd {EXCHANGIS_CODE_HOME}/exchangis-plugins/exchangis-appconn
+mvn clean install
+```
+The exchangis-appconn.zip installation package can then be found at the following path:
+```
+{EXCHANGIS_CODE_HOME}/exchangis-plugins/exchangis-appconn/target/exchangis-appconn.zip
+```
+
+### 3. Overall steps for deploying and configuring the ExchangisAppConn plugin
+1. Get the packaged exchangis-appconn.zip package
+
+2. Place it in the following directory and unzip it
+
+Note: after unzipping exchangis-appconn for the first time, make sure the current folder does not contain an index_v0000XX.index file; that file is generated later.
+
+```
+cd {DSS_Install_HOME}/dss/dss-appconns
+unzip exchangis-appconn.zip
+```
+The extracted directory structure is:
+```
+conf
+db
+lib
+appconn.properties
+```
+
+3. Run the script to perform an automated installation
+
+```shell
+cd {DSS_INSTALL_HOME}/dss/bin
+./install-appconn.sh
+# The script is an interactive installer; enter the string "exchangis" and the
+# IP and port of the Exchangis service to complete the installation.
+# The Exchangis port here is the front-end port configured in nginx,
+# not the back-end service port.
+```
+
+### 4. After installing exchangis-appconn, restart the DSS services to finish updating the plugin
+
+#### 4.1) Make the deployed AppConn take effect
+Use the DSS start/stop scripts to make the AppConn take effect. Go to the script directory {DSS_INSTALL_HOME}/sbin and run the following commands in order:
+```
+sh ./dss-stop-all.sh
+sh ./dss-start-all.sh
+```
+#### 4.2) Verify that exchangis-appconn has taken effect
+After installing and deploying exchangis-appconn, you can preliminarily verify the installation with the following steps.
+1. Create a new project in the DSS workspace
+![image](https://user-images.githubusercontent.com/27387830/169782142-b2fc2633-e605-4553-9433-67756135a6f1.png)
+
+2. Check on the Exchangis side whether the project was created synchronously; if so, the AppConn was installed successfully
+![image](https://user-images.githubusercontent.com/27387830/169782337-678f2df0-080a-495a-b59f-a98c5a427cf8.png)
+
+For more usage instructions, see the [Exchangis1.0 User Manual](exchangis_user_manual_cn.md)
+
+### 5. How the Exchangis AppConn installation works
+
+The configuration information of Exchangis is inserted into the following tables; by configuring these tables you can complete the usage configuration of Exchangis. When the Exchangis AppConn is installed, the script runs the init.sql under each AppConn and inserts the records into these tables. (Note: if you only need a quick AppConn installation, you do not need to pay close attention to these fields; most of them already have default values in the provided init.sql. Focus on the steps above.)
+
+| Table name | Purpose | Remark |
+| :----: | :----: |-------|
+| dss_application | Application table; stores the basic information of the exchangis application | Required |
+| dss_menu | Menu table; stores the externally displayed content such as icons and names | Required |
+| dss_onestop_menu_application | Association table between menu and application, used for joint lookups | Required |
+| dss_appconn | Basic information of the AppConn, used to load the AppConn | Required |
+| dss_appconn_instance | Information about AppConn instances, including their URLs | Required |
+| dss_workflow_node | Information that Exchangis needs to insert as a workflow node | Required |
+
+Exchangis implements the first-level and second-level AppConn specifications. The microservices that need to use the exchangis AppConn are listed in the following table.
+
+| Microservice name | Purpose | Remark |
+| :----: | :----: |-------|
+| dss-framework-project-server | Uses exchangis-appconn to unify projects and organizations | Required |
+| dss-workflow-server | Uses the scheduling AppConn to publish workflows and obtain their status | Required |
diff --git a/docs/zh_CN/ch1/exchangis_datasource_cn.md b/docs/zh_CN/ch1/exchangis_datasource_cn.md
new file mode 100644
index 000000000..908f4f4c6
--- /dev/null
+++ b/docs/zh_CN/ch1/exchangis_datasource_cn.md
@@ -0,0 +1,306 @@
+# DataSource1.0
+
+## 1. Background
+
+Both **Exchangis0.x** and **Linkis0.x** in earlier versions integrated a data source module. The data source module has been refactored with **linkis-datasource** as the blueprint (see the related documentation).
+
+## 2. Overall architecture design
+
+To build a common data source module, the module is split into two main parts: **datasource-client** and **datasource-server**. The server part lives in the **linkis-datasource** module of **Linkis-1.0** and contains the core service logic, while the client part lives in the **exchangis-datasource** module of **Exchangis-1.0** and contains the client invocation logic. The overall architecture is shown below.
+
+![linkis_datasource_structure](../../../images/zh_CN/ch1/datasource_structure.png)
+
+