English | 简体中文
GT is an open source reverse proxy project that supports peer-to-peer direct connection (P2P) and Internet relay.
It has the following design features:
- Emphasis on privacy protection: server-side packet analysis is kept to the minimum needed for the functionality. For example, for application-layer HTTP forwarding over TCP, only the HTTP header of the first packet is parsed, and all subsequent data is forwarded directly without further analysis.
- Emphasis on performance: the implementation favors higher-performance designs, such as modified standard libraries that reduce memory allocation and copying.
- P2P connections are implemented on top of WebRTC, so all platforms that support WebRTC are supported, such as iOS, Android, and browsers.
Currently implemented main functions:
- Forward communication protocols based on TCP, such as HTTP(S), WebSocket(S), SSH, SMB
- WebRTC P2P connection
- Multi-user functionality
- Support multiple user authentication methods: API service, local configuration
- Each user has independent configuration
- Limit user speed
- Limit number of client connections
- Refuse access for a period of time if authentication fails more than a certain number of times
- Communication between server and client uses a TCP connection pool
- Maintain consistency between command line parameters and YAML configuration parameters
- Support log reporting to Sentry service
- GT
- Index
- Working Principle
- Usage
- Configuration File
- Server User Configuration
- Server TCP Configuration
- Command Line Parameters
- Internal HTTP Penetration
- Internal HTTPS Penetration
- Internal HTTPS SNI Penetration
- Encrypt Client-Server Communication with TLS
- Internal TCP Penetration
- Internal QUIC Penetration
- Intelligent Internal Penetration (Adaptive Selection of TCP/QUIC)
- Client Start Multiple Services Simultaneously
- Server API
- Performance Test
- Run
- Compilation
- Roadmap
- Contribution Guide
    ┌──────────────────────────────────────┐
    │     Web    Android    iOS    PC  ... │
    └──────────────────┬───────────────────┘
                ┌──────┴──────┐
                │  GT Server  │
                └──────┬──────┘
       ┌───────────────┼───────────────┐
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│  GT Client  │ │  GT Client  │ │  GT Client  │ ...
└──────┬──────┘ └──────┬──────┘ └──────┬──────┘
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│     SSH     │ │   HTTP(S)   │ │     SMB     │ ...
└─────────────┘ └─────────────┘ └─────────────┘
Configuration files use YAML format. Both clients and servers can use configuration files.
./release/linux-amd64-server -config server.yaml
./release/linux-amd64-client -config client.yaml
See the server.yaml file for a basic server configuration. See the client.yaml file for a basic client configuration.
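For reference, a minimal server configuration might look like the following (a sketch based on the options documented in this README; the server.yaml shipped with the repository may differ):
version: 1.0
users:
  id1:
    secret: secret1
options:
  addr: 8080
A full client.yaml example is shown in the "Client Start Multiple Services Simultaneously" section below.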
The following four ways can be used simultaneously. Conflicts are resolved in order of decreasing priority from top to bottom.
The ith id matches the ith secret. The following two startup methods are equivalent:
./release/linux-amd64-server -addr 8080 -id id1 -secret secret1 -id id2 -secret secret2
./release/linux-amd64-server -addr 8080 -id id1 -id id2 -secret secret1 -secret secret2
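Users can also be defined in a users configuration file (referenced by the users option), for example: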
id3:
  secret: secret3
id1:
  secret: secret1-overwrite
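Users can likewise be defined in the users section of the config configuration file, together with global options, for example: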
version: 1.0
users:
  id1:
    secret: secret1
  id2:
    secret: secret2
options:
  apiAddr: 1.2.3.4:1234
  certFile: /path
  keyFile: /path
  logFile: /path
  logFileMaxCount: 1234
  logFileMaxSize: 1234
  logLevel: debug
  addr: 1234
  timeout: 1234m1234ms
  tlsAddr: 1234
  tlsVersion: tls1.3
  users: testdata/users.yaml
Add -allowAnyClient to the server startup parameters to allow any client to connect without being configured on the server side. For security, the secret of the first client that connects with a given id is used as the correct secret for that id and cannot be overwritten by the secret of subsequent clients connecting with the same id.
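For example, the following start command (a minimal sketch using only flags shown in this document) accepts any client; the first client that connects with -id id1 then fixes the secret for that id:
./release/linux-amd64-server -addr 8080 -allowAnyClient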
The following three methods can be used simultaneously. Priority: user-level settings > global settings. User-level priority: users configuration file > config configuration file. Global priority: command line > config configuration file. If TCP is not configured anywhere, TCP functionality is not enabled.
The users configuration file can configure TCP for individual users. The following configuration indicates that user id1 can open any number of TCP ports on any port, while user id2 has no permission to open TCP ports.
id1:
  secret: secret1
  tcp:
    - range: 1-65535
id2:
  secret: secret2
The config configuration file can configure global TCP settings as well as TCP for individual users. The following configuration indicates that user id1 can open any number of TCP ports in the range 10000-20000, and user id2 can open one TCP port in the range 50000-65535.
version: 1.0
users:
  id1:
    secret: secret1
    tcp:
      - range: 10000-20000
    tcpNumber: 0
  id2:
    secret: secret2
    tcp:
      - range: 50000-65535
options:
  apiAddr: 1.2.3.4:1234
  certFile: /path
  keyFile: /path
  logFile: /path
  logFileMaxCount: 1234
  logFileMaxSize: 1234
  logLevel: debug
  addr: 1234
  timeout: 1234m1234ms
  tlsAddr: 1234
  tlsVersion: tls1.3
  users: testdata/users.yaml
  tcpNumber: 1
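With the configuration above, user id2 could, for example, request its single allowed TCP port when starting the client (a hypothetical invocation; the hostname is a placeholder, and the -remoteTCPPort flag is described under Internal TCP Penetration below):
./release/linux-amd64-client -local tcp://127.0.0.1:22 -remote tcp://id2.example.com:1234 -id id2 -secret secret2 -remoteTCPPort 50022
The full list of server and client command line parameters can be printed with -h: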
./release/linux-amd64-server -h
./release/linux-amd64-client -h
- Requirement: There is an internal network server and a public network server, and id1.example.com resolves to the address of the public network server. The goal is to access the web service on port 80 of the internal network server by visiting id1.example.com:8080.
- Server (public network server)
./release/linux-amd64-server -addr 8080 -id id1 -secret secret1
- Client (Internal network server)
./release/linux-amd64-client -local http://127.0.0.1:80 -remote tcp://id1.example.com:8080 -id id1 -secret secret1
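Once both sides are running, the forwarded page can be checked from any machine, for example:
curl http://id1.example.com:8080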
- Requirement: There is an internal network server and a public network server, and id1.example.com resolves to the address of the public network server. The goal is to access the HTTP web page served on port 80 of the internal network server by visiting https://id1.example.com.
- Server (public network server)
./release/linux-amd64-server -addr "" -tlsAddr 443 -certFile /root/openssl_crt/tls.crt -keyFile /root/openssl_crt/tls.key -id id1 -secret secret1
- Client (internal network server). The -remoteCertInsecure option is used here because the server uses a self-signed certificate; do not use this option otherwise, since a man-in-the-middle attack could then decrypt the encrypted traffic.
./release/linux-amd64-client -local http://127.0.0.1 -remote tls://id1.example.com -remoteCertInsecure -id id1 -secret secret1
- Requirement: There is an internal network server and a public network server, and id1.example.com resolves to the address of the public network server. The goal is to access the HTTPS web page served on port 443 of the internal network server by visiting https://id1.example.com.
- Server (public network server)
./release/linux-amd64-server -addr 8080 -sniAddr 443 -id id1 -secret secret1
- Client (Internal network server)
./release/linux-amd64-client -local https://127.0.0.1 -remote tcp://id1.example.com:8080 -id id1 -secret secret1
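The SNI-based forwarding can be checked, for example, with openssl; the -servername value is the SNI host name that the server presumably uses to route the connection:
openssl s_client -connect id1.example.com:443 -servername id1.example.com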
- Requirement: There is an internal network server and a public network server, and id1.example.com resolves to the address of the public network server. The goal is to access the web service on port 80 of the internal network server by visiting id1.example.com:8080, while using TLS to encrypt the communication between the client and the server.
- Server (public network server)
./release/linux-amd64-server -addr 8080 -tlsAddr 443 -certFile /root/openssl_crt/tls.crt -keyFile /root/openssl_crt/tls.key -id id1 -secret secret1
- Client (internal network server). The -remoteCertInsecure option is used here because the server uses a self-signed certificate; do not use this option otherwise, since a man-in-the-middle attack could then decrypt the encrypted traffic.
./release/linux-amd64-client -local http://127.0.0.1:80 -remote tls://id1.example.com -remoteCertInsecure -id id1 -secret secret1
- Requirement: There is an internal network server and a public network server, and id1.example.com resolves to the address of the public network server. The goal is to access the SSH service on port 22 of the internal network server by visiting id1.example.com:2222; if port 2222 is not available on the server side, the server selects a random port.
- Server (public network server)
./release/linux-amd64-server -addr 8080 -id id1 -secret secret1 -tcpNumber 1 -tcpRange 1024-65535
- Client (Internal network server)
./release/linux-amd64-client -local tcp://127.0.0.1:22 -remote tcp://id1.example.com:8080 -id id1 -secret secret1 -remoteTCPPort 2222 -remoteTCPRandom
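The forwarded SSH service can then be reached through the public server, for example (replace user with an account that exists on the internal network server):
ssh -p 2222 user@id1.example.com
If port 2222 was unavailable and the server picked a random port, connect to that port instead.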
- Requirement: There is an internal network server and a public network server, and id1.example.com resolves to the address of the public network server. The goal is to access the web page served on port 80 of the internal network server by visiting id1.example.com:8080, using QUIC for the transport connection between the client and the server. QUIC uses TLS 1.3 for transport encryption. If certFile and keyFile are given, they are used for the encrypted communication; otherwise, a key and certificate are generated automatically using the ECDSA algorithm.
- Server (public network server)
./release/linux-amd64-server -addr 8080 -quicAddr 443 -certFile /root/openssl_crt/tls.crt -keyFile /root/openssl_crt/tls.key -id id1 -secret secret1
- Client (internal network server). The -remoteCertInsecure option is used because a self-signed certificate is used; do not use this option in other cases, since a man-in-the-middle attack could then decrypt the encrypted traffic.
./release/linux-amd64-client -local http://127.0.0.1:80 -remote quic://id1.example.com:443 -remoteCertInsecure -id id1 -secret secret1
- Requirement: There is an internal network server and a public network server, and id1.example.com resolves to the address of the public network server. The goal is to access the web page served on port 80 of the internal network server by visiting id1.example.com:8080. The GT server listens on multiple addresses, and the GT client is given multiple -remote options; intelligent switching between QUIC and TCP/TLS is currently supported. The GT client concurrently sends several sets of network status probes over QUIC connections to measure the network delay and packet loss rate between the internal network server and the public network server, feeds the results into a trained XGBoost model, and adaptively selects TCP+TLS or QUIC for internal network penetration.
- Server (public network server)
./release/linux-amd64-server -addr 8080 -quicAddr 443 -certFile /root/openssl_crt/tls.crt -keyFile /root/openssl_crt/tls.key -id id1 -secret secret1
- Client (internal network server). At least one of the -remote options must be a QUIC address.
./release/linux-amd64-client -local http://127.0.0.1:80 -remote quic://id1.example.com:443 -remote tcp://id1.example.com:8080 -remoteCertInsecure -id id1 -secret secret1
- Requirement: There is an internal network server and a public network server, and id1-1.example.com and id1-2.example.com resolve to the address of the public network server. The goals are to access the service on port 80 of the internal network server via id1-1.example.com:8080, the service on port 8080 via id1-2.example.com:8080, the service on port 2222 via id1-1.example.com:2222, and the service on port 2223 via id1-1.example.com:2223. At the same time, the server restricts the client's hostPrefix to contain only digits or only letters.
- Note: In this mode, the parameters associated with a -local option (remoteTCPPort, hostPrefix, etc.) must be placed between that -local and the next -local.
- Server (public network server)
./release/linux-amd64-server -addr 8080 -id id1 -secret secret1 -tcpNumber 2 -tcpRange 1024-65535 -hostNumber 2 -hostWithID -hostRegex ^[0-9]+$ -hostRegex ^[a-zA-Z]+$
- Client (Internal network server)
./release/linux-amd64-client -remote tcp://id1.example.com:8080 -id id1 -secret secret1 \
-local http://127.0.0.1:80 -useLocalAsHTTPHost -hostPrefix 1 \
-local http://127.0.0.1:8080 -useLocalAsHTTPHost -hostPrefix 2 \
-local tcp://127.0.0.1:2222 -remoteTCPPort 2222 \
-local tcp://127.0.0.1:2223 -remoteTCPPort 2223
The same setup can also be started using a configuration file:
./release/linux-amd64-client -config client.yaml
client.yaml file content:
services:
  - local: http://127.0.0.1:80
    useLocalAsHTTPHost: true
    hostPrefix: 1
  - local: http://127.0.0.1:8080
    useLocalAsHTTPHost: true
    hostPrefix: 2
  - local: tcp://127.0.0.1:2222
    remoteTCPPort: 2222
  - local: tcp://127.0.0.1:2223
    remoteTCPPort: 2223
options:
  remote: tcp://id1.example.com:8080
  id: id1
  secret: secret1
The server API detects service availability by simulating a client. The following example illustrates this; id1.example.com resolves to the address of the public network server. HTTPS is used when the apiCertFile and apiKeyFile options are not empty; otherwise HTTP is used.
- Server (Public network server)
./release/linux-amd64-server -addr 8080 -apiAddr 8081
- User
# curl http://id1.example.com:8081/status
{"status": "ok", "version":"linux-amd64-server - 2022-12-09 05:20:24 - dev 88d322f"}
wrk was used to load test this project against frp for comparison. The internal service points to a test page served by a local nginx instance. The test results are as follows:
Model Name: MacBook Pro
Model Identifier: MacBookPro17,1
Chip: Apple M1
Total Number of Cores: 8 (4 performance and 4 efficiency)
Memory: 16 GB
$ wrk -c 100 -d 30s -t 10 http://pi.example.com:7001
Running 30s test @ http://pi.example.com:7001
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 2.22ms 710.73us 37.99ms 98.30%
Req/Sec 4.60k 231.54 4.86k 91.47%
1374783 requests in 30.01s, 1.09GB read
Requests/sec: 45811.08
Transfer/sec: 37.14MB
$ ps aux
PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
2768 0.0 0.1 408697792 17856 s008 S+ 4:55PM 0:52.34 ./client -local http://localhost:8080 -remote tcp://localhost:7001 -id pi -threads 3
2767 0.0 0.1 408703664 17584 s007 S+ 4:55PM 0:52.16 ./server -port 7001
$ wrk -c 100 -d 30s -t 10 http://pi.example.com:7000
Running 30s test @ http://pi.example.com:7000
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 76.92ms 73.46ms 748.61ms 74.21%
Req/Sec 154.63 308.28 2.02k 93.75%
45487 requests in 30.10s, 31.65MB read
Non-2xx or 3xx responses: 20610
Requests/sec: 1511.10
Transfer/sec: 1.05MB
$ ps aux
PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
2975 0.3 0.5 408767328 88768 s004 S+ 5:01PM 0:21.88 ./frps -c ./frps.ini
2976 0.0 0.4 408712832 66112 s005 S+ 5:01PM 1:06.51 ./frpc -c ./frpc.ini
wrk was used to load test this project against frp for comparison. The internal service points to a test page served by a local nginx instance. The test results are as follows:
System: Ubuntu 22.04
Chip: Intel i9-12900
Total Number of Cores: 16 (8 performance and 8 efficiency)
Memory: 32 GB
$ ./release/linux-amd64-server -addr 12080 -id id1 -secret secret1
$ ./release/linux-amd64-client -local http://127.0.0.1:80 -remote tcp://id1.example.com:12080 -id id1 -secret secret1
$ wrk -c 100 -d 30s -t 10 http://id1.example.com:12080
Running 30s test @ http://id1.example.com:12080
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 558.51us 2.05ms 71.54ms 99.03%
Req/Sec 24.29k 2.28k 49.07k 95.74%
7264421 requests in 30.10s, 5.81GB read
Requests/sec: 241344.46
Transfer/sec: 197.70MB
$ ./release/linux-amd64-server -addr 12080 -quicAddr 443 -certFile /root/openssl_crt/tls.crt -keyFile /root/openssl_crt/tls.key -id id1 -secret secret1
$ ./release/linux-amd64-client -local http://127.0.0.1:80 -remote quic://id1.example.com:443 -remoteCertInsecure -id id1 -secret secret1
$ wrk -c 100 -d 30s -t 10 http://id1.example.com:12080
Running 30s test @ http://id1.example.com:12080
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 826.65us 1.14ms 66.29ms 98.68%
Req/Sec 12.91k 1.36k 23.53k 79.43%
3864241 requests in 30.10s, 3.09GB read
Requests/sec: 128380.49
Transfer/sec: 105.16MB
$ ./frps -c ./frps.toml
$ ./frpc -c ./frpc.toml
$ wrk -c 100 -d 30s -t 10 http://id1.example.com:12080/
Running 30s test @ http://id1.example.com:12080/
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.49ms 8.27ms 154.62ms 92.43%
Req/Sec 4.02k 2.08k 7.51k 53.21%
1203236 requests in 30.08s, 0.93GB read
Requests/sec: 40003.03
Transfer/sec: 31.82MB
wrk was used for stress testing to compare this project with frp. Each request returns a response of less than 10 bytes to simulate short HTTP requests. The test results are as follows:
$ ./release/linux-amd64-server -addr 12080 -id id1 -secret secret1
$ ./release/linux-amd64-client -local http://127.0.0.1:80 -remote tcp://id1.example.com:12080 -id id1 -secret secret1
$ wrk -c 100 -d 30s -t 10 http://id1.example.com:12080/
Running 30s test @ http://id1.example.com:12080/
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.55ms 13.48ms 220.23ms 95.31%
Req/Sec 5.23k 2.11k 12.40k 76.10%
1557980 requests in 30.06s, 191.67MB read
Requests/sec: 51822.69
Transfer/sec: 6.38MB
$ ./release/linux-amd64-server -addr 12080 -quicAddr 443 -certFile /root/openssl_crt/tls.crt -keyFile /root/openssl_crt/tls.key -id id1 -secret secret1
$ ./release/linux-amd64-client -local http://127.0.0.1:80 -remote quic://id1.example.com:443 -remoteCertInsecure -id id1 -secret secret1
$ wrk -c 100 -d 30s -t 10 http://id1.example.com:12080/
Running 30s test @ http://id1.example.com:12080/
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.84ms 6.75ms 168.93ms 98.47%
Req/Sec 9.33k 2.13k 22.86k 78.54%
2787908 requests in 30.10s, 342.98MB read
Requests/sec: 92622.63
Transfer/sec: 11.39MB
$ ./frps -c ./frps.toml
$ ./frpc -c ./frpc.toml
$ wrk -c 100 -d 30s -t 10 http://id1.example.com:12080/
Running 30s test @ http://id1.example.com:12080/
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 2.95ms 3.74ms 136.09ms 91.10%
Req/Sec 4.16k 1.22k 12.86k 87.85%
1243103 requests in 30.07s, 152.93MB read
Requests/sec: 41334.52
Transfer/sec: 5.09MB
More container image information can be obtained from https://github.com/ao-space/gt/pkgs/container/gt.
docker pull ghcr.io/ao-space/gt:server-dev
docker pull ghcr.io/ao-space/gt:client-dev
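One possible way to run the server image (a sketch; it assumes the image's entrypoint is the server binary, so the flags shown in this document can be appended directly):
docker run -d --restart always --net host --name gt-server ghcr.io/ao-space/gt:server-dev -addr 8080 -id id1 -secret secret1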
Install the compilation dependencies (on Debian/Ubuntu):
apt-get update
apt-get install make git gn ninja-build python3 python3-pip libgtk-3-dev gcc-aarch64-linux-gnu g++-aarch64-linux-gnu gcc-x86-64-linux-gnu g++-x86-64-linux-gnu -y
You can obtain WebRTC either from the mirror (the default below) or from the official source, and then compile GT:
- Get code
git clone <url>
cd <folder>
- Compile
make release
The executable files are in the release directory.
- Get code
git clone <url>
cd <folder>
- Obtain WebRTC from the official source
mkdir -p dep/_google-webrtc
cd dep/_google-webrtc
git clone https://webrtc.googlesource.com/src
Then follow the steps in this link to check out the build toolchain and many dependencies.
- Compile
WITH_OFFICIAL_WEBRTC=1 make release
The executable files are in the release directory.
You can obtain WebRTC either from the mirror (the default below) or from the official source, and then compile GT:
- Get code
git clone <url>
cd <folder>
- Compile
make docker_release_linux_amd64 # docker_release_linux_arm64
The executable files are in the release directory.
- Get code
git clone <url>
cd <folder>
- Obtain WebRTC from the official source
mkdir -p dep/_google-webrtc
cd dep/_google-webrtc
git clone https://webrtc.googlesource.com/src
Then follow the steps in this link to check out the build toolchain and many dependencies.
- Compile
WITH_OFFICIAL_WEBRTC=1 make docker_release_linux_amd64 # docker_release_linux_arm64
The executable files are in the release directory.
- Add web management functionality
- Support QUIC protocol and BBR congestion algorithm
- Support configuring P2P connections to forward data to multiple services
- Authentication support for public and private keys
We highly welcome contributions to this project. Here are some guiding principles and recommendations to help you get involved:
The best way to contribute is by submitting code. Before submitting code, please ensure you have downloaded and are familiar with the project codebase, and that your code follows the below guidelines:
- Code should be as clean and minimal as possible while being maintainable and extensible.
- Code should follow the project's naming conventions to ensure consistency.
- Code should follow the project's style guide by referencing existing code in the codebase.
To submit code to the project, you can:
- Fork the project on GitHub
- Clone your fork locally
- Make your changes/improvements locally
- Ensure your changes are tested and do not break existing functionality
- Commit your changes and open a new pull request
We place strong emphasis on code quality, so submitted code should meet the following requirements:
- Code should be thoroughly tested to ensure correctness and stability
- Code should follow good design principles and best practices
- Code should match the stated purpose of your contribution as closely as possible
Before submitting code, ensure you provide meaningful and detailed commit messages. This helps us better understand your contribution and merge it more quickly.
Commit messages should include:
- Purpose/reason for the code contribution
- What the code contribution includes/changes
- Optional: How to test the code contribution/results
Messages should be clear and follow conventions set in the project codebase.
If you encounter issues or bugs in the project, feel free to submit issue reports. Before reporting, ensure you have fully investigated and tested the issue, and provide:
- Description of observed behavior
- Context/conditions under which the issue occurs
- Relevant background/contextual information
- Description of expected behavior
- Optional: Screenshots or error output
Reports should be clear and follow conventions set in the project codebase.
If you want to suggest adding new features or capabilities, feel free to submit feature requests. Before submitting, ensure you understand the project history/status, and provide:
- Description of suggested feature/capability
- Purpose/intent of the feature
- Optional: Suggested implementation approach(es)
Requests should be clear and follow conventions set in the project codebase.
Lastly, thank you for contributing to this project. We welcome all forms of contribution including but not limited to code contribution, issue reporting, feature requests, documentation writing and more. We believe with your help, this project will become more robust and powerful.
Thanks to the following individuals for contributing to the project: