feat(transport): general TLS ClientHello fragmentation StreamDialer #133

Merged · 23 commits merged into main on Nov 27, 2023

Conversation

Contributor

@jyyi1 jyyi1 commented Nov 9, 2023

We introduce a new StreamDialer designed for TLS ClientHello fragmentation, as discussed in Circumventing the GFW with TLS Record Fragmentation. This general-purpose dialer allows for implementing custom splitting logic, enabling diverse fragmentation strategies.
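The PR's actual API lives in the linked code; as an illustrative sketch of the underlying idea only (not the merged implementation), splitting one TLS record in two means duplicating the 5-byte record header and fixing up both length fields:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const recordHeaderLen = 5 // type (1) + version (2) + length (2)

// splitRecord splits a single TLS record into two records at payload
// offset n, copying the header and rewriting the two length fields.
// This is an illustrative sketch, not the dialer's actual code.
func splitRecord(record []byte, n int) []byte {
	payload := record[recordHeaderLen:]
	out := make([]byte, 0, len(record)+recordHeaderLen)

	// First record: original type/version with adjusted length, then n payload bytes.
	out = append(out, record[:3]...)
	out = binary.BigEndian.AppendUint16(out, uint16(n))
	out = append(out, payload[:n]...)

	// Second record: same type/version with the remaining length.
	out = append(out, record[:3]...)
	out = binary.BigEndian.AppendUint16(out, uint16(len(payload)-n))
	out = append(out, payload[n:]...)
	return out
}

func main() {
	// A fake 16-byte ClientHello payload inside a TLS handshake record.
	record := []byte{0x16, 0x03, 0x01, 0x00, 0x10}
	for i := 0; i < 16; i++ {
		record = append(record, byte(i))
	}
	split := splitRecord(record, 5)
	fmt.Println(len(split)) // original length plus one extra 5-byte header
}
```

A custom splitting strategy then reduces to choosing `n` per ClientHello, which is what the general-purpose dialer's callback enables.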

Here is a screenshot that demonstrates the impact of using this StreamDialer to always split at the first 5 bytes of the Client Hello packet:

[screenshot omitted]

@jyyi1 jyyi1 requested a review from fortuna November 16, 2023 18:34
@fortuna fortuna marked this pull request as ready for review November 16, 2023 21:42
jyyi1 and others added 5 commits November 17, 2023 13:47
Co-authored-by: Vinicius Fortuna <fortuna@users.noreply.github.com>
@@ -109,6 +110,6 @@ func appendSOCKS5Address(b []byte, address string) ([]byte, error) {
b = append(b, byte(len(host)))
b = append(b, host...)
}
-	b = append(b, byte(portNum>>8), byte(portNum))
+	b = binary.BigEndian.AppendUint16(b, uint16(portNum))
Contributor Author

This is to resolve the CodeQL error: https://github.com/Jigsaw-Code/outline-sdk/pull/133/checks?check_run_id=18912663629

(Although the standard library's AppendUint16 uses the same code to append the data.)

@jyyi1 jyyi1 requested a review from fortuna November 22, 2023 00:31
m, e := r.Read(b.data[len(b.data) : cap(b.data)-b.padding])
b.data = b.data[:len(b.data)+m]
n += int64(m)
err = e
Contributor

Why aren't you returning on error?

Contributor Author

Because according to the doc https://pkg.go.dev/io#Reader:

Callers should always process the n > 0 bytes returned before considering the error err.

In our case, "processing" means determining whether we have read a full header or a full record. After that, we jump out of the loop when err != nil (see the condition in the for loop).

}

for len(b.data) < cap(b.data)-b.padding && err == nil {
m, e := r.Read(b.data[len(b.data) : cap(b.data)-b.padding])
Contributor

I find the padding and the conflation of the state before and after the header a bit hard to understand.

Consider separating the states and getting rid of the padding variable. For example, this can handle the state before we have the header more clearly:

for {
  // Still waiting to finish the header
  if len(b.data) < recordHeaderLen {
    // Makes it clear we are reading up to recordHeaderLen. No padding.
    m, err := r.Read(b.data[len(b.data):recordHeaderLen])
    b.data = b.data[:len(b.data)+m]
    n += int64(m)
    if err != nil {
      return n, err // could return io.ErrUnexpectedEOF on EOF
    }
    if len(b.data) == recordHeaderLen {
      hdr, err := validateTLSRecord(b.data)
      if err != nil {
        b.validationStatus = err
        return n, err
      }
      buf := make([]byte, 0, recordHeaderLen+hdr.PayloadLen())
      b.data = append(buf, b.data...)
    }
    continue
  }
  // State after we have the header
  if b.validationStatus != nil {
    return n, b.validationStatus
  }
  ...
}

Contributor Author

Refactored it into 3 stages: appending header; validation; appending payload.

copy(hdr2, hdr1)
hdr1.SetPayloadLen(uint16(split))
hdr2.SetPayloadLen(uint16(len(content) - split))
w.record = bytes.NewBuffer(splitted)
Contributor

Why don't you write to the base writer here?

Contributor Author (@jyyi1, Nov 22, 2023)

A single Write might fail, so you still need to track the remaining bytes somewhere in w and write them to w.base again in subsequent w.Write or w.ReadFrom calls issued by the caller.

Since the write-to-w.base logic is spread across several functions (i.e. w.Write, w.ReadFrom, after a split, and after an invalid w.helloBuf without a split), I think it makes more sense to separate the "flush bytes to base" logic from the split and have a dedicated helper do that.


// copyHelloBufToRecord copies w.helloBuf into w.record without allocations.
func (w *clientHelloFragWriter) copyHelloBufToRecord() {
w.record = bytes.NewBuffer(w.helloBuf.Bytes())
Contributor

Why don't you just write to the base writer here?

Contributor Author

Same as above. ☝️

jyyi1 and others added 3 commits November 22, 2023 15:11
Co-authored-by: Vinicius Fortuna <fortuna@users.noreply.github.com>
jyyi1 and others added 2 commits November 22, 2023 15:12
Co-authored-by: Vinicius Fortuna <fortuna@users.noreply.github.com>
@jyyi1 jyyi1 requested a review from fortuna November 22, 2023 23:17
@jyyi1 jyyi1 merged commit 9d8fe6a into main Nov 27, 2023
6 checks passed
@jyyi1 jyyi1 deleted the junyi/tls-frag-func branch November 27, 2023 20:48