Fix telepresence connect confusion caused by /.dockerenv file
A `/.dockerenv` file will be present when running in a GitHub Codespaces
environment. That doesn't mean that telepresence cannot use docker, or
that the root daemon shouldn't start.

Signed-off-by: Thomas Hallgren <thomas@tada.se>
thallgren committed Dec 11, 2024
1 parent 1da2046 commit 18245cf
Showing 9 changed files with 42 additions and 24 deletions.
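
The root cause is file-based container detection: GitHub Codespaces creates a `/.dockerenv` file even though the workspace can run Docker and host a root daemon. A minimal sketch of that kind of probe, assuming a stat of `/.dockerenv` cached at start-up (an illustration, not the actual Telepresence source):

```go
package proc

import "os"

// runningInContainer caches a /.dockerenv probe at start-up. GitHub Codespaces
// also creates /.dockerenv, so on its own this heuristic yields a false
// positive there; this commit adds a setter that lets start-up code correct it.
var runningInContainer = func() bool {
	_, err := os.Stat("/.dockerenv")
	return err == nil
}()

// RunningInContainer reports whether this process appears to run in a container.
func RunningInContainer() bool {
	return runningInContainer
}
```
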
5 changes: 5 additions & 0 deletions CHANGELOG.yml
@@ -203,6 +203,11 @@ items:
The ability to collect trace has been removed along with the `telepresence gather-traces` and
`telepresence upload-traces` commands. The underlying code was complex and has not been well maintained since
its inception in 2022. We have received no feedback on it and seen no indication that it has ever been used.
- type: bugfix
title: Fix telepresence connect confusion caused by /.dockerenv file
body: >-
A `/.dockerenv` file will be present when running in a GitHub Codespaces environment. That doesn't mean that
telepresence cannot use docker, or that the root daemon shouldn't start.
- type: bugfix
title: Cap timeouts.connectivityCheck at 5 seconds.
body: >-
6 changes: 6 additions & 0 deletions docs/release-notes.md
@@ -137,6 +137,12 @@ There's no need for two configmaps that store configuration data for the traffic
The ability to collect trace has been removed along with the `telepresence gather-traces` and `telepresence upload-traces` commands. The underlying code was complex and has not been well maintained since its inception in 2022. We have received no feedback on it and seen no indication that it has ever been used.
</div>

## <div style="display:flex;"><img src="images/bugfix.png" alt="bugfix" style="width:30px;height:fit-content;"/><div style="display:flex;margin-left:7px;">Fix telepresence connect confusion caused by /.dockerenv file</div></div>
<div style="margin-left: 15px">

A `/.dockerenv` file will be present when running in a GitHub Codespaces environment. That doesn't mean that telepresence cannot use docker, or that the root daemon shouldn't start.
</div>

## <div style="display:flex;"><img src="images/bugfix.png" alt="bugfix" style="width:30px;height:fit-content;"/><div style="display:flex;margin-left:7px;">Cap timeouts.connectivityCheck at 5 seconds.</div></div>
<div style="margin-left: 15px">

4 changes: 4 additions & 0 deletions docs/release-notes.mdx
@@ -101,6 +101,10 @@ A default can still be explicitly defined using the `config.intercept.defaultPor
<Title type="change">Tracing was removed.</Title>
<Body>The ability to collect trace has been removed along with the `telepresence gather-traces` and `telepresence upload-traces` commands. The underlying code was complex and has not been well maintained since its inception in 2022. We have received no feedback on it and seen no indication that it has ever been used.</Body>
</Note>
<Note>
<Title type="bugfix">Fix telepresence connect confusion caused by /.dockerenv file</Title>
<Body>A `/.dockerenv` file will be present when running in a GitHub Codespaces environment. That doesn't mean that telepresence cannot use docker, or that the root daemon shouldn't start.</Body>
</Note>
<Note>
<Title type="bugfix">Cap timeouts.connectivityCheck at 5 seconds.</Title>
<Body>The timeout value of `timeouts.connectivityCheck` is used when checking if a cluster is already reachable without Telepresence setting up an additional network route. If it is, this timeout should be high enough to cover the delay of establishing a connection. If this delay is higher than one second, then chances are very low that the cluster is already reachable, and even if it is, all access to it will be very slow. In such cases, Telepresence will create its own network interface and perform its own tunneling.
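
The capped check could look like the following minimal sketch, assuming a plain TCP probe of the API server (imports `context`, `net`, and `time`; names and structure are illustrative, not Telepresence's implementation):

```go
// reachableWithoutTunnel reports whether the cluster answers a plain TCP dial
// within the connectivity-check timeout, capped at 5 seconds per this fix.
func reachableWithoutTunnel(ctx context.Context, apiServerAddr string, timeout time.Duration) bool {
	if timeout > 5*time.Second {
		timeout = 5 * time.Second // longer waits almost always mean "not reachable"
	}
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	conn, err := (&net.Dialer{}).DialContext(ctx, "tcp", apiServerAddr)
	if err != nil {
		return false
	}
	_ = conn.Close()
	return true
}
```
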
7 changes: 4 additions & 3 deletions pkg/client/cli/cmd/version.go
@@ -19,6 +19,7 @@ import (
"github.com/telepresenceio/telepresence/v2/pkg/client/cli/daemon"
"github.com/telepresenceio/telepresence/v2/pkg/client/socket"
"github.com/telepresenceio/telepresence/v2/pkg/ioutil"
"github.com/telepresenceio/telepresence/v2/pkg/proc"
)

func version() *cobra.Command {
@@ -37,13 +38,13 @@ func version() *cobra.Command {
}

func addDaemonVersions(ctx context.Context, kvf *ioutil.KeyValueFormatter) {
remote := false
hasRootDaemon := true
userD := daemon.GetUserClient(ctx)
if userD != nil {
remote = userD.Containerized()
hasRootDaemon = !(proc.IsAdmin() || userD.Containerized())
}

if !remote {
if hasRootDaemon {
version, err := daemonVersion(ctx)
switch {
case err == nil:
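
The rewrite encodes where a separate root daemon is expected to exist; restated as a predicate (a sketch built from the names in the hunk above, not a copy of the function):

```go
// expectRootDaemon: a separate root daemon only runs when the user daemon sits
// on the host without admin rights. With admin rights the network driver is
// embedded in the user daemon, and a containerized daemon handles networking
// inside its own container.
func expectRootDaemon(isAdmin, containerized bool) bool {
	return !(isAdmin || containerized)
}
```
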
21 changes: 7 additions & 14 deletions pkg/client/cli/connect/connector.go
@@ -105,7 +105,7 @@ func quitDockerDaemons(ctx context.Context) {
func ExistingDaemon(ctx context.Context, info *daemon.Info) (context.Context, error) {
var err error
var conn *grpc.ClientConn
if info.InDocker && !proc.RunningInContainer() {
if info.InDocker {
// The host relies on the daemon having exposed a port on localhost.
// We must use an IP here to avoid an IPv6 zone being picked up and incorrectly
// parsed by the gRPC DNS resolver. See https://github.com/grpc/grpc-go/issues/7882
@@ -200,21 +200,20 @@ func DiscoverDaemon(ctx context.Context, match *regexp.Regexp, daemonID *daemon.

func launchConnectorDaemon(ctx context.Context, connectorDaemon string, required bool) (context.Context, error) {
cr := daemon.GetRequest(ctx)
cliInContainer := proc.RunningInContainer()
daemonID, err := daemon.IdentifierFromFlags(ctx, cr.Name, cr.KubeFlags, cr.KubeconfigData, cr.Docker || cliInContainer)
daemonID, err := daemon.IdentifierFromFlags(ctx, cr.Name, cr.KubeFlags, cr.KubeconfigData, cr.Docker)
if err != nil {
return ctx, err
}

// Try dialing the host daemon using the well known socket.
// Try dialing the host daemon using the well-known socket.
ctx, err = DiscoverDaemon(ctx, cr.Use, daemonID)
if err == nil {
ud := daemon.GetUserClient(ctx)
if ud.Containerized() && !cliInContainer {
if ud.Containerized() {
ctx = docker.EnableClient(ctx)
cr.Docker = true
}
if ud.Containerized() == (cr.Docker || cliInContainer) {
if ud.Containerized() == cr.Docker {
return ctx, nil
}
// A daemon running on the host does not fulfill a request for a containerized daemon. They can
@@ -237,7 +236,7 @@ func launchConnectorDaemon(ctx context.Context, connectorDaemon string, required
}

var conn *grpc.ClientConn
if cr.Docker && !cliInContainer {
if cr.Docker {
// Ensure that the logfile is present before the daemon starts so that it isn't created with
// permissions from the docker container.
logDir := filelocation.AppUserLogDir(ctx)
@@ -261,17 +260,11 @@ func launchConnectorDaemon(ctx context.Context, connectorDaemon string, required
}
if proc.IsAdmin() {
// No use having multiple daemons when running as root.
hn, err := os.Hostname()
if err != nil {
hn = "unknown"
}
args = append(args, "--embed-network")
args = append(args, "--name", "docker-"+hn)
}
dlog.Debugf(ctx, "Creating daemon info file %s (runs on host, or both CLI and daemon run in container)", daemonID.Name)
err = daemon.SaveInfo(ctx,
&daemon.Info{
InDocker: cliInContainer,
DaemonPort: 0,
Name: daemonID.Name,
KubeContext: daemonID.KubeContext,
@@ -372,7 +365,7 @@ func EnsureSession(ctx context.Context, useLine string, required bool) (context.
func connectSession(ctx context.Context, useLine string, userD daemon.UserClient, request *daemon.Request, required bool) (*daemon.Session, error) {
var ci *connector.ConnectInfo
var err error
if userD.Containerized() && !proc.RunningInContainer() {
if userD.Containerized() {
patcher.AnnotateConnectRequest(&request.ConnectRequest, docker.TpCache, userD.DaemonID().KubeContext)
}
session := func(ci *connector.ConnectInfo, started bool) *daemon.Session {
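
The first hunk above also documents a subtle dialing detail: using a literal IP keeps grpc-go's DNS resolver from picking up and mis-parsing an IPv6 zone (grpc/grpc-go#7882). A sketch of such a dial (imports `fmt`, `google.golang.org/grpc`, and `google.golang.org/grpc/credentials/insecure`; the constructor and dial options are assumptions, not the code above):

```go
// dialContainerizedDaemon dials a daemon that has exposed its gRPC port on
// localhost, using a literal IPv4 address so no IPv6 zone can be picked up
// and mis-parsed by the resolver.
func dialContainerizedDaemon(port int) (*grpc.ClientConn, error) {
	addr := fmt.Sprintf("127.0.0.1:%d", port)
	return grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
}
```
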
12 changes: 9 additions & 3 deletions pkg/client/cli/main.go
@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"os"
"slices"

"github.com/spf13/cobra"

@@ -30,15 +31,20 @@ func InitContext(ctx context.Context) context.Context {
ctx = client.WithEnv(ctx, env)
switch client.ProcessName() {
case userd.ProcessName:
client.DisplayName = "OSS User Daemon"
if proc.RunningInContainer() {
client.DisplayName = "OSS Daemon in container"
} else {
client.DisplayName = "OSS User Daemon"
if slices.Contains(os.Args, "--embed-network") {
client.DisplayName = "OSS Daemon in container"
} else {
// False positive, likely due to a /.dockerenv file in Codespaces
proc.SetRunningInContainer(false)
}
}
ctx = userd.WithNewServiceFunc(ctx, userDaemon.NewService)
ctx = userd.WithNewSessionFunc(ctx, trafficmgr.NewSession)
case rootd.ProcessName:
client.DisplayName = "OSS Root Daemon"
proc.SetRunningInContainer(false) // We never start the root daemon as a container.
ctx = rootd.WithNewServiceFunc(ctx, rootd.NewService)
ctx = rootd.WithNewSessionFunc(ctx, rootd.NewSession)
default:
4 changes: 0 additions & 4 deletions pkg/client/docker/daemon.go
@@ -4,7 +4,6 @@ package docker
import (
"bytes"
"context"
"errors"
"fmt"
"net"
"net/netip"
@@ -328,9 +327,6 @@ func handleLocalK8s(ctx context.Context, daemonID *daemon.Identifier, config *ap
// options DaemonOptions and DaemonArgs to start the image, and finally connectDaemon to connect to it. A
// successful start yields a cache.Info entry in the cache.
func LaunchDaemon(ctx context.Context, daemonID *daemon.Identifier) (conn *grpc.ClientConn, err error) {
if proc.RunningInContainer() {
return nil, errors.New("unable to start a docker container from within a container")
}
image := ClientImage(ctx)
if err = PullImage(ctx, image); err != nil {
return nil, err
5 changes: 5 additions & 0 deletions pkg/proc/docker_linux.go
@@ -22,6 +22,11 @@ func RunningInContainer() bool {
return runningInContainer
}

// SetRunningInContainer forces the runningInContainer state.
func SetRunningInContainer(flag bool) {
runningInContainer = flag
}

func AppendOSSpecificContainerOpts(ctx context.Context, opts []string) ([]string, error) {
if RunningInWSL() {
// Using host.docker.internal:host-gateway won't work for the kubeauth process, because Windows Docker Desktop
2 changes: 2 additions & 0 deletions pkg/proc/docker_other.go
@@ -8,6 +8,8 @@ import "context"
func RunningInContainer() bool {
return false
}
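
// SetRunningInContainer is a no-op here; RunningInContainer is always false on this platform.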
func SetRunningInContainer(_ bool) {
}

func AppendOSSpecificContainerOpts(ctx context.Context, opts []string) ([]string, error) {
return opts, nil
