
Support provisioning using docker exec #4686

Open
clofresh opened this issue Jan 15, 2016 · 14 comments

@clofresh

Instead of requiring an ssh connection to run a provisioner on a docker container, it would be nice to just do a docker exec so that we don't need to set up an ssh daemon on the container.

(I know I know, I'm not supposed to run a provisioner on a docker image, configuration should be done at build time. But I'm trying to mirror my prod installation which isn't docker)

@apparentlymart
Contributor

Would a hypothetical new docker-exec provisioner suit your use-case?

resource "docker_container" "foo" {
    // ...

    provisioner "docker-exec" {
        inline = [
            "echo hello world"
        ]
    }
}

@clofresh
Author

Yep that'd work!

@apparentlymart apparentlymart changed the title add a docker exec connection for provisioners Support provisioning using docker exec Jan 17, 2016
@loicalbertin
Contributor

I think it would be a terrific feature!

But wouldn't a hypothetical new docker connection type be a better solution? It could be used by both the remote-exec and file provisioners, allowing files to be uploaded to the container as well.

I had a look at the docker provider code; it uses https://github.com/fsouza/go-dockerclient as its Docker client. This library seems to support both file uploads and command execution.

What do you think about this?
Is this in line with the concept of connection?
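
For illustration, here is a minimal sketch of how such a docker connection type might be used with the existing file and remote-exec provisioners. This is entirely hypothetical: neither the "docker" connection type nor the container_name attribute exists today; they are invented for this sketch.

```hcl
resource "docker_container" "foo" {
    // ...

    // Hypothetical: "docker" is not a real connection type, and
    // "container_name" is an invented attribute for illustration.
    connection {
        type           = "docker"
        container_name = self.name
    }

    // With a docker connection, the existing file provisioner could
    // upload into the container without an SSH daemon.
    provisioner "file" {
        source      = "conf/app.conf"
        destination = "/etc/app/app.conf"
    }

    provisioner "remote-exec" {
        inline = [
            "echo hello world"
        ]
    }
}
```
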

@loicalbertin
Contributor

HashiCorp folks working on Terraform (@phinze, @mitchellh, @catsby, @jen20, ...), what do you think about this idea of a new docker connection type?

Thanks in advance for your feedback.
Loïc

@loicalbertin
Contributor

@jen20 I'm interested in contributing such a feature, but I'd like to discuss it a bit first, especially to check whether a docker connection is actually the right way to implement this.

@richard-senior

What happened to this?
It's just easier to use docker machine and docker swarm? No need for HashiCorp.
Hmmm? Get your fingers out.

@mesea-mms

Would a hypothetical new docker-exec provisioner suit your use-case?

resource "docker_container" "foo" {
    // ...

    provisioner "docker-exec" {
        inline = [
            "echo hello world"
        ]
    }
}

Is this going to be implemented? It's a great idea.

@mjsir911

👍 for a docker connection type. Either working against existing containers or being able to create new containers based off of an image would be appreciated. Same with Kubernetes pods.

@apparentlymart
Contributor

apparentlymart commented Jun 24, 2024

The concept of provisioners has since emerged as largely a mistake: they don't really do anything that a managed resource type can't do and Terraform can't track them well because they are not stateful, so Terraform ends up having to make worst-case assumptions like that the failure of any provisioner means that the entire resource object is damaged ("tainted") and therefore needs replacing. The current provisioners remain largely for backward compatibility and because they have only minimal dependencies in the Terraform codebase so they don't cause too many maintenance headaches.

While I don't intend this comment as a "no, absolutely not, never", I find it unlikely that what this issue suggested would be implemented exactly as described. Instead, this is something I would suggest to implement as a new managed resource type in a provider, which allows specifying something to execute in Docker both during its create and during its delete actions. It could also potentially allow executing something on update, but that's typically harder to design because it's unclear what "update" means for an object representing arbitrary imperative actions.

That means that the docker dependencies only need to be downloaded for those who choose to use that particular provider, and that Terraform can track (in its usual way) whether the action has already been taken, propose to replace it when needed, etc.
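
To make the managed-resource suggestion concrete, here is a rough sketch of what such a resource type could look like. Everything here is hypothetical: the docker_exec resource type and its create_command/destroy_command attributes do not exist in any provider today and are invented purely to illustrate the shape of the idea.

```hcl
// Hypothetical resource type: no provider currently offers this.
// The attribute names are invented for illustration only.
resource "docker_exec" "setup" {
    container = docker_container.foo.id

    // Run inside the container when this resource is created.
    create_command = ["sh", "-c", "echo hello world"]

    // Run inside the container when this resource is destroyed,
    // mirroring the create/delete actions described above.
    destroy_command = ["sh", "-c", "echo goodbye"]
}
```

Because this would be a managed resource, Terraform could track in state whether the command had already run and propose replacement when the configuration changes, instead of relying on the taint-on-failure behavior of provisioners.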

@tregubovav-dev

Docker/Kubernetes/Linux Containers and many hypervisors support APIs to manipulate files and execute commands inside a container/VM. I think the best way is to extend the provisioner connection block with additional connection types. A provider could export the connection types it supports for provisioners and provide connectivity to the container. For example:

  • The Docker provider could export connection type docker
  • The Kubernetes provider could export connection type kube
  • The LXD provider could export connection type lxd
  • The Incus provider could export connection type incus
  • etc.
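
As a sketch of the Kubernetes variant of this idea (again hypothetical: the "kube" connection type and its attributes are invented here, not part of any released provider):

```hcl
resource "kubernetes_pod" "app" {
    // ...

    // Hypothetical: "kube" is not a real connection type; the
    // "namespace" and "pod" attributes are invented for illustration.
    connection {
        type      = "kube"
        namespace = "default"
        pod       = self.metadata[0].name
    }

    provisioner "remote-exec" {
        inline = [
            "echo hello from the pod"
        ]
    }
}
```
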

@apparentlymart
Contributor

The "communicator" abstraction (which is what connection blocks are configuring) is poorly specified and already very strained from a design standpoint. It was originally designed only for SSH and had WinRM retrofitted in a clumsy way, where the connection content gets decoded by different code depending on the type but is nonetheless expected to follow the same schema in both cases. I don't think that abstraction has any future; it is preserved primarily for backward compatibility.

For a system that has a reasonable API for writing a file into it, our current best practice is to have the provider for that system offer a managed resource type representing a file in that system, such as local_file for the local filesystem, aws_s3_object for Amazon S3, and so forth.
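
For instance, the existing local_file resource type (from the hashicorp/local provider) already follows this pattern: the file is a tracked resource rather than a side effect of a provisioner.

```hcl
// Real resource type from the hashicorp/local provider:
// the file's content and path are tracked in state, so Terraform
// can plan and apply changes to it like any other resource.
resource "local_file" "greeting" {
    filename = "${path.module}/greeting.txt"
    content  = "hello world\n"
}
```
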

This allows each system to tailor the resource type schema to suit the capabilities of the remote system, rather than trying to place everything behind an unnecessary abstraction that is a poor fit for some systems. It's an especially poor fit for systems that require additional configuration beyond a hostname to connect to and SSH-like credentials, because those details are a fixed part of the connection block schema that all communicators must use.

There isn't yet an SSH provider for writing files over SSH or SFTP, but that's only because we already have the legacy provisioner/communicator mechanism and so there's not been any strong need for it. We are intending to build an SSH provider for #8367 and once that exists it would be a good home for an ssh_scp_file and/or ssh_sftp_file resource type that would be the new recommended way to represent a file written over SSH.

There is already a Kubernetes provider, but I don't know if it exposes the ability to write files into a container. If it doesn't then that seems like a reasonable feature request for that provider.

I'm not seeing any significant benefit to adding "file writer" or "command runner" as first-class concepts for providers, since resources are already a broad enough abstraction to encompass both, and already have considerable design investment to integrate them well into Terraform's plan/apply workflow, whereas the provisioner features have been intentionally neglected for many years because the design of that concept is a poor fit for everything else Terraform does. If we do add something new in this area, I expect that it will be a better-designed replacement for provisioners/communicators, rather than an evolution of that design.

@tregubovav-dev

Hello Martin,
It's good to see HashiCorp's vision for the provisioner functionality. Please state in the Terraform language documentation that the provisioner functionality is obsolete and should be used only for compatibility purposes.
Based on your comment, it appears that we, as customers, need to take the initiative and ask provider maintainers/developers to implement the corresponding functionality, correct?

@apparentlymart
Contributor

apparentlymart commented Jun 26, 2024

The current recommendation is that provisioners are a last resort, but they cannot be removed during the 1.x series because they are protected by compatibility promises. The Terraform team intends to preserve the current behaviors but to not change them, as described in the Provisioners section of the Terraform v1.x compatibility promises. There is no reason you cannot continue using the functionality that's already present if it already meets your needs.

If you want any new functionality that is related to an external system that is integrated with Terraform using a Terraform provider (which includes both Docker and Kubernetes) then yes, the appropriate place to record a feature request for any new functionality related to that external system is in the GitHub repository for that system's provider. If there is currently no such provider (as is the case for SSH) then you could open a feature request for such a provider to exist in this repository, but as I mentioned we are already intending to introduce an SSH provider as part of another project so there is no need to open a separate feature request for that one.

We do not intend to add any new target-platform-specific functionality to Terraform Core, because Terraform Core is supposed to be a target-agnostic runtime engine that integrates with other systems using providers. (Existing integrations are retained for backward compatibility but are likely to be deprecated in favor of provider-defined functionality at some point.)

@tregubovav-dev
Copy link

Thank you, Martin, for the clear explanation!
