
helper/schema feature: nestable resources #57

Closed
phinze opened this issue Jun 8, 2015 · 8 comments
Labels: enhancement (New feature or request), upstream-terraform

Comments

@phinze

phinze commented Jun 8, 2015

I've talked about this feature on several threads now. Time to centralize the conversation!

Background

The context: some resources are so closely related to each other that they are nearly always considered together: Security Group / Rule, DNS Zone / Record, IAM Group / User, etc.

The legacy implementation: it was common to model these as "sub-resources" in the schema, e.g. Security Group Rules Sub-Resource

The problem: "sub-resources" are bug-prone, difficult for provider implementors to work with, and require that all details be known at resource definition time.

Interim solution: lean on top-level resources for their simplicity in implementation and their flexibility (e.g. Security Group Rule Top-Level Resource)

Remaining problem: top-level resources are clunky and verbose to work with in configs

Proposed Solution

Add a helper/schema feature I'm calling "nestable resources", allowing provider authors to configure places to nest top-level resources into the definition of a related resource.

A sketch of what this might look like from the provider implementation side, using Security Group Rules as an example:

// in SecurityGroup resource definition...
"ingress": &schema.NestedResource{
  ResourceType: "aws_security_group_rule",

  // FixedAttributes are always set on the nested resource
  FixedAttributes: map[string]interface{}{
    "type": "ingress",
  },

  // MappedAttributes define which fields of the parent resource to
  // map into the nested resource
  //    - keys: parent attribute name
  //    - values: nested resource attribute name
  MappedAttributes: map[string]string{
    "id": "security_group_id",
  },
},
// ...

This would make the following two configs equivalent:

resource "aws_security_group" "foo" {
  // ...
  ingress {
    // ...
  }
}
resource "aws_security_group" "foo" {
  // ...
}

resource "aws_security_group_rule" "foo" {
  security_group_id = "${aws_security_group.foo.id}"
  // ...
}
@ojongerius

🙌

@sheerun

sheerun commented Feb 28, 2016

How about letting child resources define dependencies on their parent resources instead? So the following would be possible:

resource "aws_security_group" "foo" {
  // ...
  resource "aws_security_group_rule" {
    // ...
  }
}

Because aws_security_group_rule is nested inside aws_security_group, it knows to set the following defaults:

  • name to "${aws_security_group.foo.name}"
  • depends_on to ["${aws_security_group.foo}"]
  • security_group_id to "${aws_security_group.foo.id}"

Note that aws_security_group doesn't need to know about aws_security_group_rule or any other resource that might nest inside it. The defaults for any nested resource are just name and depends_on.

@blakestoddard

Is there any update available on this? I've got an issue at the moment where I create the bulk of a route table (and its routes) via the aws_route_table resource, but may need to tack on another route if a conditional is passed. It looks like I can't currently do that because you cannot mix aws_route and aws_route_table objects?

@nbering

nbering commented Jul 17, 2017

Concept

I was thinking of something similar to this. It would be nice if some resources could have child resources that are opaque in the configuration but behave somewhat like submodules in the plan.

I actually arrived at the conclusion that nested resources would be beneficial to Terraform after reading The Idea of Lisp on the Practical Developer. It discusses the idea that Lisp macros are simply functions written in Lisp that return Lisp code, thereby providing a fluid extension point for writing additional language features. My thinking is that you could have "macro resources", which behave as a resource - being defined by the plugin and not imported as a configuration - but return multiple sub-resources to be included in the graph, applied in the plan, and stored individually in the state file.

I think the key difference between this idea and that of submodules is that nested resources would be defined by the provider plugin, and not by user configuration.

Configuration Benefits

Examples of where this would be useful are things like assignments and security group rules, where there is both an inline option on the parent resource and a separate resource entity, and the two are not compatible.

State Benefits

Having the separate resource entity as a child resource might allow something like terraform state mv on a sub-resource, which would be helpful for refactors of complex infrastructures without having to remove and re-import resources.

User Experience Benefits

Nested resources as a concept simplify the complex relationships between some resources by allowing "associative" resources to be used interchangeably with config blocks on parent resources.

This would specifically provide a nice bridge from beginner to power user. I have a theory that a beginner would use config blocks attached to the parent resource until they run into a limitation of that configuration style, and then transition to what power users are more likely to use - the separate resource, composed together with submodules.

@nbering

nbering commented Jul 17, 2017

Oh... it would also be kind of neat if the sub-resources did not have to come from the same provider. I.e., you could have a meta-resource like "dns_record" that takes configuration common to a bunch of the DNS providers, plus a property that says which one to use, making it simpler to switch between providers when used in submodules. For example, some of my projects have different DNS providers just because of who my clients work with. That would allow me to share my VM configuration submodule between clients without having to write some hacky thing with a count of zero, etc. I do imagine the implementation of such an idea would require non-computed fields, much like count does now, but would delegate a portion of the graph builder to the provider plugin to determine what sort of resource a particular node is.

@displague

displague commented Sep 17, 2018

If I weren't bound by current limitations and conventions (WIP), I might have approached the Linode Instance resource differently.

I think you would want to be able to access the parent's attributes within these sub-resources (explicitly, and implicitly within plugins for the parent ID). schema.Resource semantics would offer CRUD functions, and the test helpers would provide the means to verify and destroy these sub-resources between tests.

The order of operations the API requires to apply this example schema would be:

  • create bare instance
  • create disk sub-resources (referring to instance ID)
  • create config sub-resources (referring to disk ID)
  • boot instance (referring to config ID)
// The instance can be created with a simple API call
// This automatically creates physical 'config' and 'disk' sub-resources
// TF parent resources *should* have the ability to import child resources.
resource "linode_instance" "simple" {
  region = "us-east"
  type = "g6-nanode-1"
  image = "linode/debian9.1"
}

// .. more complex configurations require more API calls and CRUD operations on 
// REST sub-resources /instance/123/config/456, /instance/123/disk/456
resource "linode_instance" "complex" {
  region = "us-east"
  type = "g6-nanode-1"

  // the instance can be created bare and unpowered, but provisioners need it running
  // creating the disk, via API, requires the instance to be created but not running
  disk "diskA" {
    label = "diskA"
    size = 1000
    image = "linode/debian9.1"
  }

  disk "scratch" {
    count = 3
    label = "scratch${count.index}"
    size = 1000
  }

  config "configA" {
    // ...
    devices = {
      sda = { disk_id = "${self.disk.diskA.id}" }
      sdb = { disk_id = "${self.disk.scratch.0.id}" }
      sdc = { disk_id = "${self.disk.scratch.1.id}" }
      sdd = { disk_id = "${self.disk.scratch.2.id}" }
    }
  }

  config "configsB" {
    label = "configb-${count.index}"
    count = 3
    devices = {
      sda = { disk_id = "${self.disk.diskA.id}" }
      sdb = { disk_id = "${element(self.disk.scratch.*.id, count.index)}" }
    }
  }

  boot_config_id = "${self.config.configsB.2.id}"
}

@paultyng

Going to merge this to #220

0.12 supports "named blocks", and once that is supported in the SDK, it should cover some of the use cases described here. Anything more significant, including handling of resources invoking other resources, is probably better left to upstream Terraform. If the use case for true nesting of real resources is just about code reusability, we can achieve that in other ways.

@ghost

ghost commented Mar 15, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Mar 15, 2020