AG needs PKCS12 format but Terraform only provides PEM support #36
Comments
We are having a similar issue. I suggest the following solution: we could implement a pkcs12 data source in the Terraform TLS provider which takes PEM-encoded certificates and private keys and outputs them as PKCS12. This would allow the functionality to be reused elsewhere. For example, it is also relevant for Azure Key Vault certificates, as they only support PKCS12.
This may be just me, but wouldn't it make more sense to have the ACME provider be able to provide multiple formats? Either way, I'm also interested in this solution.
@dominik-lekse Re. Azure Key Vault certificates only supporting PKCS12: according to https://www.terraform.io/docs/providers/azurerm/r/key_vault_certificate.html#content_type, it does support PEM.
Hey @holderbaum @dominik-lekse @draggeta @Leon99, thanks for opening this issue :) Taking a quick look into this: whilst it's unfortunate that some of the Azure APIs only accept PFX files, I'd agree with @dominik-lekse that this probably belongs in the TLS Provider. Whilst we could potentially add support for this to the ACME Provider, I don't feel like it's a problem specific to that Provider (since being able to convert between certificate types could be useful in other contexts too) - as such, I'm going to transfer this issue over to the TLS Provider. Thanks!
This issue is a duplicate of #29.
@holderbaum Thanks for your code above. I've been experimenting with this, just putting the content in a local_file resource, and you may be able to work around the PKCS12 changing every time by using the lifecycle ignore_changes feature.
The PKCS12 content gets re-generated every time, but it doesn't actually update the resource. If you ever wanted to update it for real, you could taint the resource or remove the lifecycle block. It's not great, but it may work short term until the TLS provider adds the PKCS12 export option.
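A minimal sketch of that workaround, assuming the hashicorp/local provider's local_file resource and a hypothetical data.external.pkcs12 data source that yields the base64-encoded archive:

resource "local_file" "pfx" {
  # hypothetical source of the regenerated PKCS12 archive (base64-encoded)
  content_base64 = data.external.pkcs12.result["pfx_base64"]
  filename       = "${path.module}/cert.pfx"

  lifecycle {
    # the archive bytes differ on every run even when the inner PEMs are
    # unchanged, so ignore the diff; taint this resource to force an update
    ignore_changes = [content_base64]
  }
}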
To me this feels similar to other situations in Terraform where we've established a standard way of representing a particular type of data in Terraform configuration and expected providers to convert to the upstream format they need, since that allows different systems to compose better. This is the same as the convention that timestamps should always be in RFC 3339 format, and providers are responsible for converting from that to whatever timestamp format the remote API expects, so that Terraform configurations do not need to be littered with conversion functions, and any resource type that produces a timestamp only needs to produce one format.

In other words, I think it's better to retain the convention that private keys and certificates in Terraform are always in PEM format, and that any provider interfacing with an upstream system that doesn't support PEM must itself handle the conversion, accepting PEM certificates as input and producing PEM certificates as output. By standardizing on one format, we know that everything can compose together without needing to worry about format selections.

Since a PEM certificate contains only the certificate data, with the private key supplied out-of-band, this means also that providers accepting certificates and their associated private keys will need to have two separate arguments, rather than packaging them together into one. This composes better with …

Note also that the Terraform language only supports Unicode strings, so using binary formats like PKCS12 would require base64-encoding them. In Terraform 0.11, binary data tends to just get silently corrupted in certain cases because the unicode-only requirement is not consistently enforced. In 0.12 this will be checked explicitly, producing errors on any attempt to process binary data in strings rather than silently corrupting it. PEM certificates, since they are ASCII-only, can be used directly inside strings and converted to a binary format like PKCS12 just in time to be sent to the API in the target provider.
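To illustrate the base64 point (my sketch, not part of the original comment): a PKCS12 archive is typically passed around as a base64-encoded string, for example when handing a .pfx to the azurerm provider's key_vault_certificate resource (0.12+ syntax; the names here are placeholders):

resource "azurerm_key_vault_certificate" "example" {
  name         = "example-cert"
  key_vault_id = azurerm_key_vault.example.id

  certificate {
    # binary .pfx bytes are base64-encoded so they survive Terraform's
    # unicode-only strings
    contents = filebase64("${path.module}/cert.pfx")
    password = var.pfx_password
  }
}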
Would this resolve the use case of uploading certificates (in any valid format) into an Azure Key Vault and utilizing them as a valid data source for maintaining Application Gateway SSL certificates? Since Azure itself does not support this natively (https://feedback.azure.com/forums/217313-networking/suggestions/31089529-support-ssl-certificates-stored-in-key-vault-secre), this would be a great way to handle multiple gateways with identical keys being updated via automation.
While I agree with @apparentlymart that in general, Terraform providers should accept the "standard" format (in this case, PEM) and convert internally if the upstream service provider expects something else, sometimes this isn't sufficient - imagine receiving PEM from a Terraform module but having to store it somewhere else (S3, Consul, a local file), while the service picking it up only understands PKCS12. Instead of everybody implementing their own workaround by manually calling openssl, the TLS provider could offer this conversion itself.

tls_public_key

It already allows calculating public keys by passing in a private key, getting various public key representations (OpenSSH format) and fingerprints. For example, to get the OpenSSH pubkey of a private key provided in PEM format:

data "tls_public_key" "example" {
  private_key_pem = "${file("~/.ssh/id_rsa")}"
}

output "openssh_pubkey" {
  value = "${data.tls_public_key.example.public_key_openssh}"
}

tls_certificate

I don't see much against adding a tls_certificate data source.

Possible arguments

… (With …)

Exported attributes

…

This could be used to add support for converting from PEM to PKCS12 and back in a very unobtrusive manner.

Example

Generate the OpenSSH public key from a PKCS12 file:

# load the pkcs12 file
data "tls_certificate" "as_pkcs12" {
  pkcs12 = "${file("foo.pfx")}"
}

# use the private key from the pkcs12 file
data "tls_public_key" "as_pem" {
  private_key_pem = "${data.tls_certificate.as_pkcs12.private_key_pem}"
}

# output the openssh representation of the corresponding public key
output "foo" {
  value = "${data.tls_public_key.as_pem.public_key_openssh}"
}
Exporting other formats from the … I would like to keep PEM only for inputs to the resources, to reinforce that it is the conventional format, and I'd ask that we also implement support for PEM as input on …
Actually, we can use the ACME provider as a reference; its Terraform resource supports this. The reference link is as follows: https://www.terraform.io/docs/providers/acme/r/certificate.html
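For context (my addition, hedged): as far as I know, the acme_certificate resource can also export the issued certificate as a PKCS12 archive, along the lines of this sketch:

resource "acme_certificate" "example" {
  account_key_pem          = acme_registration.reg.account_key_pem
  common_name              = "www.example.com"
  # password used to protect the generated PKCS12 archive
  certificate_p12_password = var.p12_password
  # dns_challenge configuration omitted for brevity
}

output "pfx_base64" {
  # certificate_p12 is, to my knowledge, the base64-encoded PKCS12 archive
  value     = acme_certificate.example.certificate_p12
  sensitive = true
}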
Has there been any progress on the TLS provider adding tooling to support converting the PEMs to PKCS12, or cleaner workarounds for local code to do the conversion outside of Terraform? I understand what @apparentlymart is saying, but Microsoft has been asked about support of PEM for their application gateways, app services, key vaults and so on, and it's not on the roadmap. Microsoft has been using PKCS12 longer than HashiCorp/Terraform have been around, so it's not a simple or likely change, I think. Additionally, I don't think avoiding this functionality in the TLS provider encourages Microsoft to adopt the PEM format. If anything, it encourages Azure users to not use Terraform for solutions related to certificates. I did look into using the ACME provider, but our customer is using Venafi, so we already have the PEM and need to convert it. We don't need a separate tool to generate the cert. @apparentlymart I saw you mentioned something about PKCS12 being binary and Terraform being Unicode. Is that an actual hard deal breaker, as in it's literally impossible for the TLS provider to offer this functionality? Or are you just saying it introduces some challenges or complexity?
I have created a provider that helps to create PKCS12 from PEM files: https://registry.terraform.io/providers/chilicat/pkcs12/latest/docs/resources/from_pem |
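A minimal usage sketch of that provider, based on my reading of the linked docs (treat the exact argument and attribute names as assumptions):

resource "pkcs12_from_pem" "example" {
  cert_pem        = acme_certificate.example.certificate_pem
  private_key_pem = tls_private_key.example.private_key_pem
  password        = var.pfx_password
}

output "pfx_base64" {
  # result should be the base64-encoded PKCS12 archive
  value     = pkcs12_from_pem.example.result
  sensitive = true
}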
This is the workaround I came up with using a null resource, in my case for converting Cloudflare origin certs (PEM) to PFX:

…

and then you can reference it like this:

…
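The original snippets did not survive in this copy; purely to illustrate the approach described above, here is a sketch of what a null_resource calling openssl could look like (resource names, file names, and the password variable are my placeholders):

resource "null_resource" "pem_to_pfx" {
  # re-run the conversion whenever the input PEM files change
  triggers = {
    cert_sha = filesha256("${path.module}/origin-cert.pem")
    key_sha  = filesha256("${path.module}/origin-key.pem")
  }

  provisioner "local-exec" {
    working_dir = path.module
    command     = "openssl pkcs12 -export -out origin.pfx -inkey origin-key.pem -in origin-cert.pem -passout pass:${var.pfx_password}"
  }
}

# The generated origin.pfx can then be referenced from other resources
# (e.g. via filebase64("${path.module}/origin.pfx")) with a depends_on
# pointing at null_resource.pem_to_pfx so the conversion runs first.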
I would like to have this too. My use case is to import certificates stored as PKCS12 in Azure Key Vault into a Kubernetes cluster as a secret for the nginx ingress (which needs the cert/key as PEM).
Superseded by #205. |
TBH, a really confusing set of threads here -- while all of these are closed, I don't see PFX/PKCS12 support in the Terraform TLS provider. Is it somewhere else?
@cicorias the superseding issue is linked to in the comment immediately before yours, and is still open.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Community Note
Description
To configure the Application Gateway resource with a TLS certificate, a PKCS12 file is required. Unfortunately, all other native Terraform resources (outside Azure) work with the traditional PEM format.
We currently try to provision an AG with a certificate that has been generated using the ACME terraform provider in conjunction with its Azure DNS provider (https://www.terraform.io/docs/providers/acme/dns_providers/azure.html).
The ACME certificate resource (https://www.terraform.io/docs/providers/acme/r/certificate.html) creates a Let's Encrypt certificate successfully and returns its public and private key as PEM.
To get this into the AG, we currently use an external data source that calls openssl:
Terraform
Data Source
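The snippet itself was not preserved in this copy; as an illustration only, something along these lines, where the script name, key names, and resource references are assumptions:

data "external" "pkcs12" {
  program = ["bash", "${path.module}/pem-to-pfx.sh"]

  # passed to the script as a JSON object on stdin
  query = {
    cert_pem = acme_certificate.cert.certificate_pem
    key_pem  = acme_certificate.cert.private_key_pem
    password = var.pfx_password
  }
}

# pem-to-pfx.sh would run something like
#   openssl pkcs12 -export -inkey <key> -in <cert> -passout pass:<password>
# base64-encode the result, and print a JSON object such as
#   {"pfx_base64": "..."} on stdout, as the external data source requires.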
Actual Issue
While this works from a technical perspective, it is far from optimal. The data source is executed on every terraform apply, and the resulting pkcs12 archive is different on every execution, at least on the binary level - even though the inner PEM files stay the same. That means that on every apply the AG "changes" its certificate, which takes roughly 10 minutes.
Do you have any smart ideas on how to make this work better?
Ideally, the Azure resources should use the same certificate format as all other TF resources, which would be PEM.
New or Affected Resource(s)
Thank you very much :)