naming conventions: cross-service integrations #1743
Comments
There is something that "works" with the lambda-event-sources approach, I agree. The main problem I have with it is that in many cases it will pull down many dependencies you don't really need. But I am not sure that's a huge issue to be honest (especially given the single-version-line approach we are now taking).

I think CodePipeline is an example of how far this can go, with classes sprinkled across the entire framework. Initially it felt like a good idea, but I am seeing @skinny85 constantly needing to make modifications across the entire repo, which is not going to be very sustainable as we have more and more maintainers, so I can see the value in moving all of the CodePipeline actions into a single module.

Discoverability is tricky. On one hand, when you know where to look, a single package with all integrations is highly discoverable. On the other hand, the IDE doesn't help you there: you have to already know the package exists before it can suggest anything.
I think the reason for these repo-wide changes is that we were making breaking changes to the API of the Actions. The moment we stabilize the API at 1.0, we will no longer need to (or in fact even be able to) make such changes, and then the separation will be much more beneficial.
I still think it would make sense to bring all the CodePipeline stuff into a single library for discoverability purposes (similar to lambda-event-sources). Any reason why this wouldn't be a more sustainable model?
I can think of 2 reasons against it:

1. I actually think this is backwards. The Actions encode something about the pipeline model owned by the pipeline team. Why should the Lambda team own the class that generates a piece of CodePipeline's JSON model?
2. Today I'm pulling in CodePipeline and all of its dependencies if I want to instantiate a Lambda (regardless of any pipelines). How does that make sense?
It's not part of the CodePipeline JSON. It is 100% owned and maintained by the service team. The CodePipeline team does not have to know anything about that JSON for it to work (see: the ECR Action that was added recently).
You're not. You're just pulling in the codepipeline-api package, which contains only the shared interfaces.
@rix0rrr @skinny85 @RomainMuller I am leaning towards @rix0rrr's proposal to organize all integration types for a service in a central package. The main factor in this decision for me (like most design decisions we have) is to try to put myself in the shoes of a user and ask what their mental model is when they want to find and use those classes, and what the natural name for these types would be. For example, if I define a pipeline, there are a few things I want to be able to do, ideally without leaving my IDE:

- add actions to my pipeline (source, build, deploy, etc.);
- react to pipeline events (e.g. trigger something when the pipeline changes state).
In both those scenarios, I would argue that the mental hierarchy is "pipeline => actions => specific-action", so a central actions package maps directly onto the user's mental model.

This approach also resolves some of the naming issues @rix0rrr raises: inside a dedicated actions package, class names no longer need a disambiguating service prefix or suffix.

I am not very concerned about pulling in all the dependencies with the integration module. We have that today with other inter-module dependencies anyway, and the single version line makes it less of a cost.

One thing we haven't discussed explicitly, but I am assuming it falls out of this decision: does that mean that we now let go of resources acting directly as integration points? Do we want to get rid of that in favor of:

```ts
eventRule.addTarget(new event_targets.LambdaTarget(lambda));
```

Or do we leave it as an option for specific classes to implement the target interface directly? What about our existing convenience methods?
Imho, yes.
We can't, really. It's similar to:

```ts
new LambdaTarget({ on: pipeline.failureEvent, lambda: lambda });
```

Or what we have today.
Can we talk about what the names of these classes should be in languages that don't support qualified imports, like Java, Go (Python as well, I believe), etc.? In the CodePipeline case, should the name be `CodeBuildAction` or `CodePipelineCodeBuildAction`?
The name of the class should be the short, unqualified one. In all languages, types are qualified by the module they reside in. This is why we call the Lambda function class Function and not LambdaFunction: the lambda module already provides the qualification.

When I type the class name, my IDE shows me a list of candidates together with their fully qualified module names. When I pick one of the options (based on the fully qualified name), the IDE will create an import for me that allows me to refer to this class by its short name.

To me this is one of the benefits of this approach. If this class had resided under the service's own module, we would have had to bake a qualifying prefix into the class name itself.
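To make the qualification point concrete, here is a minimal sketch in TypeScript (CDK v1-era package names and a later runtime constant assumed; the same mechanism applies via package names in Java and Python):

```ts
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';

class MyStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);
    // The class is simply 'Function'; the 'lambda.' qualifier comes from the
    // import, so the service name never needs to be baked into the class name.
    new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromInline('exports.handler = async () => {};'),
    });
  }
}
```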
I'm a little worried that the unqualified names will collide or read poorly in languages without qualified imports. However, assuming you want to stick with the central package, here is a middle ground: keep each Action class defined in the package of the service it belongs to, and have the central package re-export all of them.

It combines the strengths of both approaches: the ownership story is clear, discoverability is aided by the central package, while dependency-conscious customers can use the Actions only through the service packages, without pulling in the entire Construct Library along with them.

The reason I think this is fine is that the names in this scheme are redundant anyway. If you assume Java/Go customers will use an unqualified name, the module the class nominally lives in adds no information.

I understand this is currently not possible with JSII, but I believe the advantages are large enough to at least try to make it work. Would love to hear your opinion on this.
We won't be able to call it that. Collisions on the consumer side can always be resolved by the user: even if for some odd reason there were two modules in our framework that have a class with the same name, the user can alias one of the imports.

A dependency-only package with re-exports is not possible in jsii, and adding it only in TypeScript would break a fundamental concept of jsii, namely that all languages get the exact same type system: those re-exported types would be hidden in all languages besides TypeScript, so people would go to the documentation, see a module called "codepipeline-actions", and find no Java classes in it. That's not something we can afford.

It might be possible to consider adding a type-aliasing feature to jsii, but I don't see that happening in the short term, and I am not convinced it's a good idea in general. The constraints we get from jsii have proven quite helpful at keeping our code organization simple and straightforward, and I think that's an important aspect of the framework design.

I don't see the dependency issue as a major concern based on our experience so far, and the ownership story of "polluting" modules with integration classes throughout the entire framework is not clearer to my mind. If anything, I suspect it might create confusion.
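For concreteness, the kind of TypeScript-only re-export file being ruled out here would look something like this (package layout and class names are hypothetical):

```ts
// Hypothetical index.ts of a dependency-only '@aws-cdk/aws-codepipeline-actions'
// package that merely re-exports Action classes owned by the service packages.
export { PipelineBuildAction } from '@aws-cdk/aws-codebuild';
export { PipelineInvokeAction } from '@aws-cdk/aws-lambda';

// jsii cannot translate these aliases, so Java/.NET/Python users would see an
// empty 'codepipeline-actions' module, while TypeScript users see every action.
```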
I don't agree, but I'm willing to agree to disagree. A consequence of this decision is that the Action classes will live outside the packages of the service teams that own their behavior.
To sum up the discussion: we're going with the 'single package for all integrations' approach.
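As a sketch of what that layout looks like for event targets (assuming the API shape of the published `@aws-cdk/aws-events-targets` package):

```ts
import * as events from '@aws-cdk/aws-events';
import * as targets from '@aws-cdk/aws-events-targets';
import * as lambda from '@aws-cdk/aws-lambda';

declare const rule: events.Rule;
declare const fn: lambda.Function;

// All event targets live together in one integrations package; the class
// names the integrated service, while the module names the integration role.
rule.addTarget(new targets.LambdaFunction(fn));
```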
Seems like we settled on a decision here. Can this issue be closed?
Yes, thanks!
The acute need came out of #1646, but this pattern crops up more often, for example in #1663 (TBD).
Problem Statement
We have classes that users can use to link certain services together. Examples are CloudWatch Events Targets, Step Functions Tasks, and Lambda Event Sources, and there are probably more occurrences of this that I can't think of right now.
Usually such an integration is event based, and usually there are 3 entities involved: obviously a source and a target, plus some customization of the event passing between them (specified via additional properties).
Today, our integrations look like this:
CloudWatch Events
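Roughly, the pattern is that the resource itself acts as the rule's target (a sketch assuming the pre-1.0 API shape, where the class was still called EventRule and resources implemented the target interface):

```ts
import * as events from '@aws-cdk/aws-events';
import * as lambda from '@aws-cdk/aws-lambda';

declare const rule: events.EventRule;  // pre-1.0 name, later renamed to Rule
declare const fn: lambda.Function;

// The Lambda function implements the target interface itself, so it is
// passed to the rule directly; no wrapper class is involved.
rule.addTarget(fn);
```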
The configuration becomes super complex when we try to enable interactions between more flexible services, such as ECS, which has various parameters that control the invocation. With the code as written, we even have to split those parameters over two statements, and there's no static or runtime safety in the second statement.
In this case, we would benefit from having everything in the `Ec2EventRuleTarget` class, and we should probably make similar classes for the other service integrations in order to stay consistent across the framework (instead of using the resource directly as the integration point). The question is: what should those classes be named, and in what module should they live?
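A sketch of the contrast; apart from `Ec2EventRuleTarget`, which is named above, every identifier here is illustrative rather than a real API:

```ts
// Hypothetical API sketch; only Ec2EventRuleTarget comes from the text above.
declare const rule: any;             // an events rule
declare const service: any;          // an ECS service acting as the target
declare const cluster: any;
declare const taskDefinition: any;
declare class Ec2EventRuleTarget {
  constructor(props: { cluster: any; taskDefinition: any; taskCount: number });
}

// Today (roughly): the target and its ECS-specific invocation parameters are
// split over two statements, and the second has no static or runtime safety.
rule.addTarget(service);
rule.addTargetProperty('taskCount', 2);  // hypothetical call

// Proposed: one class carrying both the target and its typed invocation
// parameters, validated in a single statement.
rule.addTarget(new Ec2EventRuleTarget({ cluster, taskDefinition, taskCount: 2 }));
```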
Right now, the integrations are implemented in the module of the resource.
Step Functions
To invoke a Lambda, Step Functions used to work like this:
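Presumably something like the following (a reconstruction against the pre-1.0 API, where a Task state's resource resolved to the function's ARN):

```ts
import * as cdk from '@aws-cdk/core';
import * as sfn from '@aws-cdk/aws-stepfunctions';
import * as lambda from '@aws-cdk/aws-lambda';

declare const stack: cdk.Stack;
declare const fn: lambda.Function;

// The Task state's 'Resource' field in the generated state machine
// definition is simply the Lambda function's ARN.
const invoke = new sfn.Task(stack, 'InvokeHandler', {
  resource: fn,  // assumption: the pre-1.0 Task accepted the function directly
});
```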
We would take the ARN of the Lambda, give that to StepFunctions, and everything would be good.
They've now launched integration with other services, and there are some complications.
Resource: "arn:aws:states:::ecs:runTask.sync"
, and in an additional fieldParameters
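Sketched as a TypeScript object (field names follow the Step Functions ECS service integration; the values are illustrative):

```ts
// Rough shape of the ASL 'Parameters' field accompanying
// Resource: "arn:aws:states:::ecs:runTask.sync".
const parameters = {
  Cluster: 'arn:aws:ecs:us-east-1:111122223333:cluster/my-cluster',
  TaskDefinition: 'arn:aws:ecs:us-east-1:111122223333:task-definition/my-task:1',
  LaunchType: 'FARGATE',
  NetworkConfiguration: {
    AwsvpcConfiguration: {
      Subnets: ['subnet-01234567'],
    },
  },
};
```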
We could model this as follows:
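For instance, as a generic task state where the integration ARN and its parameters are supplied side by side (a sketch; the `resource` and `parameters` property shapes are assumptions, not the exact CDK API of the time):

```ts
import * as cdk from '@aws-cdk/core';
import * as sfn from '@aws-cdk/aws-stepfunctions';

declare const stack: cdk.Stack;
declare const clusterArn: string;
declare const taskDefinitionArn: string;

// Generic modeling: the caller wires up the service integration ARN and the
// matching Parameters blob by hand; nothing checks that they agree.
const runTask = new sfn.Task(stack, 'RunFargateTask', {
  resource: 'arn:aws:states:::ecs:runTask.sync',
  parameters: {
    Cluster: clusterArn,
    TaskDefinition: taskDefinitionArn,
    LaunchType: 'FARGATE',
  },
});
```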
But it would be even nicer to combine the two:
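Versus a single dedicated class; `FargateRunTask` is the name floated later in this issue, and the props shown are illustrative:

```ts
import * as cdk from '@aws-cdk/core';
import * as sfn from '@aws-cdk/aws-stepfunctions';
import * as ecs from '@aws-cdk/aws-ecs';

declare const stack: cdk.Stack;
declare const cluster: ecs.Cluster;
declare const taskDefinition: ecs.FargateTaskDefinition;

// Hypothetical class that knows both the integration ARN and the typed
// parameters for running a Fargate task.
declare class FargateRunTask extends sfn.Task {
  constructor(scope: cdk.Construct, id: string, props: {
    cluster: ecs.Cluster;
    taskDefinition: ecs.FargateTaskDefinition;
  });
}

const runTask = new FargateRunTask(stack, 'RunFargateTask', { cluster, taskDefinition });

// Usable directly in a workflow definition:
new sfn.StateMachine(stack, 'Workflow', {
  definition: runTask.next(new sfn.Succeed(stack, 'Done')),
});
```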
And be able to use that object in a workflow directly. Again, what do we name this class, and in what module does it live?
Right now, the integrations are implemented in the module of the resource.
Lambda Event Sources
Lambda Event Sources have already taken the model where we have a class per integration (`FooEventSource`), and they all live in their own module, `@aws-cdk/aws-lambda-event-sources`. Used as:
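For example, with the SQS source (this matches the published package's API, though option names evolved over time):

```ts
import * as lambda from '@aws-cdk/aws-lambda';
import * as sqs from '@aws-cdk/aws-sqs';
import { SqsEventSource } from '@aws-cdk/aws-lambda-event-sources';

declare const fn: lambda.Function;
declare const queue: sqs.Queue;

// One class per integration, all living together in the
// aws-lambda-event-sources package.
fn.addEventSource(new SqsEventSource(queue, { batchSize: 10 }));
```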
Complications
NAMING
- Suffixing the class name with its integration role can double up awkwardly: the Step Functions task that runs a Fargate task ends up as `FargateRunTaskTask`.
- Prefixing with the consuming service disambiguates, but is clunky (`FargateRunTask` vs. `SfnFargateRunTask`).
- How much qualification a name needs depends on where it lives: `FargateRunTask` might suffice in a package named `@aws-cdk/aws-stepfunctions-tasks`, but it might not be so clear in the `aws-ecs` package proper.

LOCATION
If integrations live in the integrated service's own module, then landing a new integration requires a change to `@aws-cdk/aws-bloop` to get that integration into the CDK. This might be desirable as a forcing function, or it might not be, because of expediency.

Honestly, I don't have any good proposals here yet. I'm leaning towards following the Lambda Event Sources model, where integrations go into their own package with suffix naming. So I think I'd like:
- `aws-stepfunctions-tasks` with `FargateRunTask`, `TablePutItemTask`, etc.
- `aws-events-targets` with `FargateRun(Task)Target`, `InvokeFunctionTarget`, etc.

I'm interested in hearing opinions though.