
Add event 'ordinal' field #191

Closed
wants to merge 1 commit

Conversation

@BenBeattieHood commented May 6, 2018

Events are inherently ordered - and without such cannot be interpreted by their consumers. This change makes the order explicit, rather than a requirement of the transport.

The ordinal should be per aggregate instance, and so I'd assume a source structured as AggregateType/AggregateId#CorrelationId, where AggregateType/AggregateId/Ordinal is essentially the address for an individual event. It might be worth calling out these values into separate fields, but that should be a separate PR.

Including the ordinal also allows simpler deduplication (only events above the last processed ordinal are valid, rather than keeping a growing list of processed ids), as well as a simple mechanism to check for gaps (eg. if you've received event ordinals 1,2,3,6,7, it's easy to know you still need to wait for 4 & 5).
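The deduplication and gap check described above can be sketched as follows (illustrative Python; the contiguous-integer ordinal scheme is this PR's proposal, and the helper name is hypothetical):

```python
def reconcile(events, last_seen=0):
    """Deduplicate received events and report missing ordinals.

    Assumes contiguous integer ordinals per aggregate instance,
    starting at 1 (the scheme proposed in this PR).
    """
    # Keep only ordinals newer than the last one we processed; the
    # set comprehension drops duplicates for free.
    seen = sorted({e["ordinal"] for e in events if e["ordinal"] > last_seen})
    highest = seen[-1] if seen else last_seen
    # Any ordinal between last_seen and the highest received that we
    # did not see is a gap: an event still in transport.
    gaps = [n for n in range(last_seen + 1, highest + 1) if n not in set(seen)]
    return seen, gaps

# Received ordinals 1,2,3,6,7 (with a duplicate 3): still waiting on 4 and 5.
events = [{"ordinal": n} for n in (1, 2, 3, 6, 7, 3)]
print(reconcile(events))  # ([1, 2, 3, 6, 7], [4, 5])
```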

@BenBeattieHood (author) commented May 6, 2018

Here's an example to support the PR:
Imagine you had an unordered transport, eg. you were reading the events from an S3 bucket keyed by event id:

{ eventId: "4f251db0", eventType: "FileUploaded", data: { fileName: "xyz.txt", contents: "abc" } }
{ eventId: "d11727c5", eventType: "FileModified", data: { fileName: "xyz.txt", contents: "ghi" } }
{ eventId: "8d29b541", eventType: "FileModified", data: { fileName: "xyz.txt", contents: "def" } }

Ordering the above by timestamp is unreliable (because time is relative to the source server), and the timestamp field is optional anyway. Ordering by 'best guess' might interpret the file's final state as either def or ghi.

Including an ordinal per originating aggregate as below would give a definitive final state:

{ eventId: "4f251db0", ordinal: "1", eventType: "FileUploaded", data: { fileName: "xyz.txt", contents: "abc" } }
{ eventId: "8d29b541", ordinal: "2", eventType: "FileModified", data: { fileName: "xyz.txt", contents: "def" } }
{ eventId: "d11727c5", ordinal: "3", eventType: "FileModified", data: { fileName: "xyz.txt", contents: "ghi" } }
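Folding the three events above by ordinal yields the definitive final state, regardless of delivery order; a minimal sketch (the JSON is quoted here so it parses as written):

```python
import json

lines = [
    '{ "eventId": "4f251db0", "ordinal": "1", "eventType": "FileUploaded", "data": { "fileName": "xyz.txt", "contents": "abc" } }',
    '{ "eventId": "8d29b541", "ordinal": "2", "eventType": "FileModified", "data": { "fileName": "xyz.txt", "contents": "def" } }',
    '{ "eventId": "d11727c5", "ordinal": "3", "eventType": "FileModified", "data": { "fileName": "xyz.txt", "contents": "ghi" } }',
]
events = [json.loads(line) for line in lines]

# Apply in ordinal order: the last write per file wins, no matter
# what order the transport delivered the events in.
state = {}
for e in sorted(events, key=lambda e: int(e["ordinal"])):
    state[e["data"]["fileName"]] = e["data"]["contents"]

print(state)  # {'xyz.txt': 'ghi'}
```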

@sslavic (contributor) commented May 6, 2018

Say there's an order-created event published by an e-commerce system upon successful creation of an order, and in the event data it includes the ID of the order that was created. What would the ordinal be in that case, and why would it be required for the consumer to interpret and process that event?

@BenBeattieHood (author) commented May 6, 2018

If the Order was the Aggregate, I'd assume the ordinal would be 1 for the OrderCreated event; and any further events on that aggregate (eg. OrderProcessed, OrderShipped, OrderVoided, etc) would be subsequent ordinals.
It's probably worth saying the ordinal should be per aggregate instance - I've updated my PR description above to include this clarification.

@sslavic (contributor) commented May 7, 2018

OK. Still, the question remains: why would it be required?

@BenBeattieHood (author) commented May 7, 2018

Because otherwise one would be assuming either:

  • that the events were transported in order (which this spec shouldn't dictate);
  • or that OrderCreated was the first event on the aggregate, and that there was no DraftOrderCreated etc. It would be implying ordering in the type data, which assumes a single possible flow.

Both of these, and the alternative that order is unimportant, would couple the consumer to the producer in different ways.

@duglin (collaborator) commented May 7, 2018

@BenBeattieHood please sign your commit

Couple of comments on the PR itself:

  • I agree that if we do include this it should be OPTIONAL since not all events require this
  • If we do include this then I think we'd need to make it an integer of some kind so we can guarantee the infrastructure can process it and order the Events. Leaving it as a string means it could be any kind of ordering, and without some additional info (like "ordinalKind") the string would be meaningless to us.
  • My initial reaction to the idea is that this should be handled by a layer above CloudEvents, which means it should be an extension or in the data. But, I also wonder why CloudEvents would need this when HTTP doesn't.

@BenBeattieHood (author):

@duglin Thanks for your thoughts.

I feel events are inherently ordered - ie. there are no events which are not produced by their aggregate in a defined order. In those cases where data is produced without inherent order, the data is a command, rather than an event. Commands have different system coupling (transient, version-dependent), so it's important we differentiate them from events here.

I'd prefer the ordinal were a number rather than a string, but I believe numbers are not a datatype in the spec. Lexicographic ordering does add a dependency on the encoding, but I assumed we already had that in order to deserialize any of the data. Am open to ideas.

HTTP is just a transport, and transport layers don't benefit from ordering (eg. concurrency is helpful during transport). Only the logical layer benefits from ordering, so I'd argue it was more of a logical concern.

@BenBeattieHood BenBeattieHood force-pushed the patch-1 branch 3 times, most recently from 402d1f3 to 007b3b2 Compare May 7, 2018 01:13
Signed-off-by: BenBeattieHood <benb@pageuppeople.com>
@duglin (collaborator) commented May 8, 2018

I'd prefer the ordinal were a number rather than a string, but I believe numbers are not a datatype in the spec.

We should :-)

My mentioning of HTTP is because, for the moment, I keep thinking of this spec as something similar to HTTP. Very little (if any) understanding of what is being passed around - it's just there to help move an Event from one place to another, like HTTP does for messages. Any semantics associated with understanding beyond simple routing would be done at a layer above us. Similar to how some have suggested that correlationID should be above us as well. We can provide the mechanism (location?) by which that info could be added to the event, but our spec itself doesn't understand what it actually means or how it's used. I tend to think of ordering in the same way.

@clemensv (contributor) commented May 9, 2018

All events that we used in our interop demo are a proof point against the stated absolute premise of this PR: "Events are inherently ordered - and without such cannot be interpreted by their consumers."

The events we've all been firing and using were discrete (ie. not interrelated). We do not have a single instance of an event type in our Microsoft Event Grid where such correlation exists. The correlation DOES exist in telemetry/event streams that run through Event Hubs and Kafka.

I generally distinguish Discrete and Series events, and a sequence number criterion is only applicable to the latter.

I'm not opposed to adding sequencing via an optional attribute, but the sequence needs to have a context reference to be useful. If you're looking at a multiplexed event stream that has events from 100000 devices, you will have 100000 sources of sequences. Any meaningful sequence that isn't just time of arrival (which we have) will be determined by the sender. Two devices sending data from opposing corners of the same room don't coordinate sequences; if you want to use a vector clock, you explicitly put the sender in charge of sequencing. The ordinal cannot reflect an absolute log sequence, nor even the instant of arrival in middleware or at the consumer.

The sequence would therefore have to be explicitly tied to and scoped to the "source" attribute value, and there would need to be a compliance rule for generating the sequence.

Is it monotonically increasing? Can it decrease? (hello Bitcoin block-hash!) Do we assume that the sequence is gapless? Can I and should I be able to tell that I didn't miss anything? What about dupes?

@BenBeattieHood (author) commented May 9, 2018

I couldn't find the examples you mentioned - can you link? Sorry - I did look.

Maybe I'm a bear of little brain, but I'm racking what I've got and I can't honestly think of a single event that isn't ordered.

Let's take your example of the 'alarm' event, being discrete.
If you received an sms from your boss that your server was on fire, and then one from the same boss that it has all been resolved and there's no need to come in, would you go or not?
I'm guessing you'll trust some perceived precedence, and that'll be based on your trust in idempotency and order. If you got a follow-up one that the servers were on fire again, you'd probably change your mind; and if you got a further follow-up one that all was well you might again change your mind.

^ It's a long-winded example, but I'm trying to show how one calculates one's 'projection' of the events based on an inferred order. And so embedding the order explicitly avoids a ton of problems we get from trying to guesstimate it.

IMHO there are no events that do not require order. In your example of 10000 devices, these are 10000 concurrent ordered event streams, with their order determined by each source device. In event sourcing this is even more extreme, with each aggregate type having millions of instances: but each instance embeds its order into the events it emits.

There are no global streams - instead one per source instance. To clarify your other question, there's no need for a hash or vector clock, just a logical transaction within a source instance during event emission. If you want to coordinate order across several sources (your example of devices on different sides of a room), then both events need to be translated onto a new aggregate's event stream after emission.

The event order increases monotonically, contiguously - whether as a number or a lexicographically orderable string. As order is something the producer determines, arrival time is meaningless to order. However, a nice side-benefit from a predictable ordinal is that dupes and delays caused during transport can be removed using the event order value; and so arrival time is for performance metrics rather than order per se.
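The consumer-side mechanics this implies (reorder, drop dupes, wait for gaps) can be sketched with a small resequencer (hypothetical helper, assuming the contiguous integer ordinals proposed in this PR):

```python
import heapq

class Resequencer:
    """Release events in ordinal order for one source instance.

    Buffers early arrivals until the gap is filled, and drops
    duplicates. Hypothetical consumer-side helper assuming contiguous
    integer ordinals per source, per this PR's proposal.
    """

    def __init__(self):
        self.next_ordinal = 1
        self.buffer = []  # min-heap of (ordinal, event)

    def receive(self, ordinal, event):
        """Accept one event; return the events now safe to process."""
        released = []
        if ordinal < self.next_ordinal or any(o == ordinal for o, _ in self.buffer):
            return released  # duplicate: already released or already buffered
        heapq.heappush(self.buffer, (ordinal, event))
        # Release the longest contiguous run starting at next_ordinal.
        while self.buffer and self.buffer[0][0] == self.next_ordinal:
            released.append(heapq.heappop(self.buffer)[1])
            self.next_ordinal += 1
        return released

r = Resequencer()
out = []
for n in (2, 1, 1, 4, 3):  # out-of-order arrival with a duplicate
    out += r.receive(n, f"e{n}")
print(out)  # ['e1', 'e2', 'e3', 'e4']
```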

@duglin (collaborator) commented May 10, 2018

For what it's worth, I get my sms messages out of order all the time :-)

@BenBeattieHood (author):

Haha :) That's one of the reasons why I picked sms. Getting out of order texts at 2am about server farm outages/false alarms would be nicer if they were in order.

@duglin duglin added the v1.0 label May 10, 2018
@clemensv (contributor):

@BenBeattieHood The question isn't whether you can ascribe some sort of order, but whether order is significant. The demo example we used was "blobCreated", fired when a new blob has been created in an S3 bucket, an Azure blob container or a Google blob container.

Yes, there's probably some order here from the container's perspective in terms of which file got created first, but for event handlers that look to react to the creation of any particular file, that order is just not relevant.

For your alarm counter-example, we already have a criterion that seems sufficient: eventTime (which is producer-set)

As we seem to agree on the source defining the sequence, I restate my feedback from above as something that the PR needs to address:

"The sequence would therefore have to be explicitly tied to and scoped to the "source" attribute value, and there would need to be a compliance rule for generating the sequence."

@duglin duglin mentioned this pull request May 17, 2018
@jroper (contributor) commented May 18, 2018

One thing I'm not sure of is how does the infrastructure know which aggregate an event applies to? So let's say you had a stream that had events for two different files:

{ eventId: "4f251db0", ordinal: "1", eventType: "FileUploaded", data: { fileName: "xyz.txt", contents: "abc" } }
{ eventId: "8d29b541", ordinal: "2", eventType: "FileModified", data: { fileName: "xyz.txt", contents: "def" } }
{ eventId: "d11727c5", ordinal: "1", eventType: "FileUploaded", data: { fileName: "abc.txt", contents: "ghi" } }

We see two events with ordinal 1, and that's because they refer to two different aggregates (xyz.txt and abc.txt). But let's say a layer on top was using this to ensure events are handled in order, how would it know that it doesn't need to reorder the third event to be before the second?

There has been mention of using the source attribute for this, but I think that would need to change the definition of the source attribute, as there is nothing in its definition that says it must be the same for every event from a single aggregate; in fact, the wording there leads me to believe that it could be a unique id per event. The alternative is to explicitly encode the aggregate id as a separate attribute, which I've raised in #209. This is in line with, and maps very nicely to, existing technologies that express the aggregate id in the events, such as Kafka, Kinesis and Azure Event Hubs, with their partition keys.
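The point can be made concrete: ordinals only order events within one aggregate, so a reordering layer must group by some aggregate key first. A sketch (the `key` attribute here is hypothetical - it stands in for the separate aggregate-id attribute proposed in #209, not an existing CloudEvents attribute):

```python
from collections import defaultdict

events = [
    {"eventId": "4f251db0", "ordinal": 1, "key": "xyz.txt", "eventType": "FileUploaded"},
    {"eventId": "8d29b541", "ordinal": 2, "key": "xyz.txt", "eventType": "FileModified"},
    {"eventId": "d11727c5", "ordinal": 1, "key": "abc.txt", "eventType": "FileUploaded"},
]

# Group by aggregate key, then sort each stream by ordinal. The two
# events with ordinal 1 are unrelated: they belong to different streams,
# so neither needs to be reordered relative to the other.
streams = defaultdict(list)
for e in events:
    streams[e["key"]].append(e)
for stream in streams.values():
    stream.sort(key=lambda e: e["ordinal"])

print(sorted(streams))                             # ['abc.txt', 'xyz.txt']
print([e["ordinal"] for e in streams["xyz.txt"]])  # [1, 2]
```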

@BenBeattieHood (author):

@jroper yes, I fully agree. I felt it was a separate issue, and so I'm glad you've created it.


### ordinal
* Type: `String`
* Description: Value expressing the relative order of the event. This enables
@duglin (collaborator) commented on the diff:

Would we need to say what it's relative to? Also, are there any uniqueness constraints? Could a source just put "1" for all events and still be compliant?

@BenBeattieHood (author) replied:

@duglin - good call, I should've included that it must be increasing contiguously

A contributor replied:

I think requiring it to increase contiguously would be too limiting; not all sources of events could guarantee that. In particular, if you had something that was filtering events, then to output a contiguous sequence it would need to keep track of all ordinal sets in order to renumber the ordinal field, and it would have to implement atomic increments of the field, etc. In contrast, if the ordinal only has to be monotonically increasing, such a filter could be stateless.

@BenBeattieHood (author) replied May 25, 2018:

If you're filtering events, then you need to be projecting to a new stream, something like:
event -> consumer -> command (incl validation) -> new_event
because your validation and reprojection changes. Filtering and writing directly into a new stream without validation (unless an explicit pass-thru) would break the stream's truth.
Contiguous events are simple to do if you're validating events - events can't validate without a validating state, and a validating state represents a projection at an ordinal (ie. you already need to have the ordinal).

@BenBeattieHood (author) added:

Another way of thinking about it is that one can filter commands, but not events.

@duglin (collaborator) commented May 24, 2018

@BenBeattieHood you mentioned that using timestamp won't work because it's scoped to the source, but wouldn't that be true for any ordinal value? I guess someone could say they could sync some ordinal value across multiple sources (if that's what they wanted), but then couldn't they do the same thing for timestamp?

@cathyhongzhang:
event source label and correlation discussion date poll: https://doodle.com/poll/kqkmdedrznpwgcq6

@BenBeattieHood (author):

@duglin I thought timestamp had been discussed as scoped to the recipient? If timestamp were specified as scoped to the source, it would work; but it wouldn't support a contiguity assertion that would be needed to support an unordered/duplicating transport layer. A contiguous value like a number or a lexicographically orderable string would allow consumers to realise the absence of events still in transport (as well as order them for consumption).

@jroper (contributor) commented May 25, 2018

On the data type - let me just throw this datapoint out there for consideration. In Akka persistence (an event-sourcing implementation, which is commonly used as a source of events that get published out to message brokers), we have multiple ordinal fields that could be used, and different ones might be relevant for different use cases. One of them is a contiguous integer, relative to the ID of the persistent entity (a persistent entity might for example be an order, or a customer, or a blog post, so a single stream of events would have many different persistent entities). Another ordinal field is a version-1 time-based UUID (of course, there are problems with using timestamps for ordering in a distributed system, but the problems aren't impossible to solve, especially when you introduce certain constraints, and that's not really relevant to the discussion here), which will specify an order relative to some tag (in Akka persistence, a tag is something that groups events from multiple entities). Time-based UUIDs do have an inherent ordering, and so can be used for ordering events. Of course, they are not contiguous, but they can be guaranteed to be strictly increasing.
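The ordering property of version-1 UUIDs can be shown directly: the 60-bit timestamp is recoverable from the UUID fields, so v1 UUIDs sort by creation time even though they are not contiguous. A sketch using two hand-crafted v1 UUIDs with known timestamps (the values are purely illustrative, not from any real system):

```python
import uuid

# Two version-1 UUIDs whose embedded 60-bit timestamps are 1 and 2
# (the version nibble is the leading '1' in the third group); all
# other fields are zeroed for clarity.
earlier = uuid.UUID("00000001-0000-1000-8000-000000000000")
later = uuid.UUID("00000002-0000-1000-8000-000000000000")

# UUID.time reassembles the timestamp from time_low/time_mid/time_hi,
# giving a sort key that reflects creation order.
assert earlier.time == 1 and later.time == 2
print(sorted([later, earlier], key=lambda u: u.time))  # earlier first
```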

@jroper (contributor) commented May 25, 2018

I've just raised this PR for an event key:

#218

Depending on which of these PRs gets merged first, it may make sense to update the other one to specify a relationship between the ordinal and the eventKey (or of course it can be done in a separate PR), to say that the ordinal defines order with respect to the eventKey if specified.

@BenBeattieHood (author) commented May 25, 2018

@jroper The per-aggregate instance contiguous integer you mention in Akka sounds like the appropriate choice here, if you mean that the multiple entities are all under the same aggregate instance.

@ofpiyush:

Consumers are often written in multiple languages and are getting smaller and smaller thanks to the microservices and FaaS wave, both of which would benefit from not having a data layer just to deal with waiting and re-ordering.

Adding ordinal allows pieces in between to take care of waiting for events etc while the end consumer can remain free of that logic.
i.e. the implementation happens in one language/piece which doesn't need to deserialise payload.

Implementing this as a well-defined extension attribute would be welcome as well.

@duglin (collaborator) commented Jun 1, 2018

Marking as v0.2 since it is being proposed as REQUIRED

@duglin duglin added this to the v0.2 milestone Jun 1, 2018
@jroper (contributor) commented Jun 1, 2018

I'm really not sure that making this required is possible. I think there are many cases where the technology producing the messages simply will not be capable of producing an ordinal field. I suspect most IoT devices won't attach ordinals, and can't, because when you power them off and back on, whatever state they had for tracking the ordinal will probably be lost. Also consider a vehicle that's emitting events via an unreliable link - it may be designed to not care that events get dropped, but for such events, the ordinal field, at least as currently defined, won't be implementable.

I think it must be optional, otherwise CloudEvents will only be usable in a very narrow portion of use cases, far narrower than this spec is intended to be used for.

@BenBeattieHood (author):

Both good points @jroper. I wonder whether in these cases they could emit using a time-based UUID? It seems like both those data sources would either be consumed into a simple status/log, or would be interpreted & ordered through an aggregate before usage? Unsure, but it seems the use cases you're thinking of could use non-contiguous ordinals?

@jroper (contributor) commented Jun 1, 2018

If they used timestamps, then really, they'd just be filling the field for the sake of filling the field, not because the field was actually providing value to their use case.

Also to consider - servers publishing events, where aggregates are synchronized in a database, and multiple servers emit events for the same aggregate. In this case, they might not have a mechanism for ensuring a non-time-based sequence, and a time-based sequence is likely to cause problems because the servers' clocks might not be synchronized.

But my main point is, whether it's optional or not should not be based on whether the field can be provided in every circumstance, but on whether the field provides actual value to every use case. If it's not needed for every use case - and in all the apps I'm building, an ordinal field doesn't even exist in the messaging transport, so it obviously isn't needed - then it shouldn't be required.

@BenBeattieHood (author) commented Jun 1, 2018

Not really: if they supply timestamps, it allows for unordered transport (which tends to be more efficient).

If you're publishing events from multiple IoT devices, these would be independent aggregate instances. If you're zipping/combining these aggregates' streams, then you'd be first consuming these into a third synthetic aggregate instance, which would emit its own combined events.

Re: ordinals over transport, I thought Play and Lagom - and especially Lightbend - were based on Kafka, which uses an ordinal for high/low water marks?

I feel an ordinal gives clients simpler bookmarking and dedup when consuming from the stream. If you can't say when logically an event happened, then I'd assume it's hard to hold a state (physical for IoT; virtual for aggregates) to validate the event; and if you can't validate before emitting the event, then it's really just a message, not an event.

@BenBeattieHood (author):

(I want to add that I'm learning a lot from different viewpoints in this conversation, and appreciate the time and thought folks are contributing to this discussion. I feel we're heading in a good direction - thanks for your continued patience while we work this out)

@jroper (contributor) commented Jun 1, 2018

Kafka has an offset - but it's not per aggregate, and that's more of an internal mechanism in Kafka. Getting events into Kafka with an ordinal is the hard thing.

Consider an application that does this:

  • Update a row in an SQL database for entity X, eg update my_entity_table set property = "value" where id = "X"
  • Publish an event to a message broker about entity X, eg eventType: "my_entity_updated", eventKey: "X", data: { "property": "value" }

What is the ordinal in that scenario? Such an application is not tracking all the history for entity X, it has no idea how many updates it has done on X in the past, and so has no idea how many events it has emitted for X in the past, and so cannot generate a meaningful ordinal field. It can put a timestamp in the ordinal, but for what purpose? CloudEvents already has an eventTime field that contains the timestamp, what further value will duplicating that information in the ordinal field give? It'll just be duplicating it for the sake of meeting the spec, not to actually deliver value, and that's why the spec should make it optional.

@BenBeattieHood (author) commented Jun 1, 2018

If an entity within an aggregate is emitting events directly, then the aggregate's root entity will need to hold an aggregate version on it.

You're right re kafka that the ordinal is a transport ordinal, not an aggregate one - but I was just indicating that as an ordered transport it is already providing you with an implicit ordering. So while you may not have an ordinal in your apps explicitly, this is only because you are benefiting from the ordered transport.

I think the root of this discussion is the question, as you've said, 'can an event source emit events without state?' If it can have state, then that state has an implicit ordinal, as the state is the fold of the emitted events; if it cannot have state, then the question we're asking is 'are these events consumable without order?'

Your examples were good for this point, but I still feel they have an implicit underlying time-based order, which will make transport and consumption much simpler if it is included in the metadata. Without it, it appears one would require an ordered transport layer (such as in kafka), as I'm unsure you can consume events without order.

I'd call out though that I made the assumption transport would be persistent, so the consumption would be pull rather than push. This is an assumption - but it's based on the idea that back pressure is best handled during transport. Perhaps best I'd kept that out of the mix.

@jroper (contributor) commented Jun 1, 2018

It sounds to me like you're making an argument that every place where events are used should use ordinals - should have a well defined ordering. And perhaps you're right - I'm not arguing with that, the systems I write do certainly have a well defined ordering for events, and I think that's important. But requiring every application to conform to a particular architecture is not the goal of CloudEvents - my understanding is that it's trying to extract a common way of describing events from existing implementations.

There are many, many people using events today that have no concept of ordinals in their events. And perhaps, this is to their detriment, their events don't have a well defined ordering (timestamp is not well defined if your event source is multiple machines operating statelessly) and perhaps they are relying on a well defined ordering that isn't there. But it's not the job of this spec to fix that, it's the job of this spec to look at current use cases, find commonalities, and extract them into a spec.

So, an ordinal field should only be required if all (or most) places that currently use events have an ordinal. But I don't think most do, I think there is a lot more in practice in use today that looks like the above, where a database is updated and an event published, all statelessly, from multiple machines where clocks aren't synced so can't be relied upon for well defined ordering.

Put another way, I think CloudEvents would be successful at achieving its goal if you can take 95% of events being published today, and without making any changes to those events, without adding any new additional metadata, there should be some sensible mapping of what exists in those events, and what the spec requires, and the events should conform to the spec. Making the ordinal field required would mean that 95% of events published today would have to already, without modification, have a concept of an ordinal field, and I don't think they do.

@BenBeattieHood (author):

That's a super way of summarizing it. Yes, I essentially agree with this.

But I'm arguing for it because I don't believe it's an architecture, but a principle - without order, you imply no state; and without state, you imply no validation; and without validation, I doubt it can be an event.

The issue for me is not that the producer should assert the ordinal (which would be an architecture), but that the consumer can rely on it. I think in the above you're thinking about producers, where the ordinal is really (if we avoid architecture) for the consumers.

Hope this is another way of framing that'll make more sense?

@duglin (collaborator) commented Jun 18, 2018

During the 6/15 f2f we agreed to include this as an extension. @BenBeattieHood would you be willing to rework the PR as an extension instead of as a property of the spec?

@duglin (collaborator) commented Jun 28, 2018

@BenBeattieHood ^^^

duglin pushed a commit to duglin/spec that referenced this pull request Aug 16, 2018
Fixes: cloudevents#191

I didn't address any of the comments in cloudevents#191 because I wasn't sure how
the WG in general felt about them. So please speak up in this PR if you'd like
to see a change.

Signed-off-by: Doug Davis <dug@us.ibm.com>
@duglin duglin mentioned this pull request Aug 16, 2018
@duglin (collaborator) commented Aug 16, 2018

@BenBeattieHood I created #291 to define it as an extension based on the f2f meeting. Hope that's ok.

duglin pushed a commit to duglin/spec that referenced this pull request Aug 16, 2018 (same commit message as above)
duglin pushed a commit to duglin/spec that referenced this pull request Sep 1, 2018 (same commit message as above)
@duglin (collaborator) commented Sep 5, 2018

@BenBeattieHood would you be ok if we close this PR with the assumption that #291 will cover it? Or would you prefer to wait to see how that one ends up?

@BenBeattieHood (author):

Thanks @duglin - I've been appreciating how you've been helping keep these PRs focused, and being a good mediator of discussion.

Like @cneijenhuis among others, I'd appreciate the cloudevents spec being more opinionated as to order. To my mind unordered events are only useful to permit very basic stateless sensors (and at the transport layer, which is rightly out-of-spec); whereas event order seems a preeminent concern for successfully decoupling processing. I remain convinced that "events are inherently ordered - and without such cannot be interpreted by their consumers," and unlike @clemensv I do not feel the interop demo showed a successfully decoupled system - the consumer had to infer or be built with much knowledge of the producer, and this coupling seems to go against most of the principles of the cloudevents spec.

However, this said, I'm happy the discussion here is continuing via the other PR - so long as folks are across the backstory here so we don't have to repeat too much. I'd really like us to come around to considering events as innately ordered, and therefore sequence being a necessary as-yet uncaptured part of their core metadata.

@duglin duglin closed this in #291 Sep 27, 2018
duglin pushed a commit that referenced this pull request Sep 27, 2018
* Add 'ordinal' attribute

Fixes: #191

I didn't address any of the comments in #191 because I wasn't sure how
the WG in general felt about them. So please speak up in this PR if you'd like
to see a change.

Signed-off-by: Doug Davis <dug@us.ibm.com>

* use sequence and add more clarifying text

Signed-off-by: Doug Davis <dug@us.ibm.com>

* grab Christoph's suggested text

Signed-off-by: Doug Davis <dug@us.ibm.com>

* more tweaks

Signed-off-by: Doug Davis <dug@us.ibm.com>
clemensv pushed a commit to clemensv/spec that referenced this pull request Oct 31, 2018 (same commit message as the Sep 27 commit above, with an additional sign-off: Clemens Vasters <clemensv@microsoft.com>)