Update join #48

Merged: 3 commits, merged on Dec 8, 2016
Changes from 1 commit
59 changes: 49 additions & 10 deletions handbook/strings/join.md

Joins a set of strings into a single string
## Syntax

```eve
text = join[token, given, index, with]
```

## Attributes

- `token` - set of elements to be joined
- `given` - establishes the set being joined; if the tokens are not unique, add attributes here that make them unique (see the examples and the sketch after this list). `token` must be part of the given set, or only the first token will be returned.
- `index` - indicates the order of the `tokens` in the joined string
- `with` - inserted between every element in `tokens`
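
As a minimal sketch of all four attributes together (the `#word` records here are hypothetical, standing in for anything that carries a `token` and an `index`, such as the results of `split` in the examples below):

```eve
search
  // Hypothetical records supplying each token and its position
  [#word token index]
  // Join the tokens in index order, separated by spaces;
  // giving (token, index) keeps duplicate tokens in the set
  text = join[token, given: (token, index), index, with: " "]

bind @view
  [#value | value: text]
```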

## Description

`text = join[token, given, index, with]` joins the `token`s together using `with`, in the order specified by `index`, and returns the joined string.

## Examples

Split a sentence into tokens, and join the tokens into a sentence again

```eve
search
  // Split the sentence into words
  (token, index) = split[text: "the quick brown fox", by: " "]
  // Join the words back into a sentence, but with hyphens instead of spaces
  text = join[token given: token, index with: "-"]

bind @view
  [#value | value: text] // Expected "the-quick-brown-fox"
```

---

Since `join` is an aggregate, set semantics play an important part here: if we don't specify what makes each token unique, the results can be surprising. The following example demonstrates this.

Let's split the phrase "hello world" into letters:

```eve
search
  // token = ("h", "e", "l", "l", "o", " ", "w", "o", "r", "l", "d")
  (token, index) = split[text: "hello world", by: ""]

bind
  [#phrase token index]

bind @view
  [#value | value: token]
```

You'll notice that when we display the tokens on their own, some letters are missing. This is Eve's set semantics at work: the letter `l` is displayed only once, because displaying it again would duplicate an element of the set. The letters are also probably out of order, because sets are unordered. This is why we need the `index` when reconstructing the phrase.
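
To see the deduplication directly, we can count the tokens. This is a hedged sketch that assumes Eve's `count` aggregate and `{{ }}` string embedding; counting over `token` alone sees only the distinct letters, while counting over `(token, index)` sees every character of "hello world".

```eve
search
  [#phrase token index]
  // Only distinct letters survive set semantics: h, e, l, o, " ", w, r, d
  unique = count[given: token]
  // Adding index makes every occurrence distinct, so all 11 characters count
  total = count[given: (token, index)]

bind @view
  [#value | value: "unique: {{unique}}, total: {{total}}"]
```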

So let's join this phrase back together. Like last time, we'll join with a `-`.

```eve
search
  [#phrase token index]
  text = join[token given: token index with: "-"]

bind @view
  [#value | value: text]
```

The output here is "h-e-l-o- -w-r-d", which we could have guessed from the previous block. The problem is that we've joined the tokens given only the tokens themselves. Since some tokens are duplicates, they are filtered out of the set before it is joined into a string. To keep the duplicate tokens, we have to provide something more that makes them unique. The `index` is a perfect candidate for this:

```eve
search
  [#phrase token index]
  // given = (("h", 1), ("e", 2), ("l", 3), ("l", 4), ... ("l", 10), ("d", 11))
  text = join[token given: (token, index) index with: "-"]

bind @view
  [#value | value: text]
```

Now we have the correct result of "h-e-l-l-o- -w-o-r-l-d".

## See Also

[split](../split)