
Make .mds testable #198

Closed
dora-gt opened this issue Aug 7, 2019 · 17 comments
Comments

@dora-gt
Collaborator

dora-gt commented Aug 7, 2019

I noticed that we made the .mds runnable, so we could even test them.
A benefit is that if the documents go out of date, we will be able to notice.

That said, it will require more effort, so... not now, but sometime in the future.

@emschwartz
Member

I think that's a great idea. It would probably take a bit more work to include assertions, but it might be worth just having CI run them as part of the normal flow.

@emschwartz emschwartz added the ci label Aug 7, 2019
@emschwartz emschwartz added this to the Next Milestone milestone Aug 15, 2019
@dora-gt dora-gt mentioned this issue Aug 29, 2019
@dora-gt
Collaborator Author

dora-gt commented Sep 3, 2019

I strongly feel the importance of this feature, because interledger-rs is growing rapidly and there are many changes that affect the documents.

I'm thinking about what the right approach is to implement this.

  1. Insert test code into README.md with a specific marker
some explanations

<!--! #test
    # do some test
-->

#test means this code is for testing and should not be run by run-md.sh. This option seems good because it can sometimes reuse the exit codes of the commands in the procedure. The tests output some information to a file, and the CI process reads it.

  2. Run tests after run-md.sh
    • CI runs run-md.sh for some markdown files.
    • After that, CI runs tests to check whether the final state is correct.

This option only tests whether the result is correct. Although it seems like the right approach for an integration test, it may be inefficient because it cannot reuse exit codes, and sometimes it is hard to define what the correct final state is.

or... should we support both?

What do you think? @interledger-rs/contributors
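Option 1 could be driven by a small extraction step in CI. A minimal sketch, assuming the `<!--! #test` / `-->` marker convention above (the file names and the `extract_tests` helper are hypothetical):

```shell
# Hypothetical sketch: pull out blocks delimited by "<!--! #test" and "-->"
# from a markdown file and run them with bash.
extract_tests() {
    # Print only the lines between the opening marker and the closing "-->".
    awk '/^<!--! #test/ { inside = 1; next } /^-->/ { inside = 0 } inside' "$1"
}

# Example document using the proposed convention:
cat > /tmp/example.md <<'EOF'
some explanations

<!--! #test
echo "running hidden test"
-->
EOF

extract_tests /tmp/example.md > /tmp/tests.sh
bash /tmp/tests.sh
```

The idea would be that run-md.sh skips these blocks because of the `#test` tag, while CI runs the extracted script and fails the build on a non-zero exit.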

@bstrie
Contributor

bstrie commented Sep 3, 2019

Rather than adding new tests within each file that are hidden from view, I would like to invoke run-md.sh as part of the test, and find a way to make the usual run of each README produce output that could be tested. For example, when the simple test produces this output at the end:

Alice's balance: {"balance":"-500"}
Node B's balance on Node A: {"balance":"500"}
Node A's balance on Node B: {"balance":"-500"}
Bob's balance: {"balance":"500"}

...we could also write each JSON response to a different file in the logs directory, and have a test that:

  1. Deletes any existing output files
  2. Executes run-md.sh with simple/README.md
  3. Blocks while waiting for all four files to be created (admittedly, this one feels a bit gross, and we would need to come up with some reasonable maximum timeout)
  4. Reads all four files and tests that the results are as expected

I haven't taken a look at the other examples to see if they produce testable output, but if not then perhaps they could be changed to do so.
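Step 3 is the fiddly part. A minimal sketch of the waiting logic, with the directory, file names, and timeout all being assumptions and placeholder `touch` calls standing in for the real `run-md.sh` run:

```shell
# wait_for_files blocks until COUNT files exist in DIR, giving up after
# TIMEOUT seconds.
wait_for_files() {
    dir=$1; count=$2; timeout=$3; elapsed=0
    while [ "$(ls "$dir" 2>/dev/null | wc -l | tr -d ' ')" -lt "$count" ]; do
        sleep 1
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out waiting for $count files in $dir" >&2
            return 1
        fi
    done
}

# Demonstration with placeholder files standing in for run-md.sh output:
mkdir -p /tmp/logs && rm -f /tmp/logs/*
( sleep 2
  touch /tmp/logs/alice.json /tmp/logs/a-on-b.json \
        /tmp/logs/b-on-a.json /tmp/logs/bob.json ) &
wait_for_files /tmp/logs 4 10 && echo "all output files present"
```

In the real test, step 4 would then read each file and compare the JSON against the expected balances, e.g. with `grep` or a `jq` query.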

@dora-gt
Collaborator Author

dora-gt commented Sep 4, 2019

Thank you Ben.
(I've edited the original post because the markup I used was not actually hiding the test from readers.)

I understand the approach. One thing I'm curious about is why you would prefer it. Is it that it doesn't make the .mds messy, or that it looks simpler?

@bstrie
Contributor

bstrie commented Sep 4, 2019

@dora-gt My biggest concern is that I want to make sure that the examples on the master branch run successfully at all times. My reason is selfish: I use the examples as an easy way to simulate real traffic between nodes and to demonstrate to myself that my commits are working as expected. For example, right now I'm adding functionality that will allow users to subscribe to a feed that will alert them when they have incoming payments, and to show that it works I run the simple example and wait for the notifications. However, over the past two weeks the simple example has been repeatedly broken (it is broken on master right now, for example) which sidetracks me into having to look into that first.

I would be fine with the version that you suggested previously where we insert hidden tests into the files, but I'm afraid that those tests might not be enough to stop the examples from breaking. Can you show what sort of test you have in mind here?

@bstrie
Contributor

bstrie commented Sep 4, 2019

(To elaborate on my specific concern in the last comment, the last two bugs that I have fixed in the simple example would likely only have been found by integration tests such as those performed by running the examples: the first was a bug in HTTP request username deserialization that only manifested during account update, not account creation; the second was a bug in environment variable deserialization that would only be triggered by actually trying to run the interledger executable.)

@dora-gt
Collaborator Author

dora-gt commented Sep 5, 2019

My biggest concern is that I want to make sure that the examples on the master branch run successfully at all times.

Yes, we should achieve this goal to welcome other developers. I'm sorry that the examples get broken so often. I'll solve the issue by implementing this feature. Once it is implemented, it will be easier to notice when the examples are not working.

What I was thinking of for inserting tests into .mds is something like:

    ```bash
    curl.....
    ```

    <!--! #test
    if [ $? -ne 0 ]; then
        # test fails here
    fi
    -->

but... if the section includes multiple commands, this might not work, because only the last command's exit code would be available.
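If each hidden block is executed as its own shell script, `set -e` sidesteps the single-exit-code limitation: the first failing command aborts the test, so every step is effectively checked. A minimal sketch (the log file and stand-in commands are hypothetical):

```shell
# With "set -e", any command exiting non-zero stops the script, so we are
# not limited to inspecting only the last command's exit code.
set -e
log=/tmp/steps.log
: > "$log"                      # start with an empty log

true                            # step 1 (stand-in for a real curl call)
echo "step 1 ok" >> "$log"
true                            # step 2
echo "step 2 ok" >> "$log"
# If either step had failed, execution would stop before reaching here.
cat "$log"
```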

It seems that testing the final state (or some output that reflects the integrated state) would suffice for the objective so far. So let's start with the integration test, and if we find specific situations that the integration test misses, then I'll consider unit tests.

Thank you, Ben. If anyone else has objections, please tell me.

@bstrie
Contributor

bstrie commented Sep 5, 2019

I'm sorry that examples get broken so often.

Don't worry, it's not your fault and not your responsibility to test the examples against every single commit. :)

if the section includes multiple commands, this might not work because it might be able to use only the last exit code.

I looked into this in the past, and I believe that curl does not actually return a non-zero exit code when it receives an HTTP status code other than 200. Source: https://superuser.com/questions/590099/can-i-make-curl-fail-with-an-exitcode-different-than-0-if-the-http-status-code-i
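The workaround discussed in that answer is curl's `-f`/`--fail` flag, which makes curl exit with code 22 when the server returns an HTTP error status (400 or above). A quick demonstration against a throwaway local server, where python3, the port number, and the output file are all assumptions:

```shell
# Serve /tmp on a local port so the demo needs no network access
# (requires python3 >= 3.7 for --directory).
python3 -m http.server 18123 --directory /tmp >/dev/null 2>&1 &
SERVER=$!
sleep 1   # give the server a moment to start

# Request a path that does not exist, so the server answers 404.
curl -s http://127.0.0.1:18123/no-such-file >/dev/null
echo "without -f: $?"       # still 0, despite the 404

curl -sf http://127.0.0.1:18123/no-such-file >/dev/null
echo "with -f: $?" | tee /tmp/curl_fail_code.txt   # 22, curl's HTTP-error code

kill "$SERVER"
```

So the hidden-test or `set -e` approaches would only catch failing API calls in the examples if the curl invocations use `--fail`.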

@dora-gt
Collaborator Author

dora-gt commented Sep 12, 2019

I wrote a brief plan for this issue. If you have any opinions, please tell me.

https://docs.google.com/document/d/1L-HO-j_5yMZG7JYgssH7YBYVrkhc_33Wc2Q5Nz4glFs/edit?usp=sharing

dora-gt added a commit to dora-gt/interledger-rs that referenced this issue Sep 12, 2019
Signed-off-by: Taiga Nakayama <dora@dora-gt.jp>

interledger#198
dora-gt added a commit to dora-gt/interledger-rs that referenced this issue Sep 15, 2019
@bstrie
Contributor

bstrie commented Oct 10, 2019

@dora-gt I've only just seen the Google Doc with your plan. I have a question regarding this line:

We have to run docker in docker. Is that possible on CircleCI?
or… should we just test non-docker mode? is it sufficient?

Is there some reason that non-docker mode would be less desirable than docker mode for the purpose of CI? Especially since there appear to be complications regarding running docker from within docker.

@emschwartz
Member

I think the main consideration is just making sure both experiences work. That said, having one of them tested is definitely an improvement over not having either tested

@bstrie
Contributor

bstrie commented Oct 10, 2019

Ah, I see. I was under the impression that we were only testing one mode, and that docker mode was that mode. Indeed it would be good to test both, but as a minimal implementation, even having one mode tested would be very useful in an immediate sense for catching most of the breakage to the examples.

@dora-gt
Collaborator Author

dora-gt commented Oct 11, 2019

(Sorry for this late response)

I think if we provide both normal mode (I mean running locally) and docker mode, then both should basically be tested. If we choose to test only one mode (probably normal mode), we should have a good enough reason to do so.

I can't help thinking about how users would feel if something doesn't run.
I would like to take responsibility for what we provide, as far as possible.

@emschwartz
Member

Agreed. However, if we can get one done faster than the other, we should do that in the meantime

@dora-gt
Collaborator Author

dora-gt commented Oct 14, 2019

I think I need to wait for the decision of #399 (whether we move to GitHub actions or not). If we move to GitHub actions, these should be changed according to that.

@emschwartz
Member

Can this be closed?

@dora-gt
Collaborator Author

dora-gt commented Nov 21, 2019

Yes, I'll close.

@dora-gt dora-gt closed this as completed Nov 21, 2019