[Final Submission] Executable tutorial: Continuous benchmarking with Github Actions (#1358)

* Create README.md

* Feedback submission

Submitted our feedback before the 24h deadline

* added annotated pdf

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Delete feedback_annotated.pdf

* Update README.md

* Created feedback adjustments

Displays what changes we made after reading through the feedback; relevant both for the tutorial and the feedback task.

* Update Feedback_adjustments.md

Our feedback adjustments. Since this is part of the grading for the feedback, we also refer to our feedback PR #12

* Added more updates for feedback

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update Feedback_adjustments.md

* Update Feedback_adjustments.md
jhammarstedt committed May 3, 2021
1 parent 637b368 commit 09acaec
Showing 2 changed files with 121 additions and 5 deletions.
64 changes: 64 additions & 0 deletions contributions/executable-tutorial/johhamm-carllei/Feedback_adjustments.md
@@ -0,0 +1,64 @@
# Feedback adjustments

**Extra fix**: Got a question from another student who did the tutorial about the possibility of benchmarking the two functions simultaneously. So we modified the tutorial, scripts, and benchmarks to compare both of them in the same workflow. Also added support for this in the GitHub Pages table.
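For context, a minimal sketch of what benchmarking two functions in one pytest-benchmark run can look like (the function names and sample data here are illustrative stand-ins, not the tutorial's actual scripts):

```python
# Illustrative sketch: timing two functions in the same pytest-benchmark run.
# bubble_sort/quick_sort and the sample data are hypothetical stand-ins.
import random

import pytest

def bubble_sort(data):
    # Deliberately slow O(n^2) sort, useful as the "worse" case.
    data = list(data)
    for i in range(len(data)):
        for j in range(len(data) - i - 1):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

def quick_sort(data):
    # Stand-in for a faster implementation (built-in Timsort).
    return sorted(data)

random.seed(0)  # keep the input identical across runs
SAMPLE = [random.randint(0, 1000) for _ in range(500)]

@pytest.mark.parametrize("func", [bubble_sort, quick_sort])
def test_sort(benchmark, func):
    # Each parametrized case is timed separately, so both functions
    # show up side by side in the same benchmark report.
    assert benchmark(func, SAMPLE) == sorted(SAMPLE)
```

Running `pytest --benchmark-json=output.json` then produces a single report with both cases side by side.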

### Step 1

A minor annoyance was that the email config was set to a Katacoda execute field, making it execute with the wrong email. I suggest that you change it into copy-paste only, since there is no real point in executing it.

* **No need for a specific user email, so we added defaults that are configured automatically at the beginning**

The permission script (chmod) seems more like a chore that has to be done; it does not really feel very relevant to the tutorial. If possible, put it in a script that is executed automatically when the step starts.

* **Fixed by running the clone and permissions as a script beforehand.**

I also tried the other option, not writing any code manually, but this caused problems: all the files were committed the first time, making the outputs of the action a bit confusing since they were not what was described.

* **Fixed by adding a disclaimer that option 2 does not allow running the partial workflow**

### Step 2
I have one complaint on this page that carries over to some of the other pages. It's not really about the actual content, but rather about Katacoda: the formatting when copying the code is not correct, which made it a bit annoying to do. I guess one could rewrite it manually, but I think it would be nice if you took a closer look at the formatting. The issue is consistent on most fields where you copy the longer texts. It works better when you copy the whole file at once, so maybe let the user copy the whole thing first and then explain everything. There is also a Katacoda command to copy things straight into a file (check it out here: https://katacoda.com/scenario-examples/scenarios/clipboard).

* **Fixed by only giving one snippet that will be formatted correctly instead of dividing them up**

### Step 3
The content of the workflow file at the top is different from the one at the bottom. I didn't notice this until the workflow failed when I pushed it; please fix that (the "src/" prefix is missing in the python command).

* **Fixed by correcting the paths**

### Step 4
A nice addition would be to explain the steps shown in the log (how they relate to the steps in the workflow file), making it more intuitive why that subsection should be selected.

* **Fixed by adding an explanation for why each step exists in the log**

### Step 5
It would be nice if you gave some more info about what the Python benchmark does (what measures it takes, and so on). I know that there is a link, but a short inline explanation would help.

* **Fixed by adding a section about benchmarking and what it outputs**
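As a rough illustration of what that output covers, here is a hedged sketch of reading the statistics from the JSON report (field names follow pytest-benchmark's JSON output as we understand it; the file name matches the tutorial's `output.json`):

```python
# Hedged sketch: reading the stats pytest-benchmark writes when run with
# --benchmark-json=output.json. Field names follow the pytest-benchmark
# JSON schema as we understand it.
import json

with open("output.json") as fh:
    report = json.load(fh)

for bench in report["benchmarks"]:
    stats = bench["stats"]
    # min/max/mean/stddev are wall-clock seconds per call; "rounds" is how
    # many timed runs were collected to compute the statistics.
    print(
        f"{bench['name']}: mean={stats['mean']:.6f}s "
        f"stddev={stats['stddev']:.6f}s rounds={stats['rounds']}"
    )
```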

I also suggest you add a note about what the functions do and why you chose them (it is fairly obvious if you know Python, but not everyone does).

* **Fixed by adding comments explaining the purpose of the functions in the benchmarking.py script.**

### Step 6
Same thing with the copying of code messing with the indentation; other than that, the content is good.

* **Fixed by adding an explanation and a fully formatted snippet at the end**

This is where I had an issue with a merge conflict, though. Previous steps seem to have produced files that I had to pull before I could push. Not sure if that was supposed to happen; if not, look into it. If it is on purpose, then I suggest you add a note about it.

* **Fixed by adding a note about the merge conflict**

### Step 7
You say that the tutorial does not cover bs4, which is completely reasonable, but it feels weird to copy or manually write code that is not explained. Maybe you could include that file automatically and only tell the user that it exists.

* **Fixed by adding the generate_output.json file so the user won't have to do it**
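For readers curious about what such a bs4 script does under the hood, here is a hypothetical sketch (file names and table layout are assumptions, not necessarily how the tutorial's script works):

```python
# Hypothetical sketch of a bs4 page generator: append one table row per
# benchmark to the GitHub Pages HTML. File names and table layout are
# assumptions, not the tutorial's exact script.
import json

from bs4 import BeautifulSoup

with open("output.json") as fh:
    report = json.load(fh)

with open("index.html") as fh:
    soup = BeautifulSoup(fh, "html.parser")

table = soup.find("table")  # assumes the page already contains a <table>
for bench in report["benchmarks"]:
    row = soup.new_tag("tr")
    for value in (bench["name"], f"{bench['stats']['mean']:.6f}"):
        cell = soup.new_tag("td")
        cell.string = str(value)
        row.append(cell)
    table.append(row)

with open("index.html", "w") as fh:
    fh.write(str(soup))
```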

### Outro
I feel bad that I missed the easter egg; I had to go back and try it. It was a really cool one, definitely subtle and fun. Maybe include some kind of hint earlier, or just get the user to check out the scripts folder.

* **Added a little note on page 8 to again remind the user to check the scripts folder; we still want it to be somewhat discreet and subtle**

### Overall points
- There are a few spelling mistakes; make sure you read through carefully one more time.

* **Looked through the full text again and ran every page through a spell checker**
62 changes: 57 additions & 5 deletions contributions/executable-tutorial/johhamm-carllei/README.md
@@ -1,13 +1,65 @@
# Tutorial: Continuous benchmarking using Github Actions
<img src="https://www.maxi-muth.de/wordpress/wp-content/uploads/2014/09/Professortocat_v2.png" height = 100 width = 100 align ="right" />

## Authors ##
* Johan Hammarstedt (johhamm@kth.se), Github: [jhammarstedt](https://github.com/jhammarstedt)
* Carl Leijonberg (carllei@kth.se), Github : [carllei](https://github.com/carllei)

## Relevant links
📚 Katacoda tutorial is found [here](https://www.katacoda.com/jhamm/scenarios/ghactiondemo)

🗝 Project repository is found [here](https://github.com/jhammarstedt/Benchmarking-DevOps)

📣 First PR [#1158](https://github.com/KTH/devops-course/pull/1158)

📝 Feedback is found in this PR [#1358](https://github.com/KTH/devops-course/pull/1358) and modifications are found in the `Feedback_adjustments.md`


## Task
This project aims to teach others how to set up a GitHub Action for continuous benchmarking with pytest. We also added a simple visualization with GitHub Pages that we walk through briefly in the tutorial.

This helps developers easily compare benchmark results and alerts them to worse performance when making new commits. The statistics from the latest run are found in `output.json`, and the historical comparison table is visualized on the generated page available [here](https://jhammarstedt.github.io/Benchmarking-DevOps/).
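To make the "alert on worse performance" idea concrete, here is a minimal sketch, assuming a stored baseline file and a 20% slowdown threshold (both are illustrative choices, not part of the tutorial):

```python
# Minimal sketch of a regression alert: compare the latest output.json against
# a stored baseline and exit non-zero (failing the workflow) on a slowdown.
# baseline.json and the 20% threshold are illustrative assumptions.
import json
import sys

THRESHOLD = 1.20  # fail if a benchmark got more than 20% slower

def load_means(path):
    with open(path) as fh:
        return {b["name"]: b["stats"]["mean"] for b in json.load(fh)["benchmarks"]}

baseline = load_means("baseline.json")
current = load_means("output.json")

regressed = [
    (name, baseline[name], mean)
    for name, mean in current.items()
    if name in baseline and mean > baseline[name] * THRESHOLD
]
for name, old, new in regressed:
    print(f"REGRESSION: {name} went from {old:.6f}s to {new:.6f}s")

sys.exit(1 if regressed else 0)
```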

<img src="https://www.katacoda.com/images/logo-head.png" align="right" />

## Katacoda tutorial
We have created a Katacoda tutorial that runs a bash terminal and a VS Code environment in the browser. It walks through every step needed to build and set up this repo yourself.

You will learn how to:
* Create a simple GitHub Action that lets you test and compare Python scripts on pushes to your GitHub repository
  * With a few modifications, you can also implement it for other tasks to enable CI/CD in your other projects!
  * You can also use this action with similar performance tools for other programming languages.
* Create your first GitHub Page to display the results from your testing

<img src="https://github.com/jhammarstedt/katacoda-scenarios/blob/main/ghactionDemo/images/Summary_tutorial.PNG?raw=true" />

## Aiming for --> 💥 Distinction (Extra header for DevOps course)

| | Yes | No | Remarkable |
|-------------------------------------------- | ----|----|-------------|
|The TA can successfully execute all the commands of the tutorial (mandatory) |💥 Yes | No |💥 In the browser |
|If local execution, runs on Linux | Yes | No | Easy to setup and run |
|The tutorial gives enough background |💥 Yes | No | 💥 Comprehensive background |
|The tutorial is easy to follow |💥 Yes | No | 💥 Well documented |
|The tutorial is original, no such tutorial exists on the web |💥 Yes | No | The teaching team never heard about it |
|The tutorial contains [easter eggs](https://github.com/OrkoHunter/python-easter-eggs) |💥 Yes | No |💥 Subtle and fun |
|The tutorial is successful (attracts comments and success) |💥 Yes | No | Lively discussion |
|The language is correct | 💥 Yes | No | 💥 Interesting narrative |


## Easter egg hint for tutorial

<details>
<summary>Click me for hint</summary>
Did you collect the 🥚 from scripts?
<details>
<summary> Fun fact regarding easter egg (open after finding it) </summary>
The author of the action did not support memes by repo owners, which would be a problem for our purposes, so I raised that
<a href="https://clipart.world/wp-content/uploads/2020/09/Colorful-Easter-Egg-clipart-transparent.png" target="_top">issue</a> and got a new feature merged for this tutorial 🤙

</details>
</details>


## Future work
* Add support for more languages
