
Feature/logging #49

Merged: 8 commits merged into EricssonResearch:main on May 6, 2022

Conversation

salehsedghpour (Collaborator):

This closes #39.

This PR proposes using Elasticsearch as a logging solution.

The proof of concept is described in the Logging.md file.

salehsedghpour requested a review from alekodu on May 4, 2022 at 09:04
community/Logging.md
running.

Let's assume that our pod name is `service1-75576df8cd-zjpg7`, then from the **Discover** page, in the search bar enter
`kubernetes.pod_name:service1`. This filters the log data for Pods named `service`.
alekodu (Member):

Is there a typo in "... for Pods named `service`"? I guess it should be `service1`?

salehsedghpour (Collaborator, Author):

I'll fix it in a final commit. I'm not pushing that commit yet, because the history of our discussion would be destroyed by my push.
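
Side note for readers: the same filter can also be issued directly against the Elasticsearch HTTP API when Kibana is not handy. A minimal sketch, assuming the Elasticsearch endpoint is reachable at `<es-host>:9200` (the host and how the port is exposed are assumptions, not part of this PR):

    # Same Lucene-style filter as in the Kibana search bar, sent to
    # Elasticsearch's URI search API. <es-host> is a placeholder.
    curl -s "http://<es-host>:9200/_search?q=kubernetes.pod_name:service1&size=5&pretty"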

# deploy ELK stack
$(kubectl apply -f ./elk/namespace.yaml)
echo "kubectl apply -f elk/"
echo "Deploying Elasticsearch stack with 3 nodes ..."
alekodu (Member):

What do these 3 nodes refer to?

alekodu (Member):

Oh, 3 replicas of elastic search pods?

alekodu (Member):

Weird, I don't know why some of my comments got duplicated :S

salehsedghpour (Collaborator, Author):

Yes, let's call it a data node!
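
To make the "3 nodes" wording concrete: once the stack is up, the three Elasticsearch data-node pods can be listed with kubectl. A minimal sketch; the namespace name `logging` and the `app=elasticsearch` label are assumptions here, since the real names are defined by the manifests under `elk/`:

    # Expect three Elasticsearch pods in the Running state (assumed
    # namespace and label; adjust to what elk/ actually creates).
    kubectl get pods -n logging -l app=elasticsearch
    # Or check the replica count on the workload itself:
    kubectl get statefulsets,deployments -n logging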

alekodu (Member) left a comment:

I added a few comments...

To browse kibana, just copy the following address or replace the public ip of the node with the <node-ip>:
http://<node-ip>:30000
or
http://192.168.1.2:30000
alekodu (Member):

Should we keep this local IP 192.168... in this readme?

salehsedghpour (Collaborator, Author):

Actually, we print something similar to:

the <node-ip>:
    http://<node-ip>:30000
    or
    http://an-ip-address:30000

The `<node-ip>` is printed exactly as `<node-ip>`, while `an-ip-address` is a real IP address. It is an example, like the one we have for the service name.
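
For anyone reproducing this, the value to substitute for `<node-ip>` can be looked up with kubectl; whether the InternalIP or an ExternalIP is the reachable one depends on how the cluster is set up, which is outside this PR. A minimal sketch:

    # Show the nodes with their addresses and pick one reachable from your machine.
    kubectl get nodes -o wide
    # Or extract the first node's InternalIP directly:
    kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
    # Then open http://<that-ip>:30000 for Kibana.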

# Deploy the microservices to clusters
for d in ./k8s/*; do
echo "applying deployment manifests to ${d##./k8s/}"
[[ -d "$d" ]] && kubectl apply --prune -f k8s/${d##./k8s/} -l version=${d##./k8s/} --context ${d##./k8s/}
alekodu (Member):

We will get an elasticsearch deployment per cluster, each of them with 3 replicas, right?

salehsedghpour (Collaborator, Author):

Yes, but this part of the code (lines 47 to 50) is actually responsible for deploying the services.

salehsedghpour (Collaborator, Author):

Correction: at the moment we only have the Elasticsearch deployment on a single cluster. We should discuss whether we need an Elasticsearch instance per cluster or not.
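
As an aside for readers not fluent in shell parameter expansion: `${d##./k8s/}` strips the leading `./k8s/` prefix from the loop variable, and the remaining directory name is reused both as the version label and as the kubectl context. A minimal standalone sketch (the directory name is illustrative only):

    # Illustrative only: show what ${d##./k8s/} evaluates to.
    d="./k8s/cluster-1"
    echo "${d##./k8s/}"   # prints: cluster-1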

type FileConfig struct {
ClusterLatencies []ClusterLatency `json:"cluster_latencies"`
Services []Service `json:"services"`
Settings Setting `json:"settings,omitempty"`
alekodu (Member):

I think if you want a field of a struct type to be optional with omitempty, then you need to define it as a pointer here instead. See: https://www.sohamkamani.com/golang/omitempty/

salehsedghpour (Collaborator, Author):

Yes, that's right. We are going to pass this setting in a ConfigMap for the microservices. In a microservice, we parse it in the run.sh script using jq. We need the logging field to be there, either with a true or a false value. So with the struct set as omitempty, if nothing is passed there it is converted to a false value.

But if we declare it without omitempty and our end user doesn't pass this value in the input file, then our generator looks for something that was never passed, raises an error, and does not generate anything!

The fastest way for me to develop it was to set it as omitempty. Do you think we should focus more on this?

alekodu (Member):

ok... let's then leave it as it is.
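
A minimal sketch of the jq lookup described above, assuming the generated settings end up in a JSON file mounted from the ConfigMap and that the flag lives at `.settings.logging` (the file path and field name are assumptions for illustration):

    # Read the logging flag from the mounted config; '// false' falls back
    # to false when the field is missing or null, matching the
    # default-to-false behaviour described above.
    LOGGING=$(jq -r '.settings.logging // false' /config/config.json)
    echo "logging enabled: ${LOGGING}"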


salehsedghpour (Collaborator, Author):

Ok then, shall we merge it?

alekodu merged commit 2994eca into EricssonResearch:main on May 6, 2022
Successfully merging this pull request may close these issues: Support better server logging system.